HACKER Q&A
📣 doganugurlu

What's wrong/right with Postgres migrations?


Background:

- About 9 months ago we burned 2 separate days handling migrations on Supabase. A change in the Supabase CLI version and/or the local Docker version made it impossible for my teammate to bring up the db locally. Migrations couldn't be applied, and a fresh pull didn't work either. I thought this was a fluke.

- Working with a new team/stack (flask, alembic, postgres). Twice in the last month I had to change the `alembic_version` in the db just to get the migrations to apply. I think we found ourselves in this situation because we rewrote the history by having the new migration point to a previous revision (HEAD^) instead of the latest one (HEAD). Not sure what our motivation was.

- What am I / are we doing wrong?
- What's the right/robust way to do this?
- How are teams of size >2 handling this?
- Isn't this scary?


  👤 konha Accepted Answer ✓
> How are teams of size >2 handling this?

A directory with .sql files, each starting with a number. Each file contains a set of changes to the db. The db has a table recording the last number that was applied. To migrate your db you check whether you have files with a number higher than the one in the db, then apply them in order. That's it.

Sounds like you are working in a way that is not intended by your tool / framework.


👤 ianpurton
I use dbmate and for a deployment I package up the migration files into a docker container which runs and then applies the changes.

Firstly I use a devcontainer for development so I know my versions line up. dbmate uses .sql files which also makes things a lot easier.

You can see my setup here https://github.com/bionic-gpt/bionic-gpt

Have a look at the CONTRIBUTING.md to get an idea of the dev workflow.


👤 sethammons
> because we rewrote the history

Don't rewrite shared history.

As for how migrations are run: at my last place, each team/service had their own data store. Numbered sql files and forward-compatible changes only. Sometimes this meant application code would have to write to two locations, and a later update would change the read location.

At my current gig, everything is managed by the Django ORM. Great when you have a single db; it sucks when you scale out to sharded tables and sharded db instances, and it's a pain when migrating to new data stores.


👤 sergiotapia
I never experienced anything like this in Rails or Phoenix (Ecto) land.

The only time I had problems with database migrations was in javascript with Prisma. But that was because we had a local dev dataset, and the CEO had gone into past migration files and edited them manually. Prisma exploded every time and it was a pain to fix.

It sounds like your pains are also coming from people manually editing past migration files and committing them. Don't do that!


👤 ddorian43
You can merge heads in alembic, nothing special. You just need tests that run your migrations, and then it's "just" like a code conflict when merging.
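For reference, this is the usual CLI sequence when two branches each added a migration and the history ended up with two heads (commands from the alembic CLI; the `-m` message is just an example):

```shell
alembic heads                              # lists the divergent head revisions
alembic merge heads -m "merge branches"    # generates a merge revision
alembic upgrade head                       # history has a single head again
```

This is the supported alternative to hand-editing `down_revision` pointers or the `alembic_version` table, which is what tends to cause the problems described in the question.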