Database Migration tool of choice?

Hi there! I’ve been enjoying building web applications in Erlang as it appeals to my minimalist taste.

One area I find lacking compared to other ecosystems is database migrations. I was wondering if other experienced Erlangers had any recommendations.

Ideally this tool wouldn’t be tied to a particular framework. However, any references are also appreciated. The tool doesn’t even have to be written in Erlang.


I just want to ask: how do they do this? :joy:


The most relevant example I can think of is Ecto and how it is used with the Phoenix framework.


For MySQL to Postgres migrations there are specific scripts that modify database dumps or do a direct migration between servers.

It all depends on database features you are using. Stored procedures, full text indices, and triggers will need to be done by hand.


Ah, sorry, my use of the word “migrations” is a bit misleading. I was thinking more along the lines of this feature of Ecto, where one generates schemas and changes to those schemas.

Not so much migrating data from Postgres to MySQL.


Ah, I see that on that Ecto page they are using the word migration when they seem to be defining a database schema. That is quite confusing to me; I wonder why they call it a migration.

As for databases, I tend to just write direct SQL queries, parametrized of course. We use a very thin interface layer that queries the information_schema to decide if we need to add or remove columns. In Zotonic this is done in a module’s manage_schema function.
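A minimal sketch of that idea (hypothetical table and column names, assuming Postgres, which supports DO blocks): generate DDL guarded by an information_schema lookup so replaying it on every upgrade is harmless. Here the SQL is only written to a file and printed; Zotonic runs the equivalent queries inside a transaction.

```shell
#!/bin/sh
# Hypothetical sketch: emit ALTER TABLE DDL guarded by an information_schema
# check (a Postgres DO block), so it only adds the column when it is missing.
# "users" and "email" are made-up names for illustration.
table=users
column=email

cat > guarded_add_column.sql <<EOF
DO \$\$
BEGIN
  IF NOT EXISTS (
    SELECT 1 FROM information_schema.columns
    WHERE table_name = '$table' AND column_name = '$column'
  ) THEN
    ALTER TABLE $table ADD COLUMN $column text;
  END IF;
END
\$\$;
EOF

cat guarded_add_column.sql
```

The generated file could then be fed to psql (or run from manage_schema) as often as you like.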

We use full text indices, JSON queries, and “interesting” indices that are not understood by libraries like Ecto, which have to restrict themselves to some common subset of database features.

Having said all this, did you check out GitHub - ErlyORM/boss_db: BossDB: a sharded, caching, pooling, evented ORM for Erlang ?


Scripts to evolve a database schema over time are called migrations quite commonly. Most frameworks I’m aware of name them that way.


We are discussing this quite a lot in Nova: how to make a model and Ecto kind of tool for our web framework.

If you want to check the latest work, look at the erldb_v2 branch.

We also looked at whether it is possible to use Ecto in a smooth way, but haven’t really found a good approach.

But right now we are experimenting with how we could do migrations and other database things. Any help is welcome.


I really like Diesel.

The website mostly talks about the ORM, but it has a migration tool that is entirely independent and ships as a single binary.

The only downside is that they don’t currently ship precompiled binaries: Precompiled diesel_cli binaries for major operating systems · Issue #2379 · diesel-rs/diesel · GitHub

Failing that, Flyway is good, though as a JVM application it’s a bit of a faff to run, so I mostly use it within a container.

And for tiny projects I just have a directory of SQL files and do something like this:

set -eu
for migration in migrations/*; do
    echo "> $migration"
    pgexec "$database" "$migration/up.sql"
done
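The loop above re-applies every script on each run, which only works if the scripts are safe to replay. A common refinement is to record which migrations have already been applied so each one runs exactly once; real tools keep that list in a schema_migrations table, but a file-based sketch shows the idea (the echo stands in for a pgexec call, and the migration names here are made up):

```shell
#!/bin/sh
set -eu
# Sketch: run each migration directory once, recording applied names in a
# state file. Real migration tools store this list in a schema_migrations
# table in the database itself; "apply" here is just an echo where the
# original script would call pgexec on the directory's up.sql.
mkdir -p migrations/001_create_users migrations/002_add_email
: > migrations/001_create_users/up.sql
: > migrations/002_add_email/up.sql

applied_file=applied_migrations.txt
touch "$applied_file"

for dir in migrations/*/; do
    name=$(basename "$dir")
    if grep -qx "$name" "$applied_file"; then
        continue                        # already applied, skip it
    fi
    echo "> $name"                      # pgexec "$database" "$dir/up.sql"
    echo "$name" >> "$applied_file"
done
```

Running it a second time applies nothing new, because both names are already in the state file.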

Ah, schema migration, not database migration.

Seeing the question I was thinking of database migration, which, quoting big G, is: Database migration is the process of migrating data from one or more source databases to one or more target databases by using a database migration service.

Maybe it is best to prefix all migrations with what is being migrated :slight_smile:

In Zotonic each module has an optional manage_schema function, which is called whenever the module is installed or updated. As there is often some complex initialization of new columns or tables, we perform the needed updates with SQL queries (using the information_schema to check for the presence of tables/columns/indices) inside a transaction.


Zotonic has a flexible data model with resources, categories and predicates. The SQL side of things hardly ever changes, while it is easy to add application-specific categories and define relations between your resources.

These changes to the categories and predicates are managed, allowing existing resources to get a new category or be deleted.

That is probably why we don’t have the need for tooling around this.

All Zotonic-specific of course, although the data modeling part extends easily to applications beyond simple websites.


For Rust projects, instead of Diesel I tend to prefer sqlx. Migrations are just files following the <datetimestamp>_description.sql format in the project’s ./migrations directory by default. You can either run them manually with sqlx migrate run (or revert to roll them back, if you make down files too), or, as I do, just put a static MIGRATOR: Migrator = sqlx::migrate!("./migrations"); somewhere in your project source and call it to run against the DB on initial connection (it also bakes the migration scripts into the binary, for those nice single-binary copy-and-run deployments).

However, you generally do both of the above steps, the latter for migrating the runtime database, the former for migrating the build-time database.

Yes, a build-time database.

However, you don’t need it to always exist. You can just make a build-time database and migrate it up (usually with sqlx database reset, which builds it, migrates it, etc., using the local .env file for connection information), and then use sqlx prepare to generate a JSON file of everything sqlx needs for its build-time steps (you can commit that file; it only needs updating when migrations change or you write or change SQL code in your source).

Now the reason for the build-time database is that sqlx is not an ORM, you literally write code like this:

		let result = sqlx::query!("SELECT username, x, y, power, range, health, color_index, vote FROM tanks")

This is just normal SQL for whatever database you are running it on (sqlite in this case, though I usually use postgres), and you can use all the optimizations and features and capabilities and anything you want for that database, no ORM-like restrictions, you have full power.

However, that alone wouldn’t be much more interesting than using a normal driver directly, where you’d have to do the conversions and parse out the row data manually. Not here: the type of result is a <closure:...>::Record, a unique generated type, which in this case has the fields username, x, y, power, range, health, color_index, and vote, with the types they have in the actual database. I could cast to something else by doing something like:

		let result = sqlx::query!("SELECT username, x AS \"x: u32\", y, power, range, health, color_index, vote FROM tanks")

Which in this case would cast whatever x is to u32 if it is castable, and return an error if not (like if it’s a string, or out of u32 range). That’s about the extent of its “special” work in the SQL syntax: just that extra decoration after the AS name, a double-quoted name with a colon and the Rust type to cast to (and of course a trait to implement for your own custom types too).

To pass in data you use whatever the syntax is for the database you are using, so for sqlite it’s just ?, like:

		let result = sqlx::query!(
			"INSERT INTO users (username, nickname, password_hash) VALUES (?, ?, ?)",
			username, nickname, password_hash,
		).execute(&pool).await?;

You can use it in any place you could normally use it in sqlite, again no ORM-like restrictions, you have your full SQL power.

But what if you don’t want an unnameable Record as a result, well define your own structure and use it as the ‘as’ return type:

struct Country { country: String, count: i64 }

let countries = sqlx::query_as!(Country,
        "SELECT country, COUNT(*) AS count
         FROM users
         WHERE organization = ?
         GROUP BY country",
        organization
    )
    .fetch_all(&pool) // -> Vec<Country>
    .await?;

No trait needed, as it literally just fills it in by name, casting to the type of each field in the struct if possible (or failing at compile time, of course; no runtime failures).

Of course you know the SQL “works” as it’s written, because sqlx actually compiles the SQL using the chosen driver at compile time (which is how it gets the type information and more, and which it can of course cache in the offline file via sqlx prepare).

And of course if you need just a single return value(s) and not a struct you can just use query_scalar! like so:

			let color_index = sqlx::query_scalar!("SELECT color_index FROM tanks WHERE username = ?", username)

And of course color_index here will have the type of color_index inside the database (which you could cast too, of course).

And lastly, if you need to dynamically generate SQL at runtime, just use the same functions like query_scalar and so on but without the ! (so you call the function instead of the macro). They have all the same capabilities, except you pass arguments via .bind(...) calls instead of inline, and you pass the query_as type as a generic argument that must implement an appropriate trait. You lose the compile-time guarantees with those, but you still have the full power otherwise. You can get streaming results, get results dumped into a vector (or some other container), force exactly one result or an optional one, or just execute and get a row count, among others.

And yes, of course it is fully async capable with built-in pooling (everything optional and can be done piecemeal if you so wish, but by default it’s async pooled), can start transactions, etc… etc…

This has truly become my favorite SQL library in any language; I’ve been half-tempted to port it to Elixir, as Elixir could do it as well…

/me really really loves compile-time checking of the SQL code like it is normal rust, get highlights in my IDE and all


I love sqlx, it’s such a clever and productive way to interact with a database. I fully intend to port it to Gleam.

I didn’t realise it had a migration tool, not sure how I missed that. I’ll have to check it out.
Alas, still no precompiled binaries.


It’s fairly new, only in the last couple of months.

It’s based on cargo: it’s a cargo addon, so you run cargo sqlx migrate revert or whatever. Hence cargo install sqlx-cli, lol.

You don’t need the cli tool part of it at all if you don’t care about generating the offline cache.


For Postgres I love GitHub - helium/psql-migration: A SQL migration script for Erlang, which is based on the Diesel schema migration tool. It is as simple as it gets, and the code is one Erlang file of ~300 LOC. It is pretty similar to what Ecto does, but writing raw SQL instead of a DSL. I personally love writing raw SQL files, since they can be used outside the app that contains them, they can be shared, and editors understand them.


I feel kind of old-fashioned (and just old) saying this, but I just use idempotent SQL that’s applied en masse every time the application starts. I find it’s the easiest to understand and manipulate. Part of that is driven by heavy use of SQL functions, triggers, and so on; I try to offload any data futzing I can to the DB.
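A sketch of what that style can look like (hypothetical schema, Postgres flavor; the statements are only written to a file here, not executed): every statement is written in a guarded form, so replaying the whole file at each application start is a no-op.

```shell
#!/bin/sh
# Hypothetical sketch of idempotent schema SQL: IF NOT EXISTS guards mean the
# whole file can be replayed at every application start without errors.
cat > schema.sql <<'EOF'
CREATE TABLE IF NOT EXISTS users (
    id       bigserial PRIMARY KEY,
    username text NOT NULL UNIQUE
);
ALTER TABLE users ADD COLUMN IF NOT EXISTS email text;
CREATE INDEX IF NOT EXISTS users_email_idx ON users (email);
EOF

cat schema.sql
```

Functions and triggers fit the same pattern via CREATE OR REPLACE FUNCTION, which is part of why pushing logic into the DB pairs well with this approach.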

FWIW (and it might not be much), I have a small library nxo_db that I use to manage interactions with PostgreSQL.


Compile time checking of the database code?

Wow… Yes please!


I’ve been away for a week or two, but I’m really glad to see all these different responses. Too many for me to respond to!

I knew that clever Erlang people had some solutions but it can be difficult to find them googling at times. Hence why this forum is so great!

Thanks everyone


Yep, I absolutely adore it!


I have liked it too, and I discuss its use in Adopting Erlang Kubernetes | Adopting Erlang