Mnesia schema migration best practices when an application is upgraded and mnesia tables need to change?

My question is: what is the best practice, or what has your experience been, when an application is upgraded and mnesia tables need to change?

Do you have some OTP process, such as a gen_server, that exists only for this purpose (a code_change handler), or do you use some other method instead?


For mnesia tables on a single node we add a step in the application callback module start/2 which uses mnesia:table_info(Table, arity) to check the table version. If it’s the old version it uses mnesia:transform_table/3 to upgrade it. This is of course all done before the application finishes starting.
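A minimal sketch of that start/2 step, assuming a hypothetical table foo whose record grows from {foo, Key, Val} to {foo, Key, Val, Extra} (the table name, field names and default value are made up for illustration):

%% Run before the supervision tree starts, as described above.
start(_Type, _Args) ->
    ok = maybe_upgrade_foo(),
    my_app_sup:start_link().

maybe_upgrade_foo() ->
    case mnesia:table_info(foo, arity) of
        3 ->  %% old layout: {foo, Key, Val}
            {atomic, ok} = mnesia:transform_table(
                foo,
                fun({foo, Key, Val}) ->
                        {foo, Key, Val, undefined}  %% default for the new field
                end,
                [key, val, extra]),
            ok;
        4 ->  %% already the new layout, nothing to do
            ok
    end.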

For distributed applications you need to stop the cluster and start one node first to do the upgrade. If you need to do an in-service upgrade you need considerably more coordination.

Luckily OTP provides the tools to do live upgrades. An application upgrade file (.appup) defines how an application is upgraded in a running system. You can have all the nodes of a cluster synced during the upgrade, so you can provide code to do whatever is required, such as suspending use of a table while it is transformed.
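A hedged sketch of such an .appup, assuming a hypothetical my_table_owner gen_server that owns the table (the version strings and Extra terms are placeholders). The {update, Module, {advanced, Extra}} instruction suspends the process, calls its code_change/3 (where a mnesia:transform_table call could live), and resumes it:

%% my_app.appup -- hypothetical versions and module name
{"2.0.0",
 [{"1.0.0", [{update, my_table_owner, {advanced, old_to_new}}]}],   %% upgrade
 [{"1.0.0", [{update, my_table_owner, {advanced, new_to_old}}]}]}.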


BTW: Is anyone else inclined to create tables as key-value pairs where the value is a map()?

mnesia:create_table(foo, [])
mnesia:write({foo, Key, #{bar => 42}})

This avoids the problem above: you can add or delete map keys with no change to the schema. You would need to use record fields for any secondary indexes. Match specifications (ets:match_spec()) also work effectively with maps.
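As a small illustration of that no-transform upgrade path (the table, key and field names are invented here), a later release can simply merge new keys into the stored map:

ok = mnesia:activity(transaction,
        fun() ->
            [{foo, Key, Val}] = mnesia:read(foo, some_key),
            %% the new "field" is just a new map key; no transform_table needed
            mnesia:write({foo, Key, Val#{baz => true}})
        end).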
