Django migrations on multiple Kubernetes pods

I’m in the process of migrating my Django application from VMs to Kubernetes.

Currently I’m stuck on running migrations - when running the same application on multiple pods, can I also run migrate on every pod (e.g. in an init container)?
Is there a possibility that one pod starts migrating the database and then a second pod also starts migrating the partially migrated database, or does Django use some kind of internal lock to determine whether a database migration is already in progress?

FWIW: Our database is Oracle


Our situation is somewhat similar, except that we don’t use Kubernetes. We solved it by creating a separate container that only runs migrate. (It’s the same image, just a different command.) That container runs only once.
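A minimal sketch of that arrangement, assuming Docker Compose - the image name and service commands here are made up for illustration:

```yaml
# Same application image for both services; only the command differs.
# The "migrate" service runs once, applies migrations, and exits.
services:
  web:
    image: myapp:latest
    command: ["gunicorn", "myproject.wsgi"]
  migrate:
    image: myapp:latest
    command: ["python", "manage.py", "migrate", "--no-input"]
```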

Django does keep track of which migrations have been applied (in its django_migrations table), but I’m not aware of any locking on the database side at a more global level. If the database doesn’t support “schema locks”, it would have to try to lock tables that don’t yet exist - or acquire locks on tables that might already have been dropped.

Ahh, a hint - from the docs on Transactions on the Migrations page:

> On databases that support DDL transactions (SQLite and PostgreSQL), all migration operations will run inside a single transaction by default. In contrast, if a database doesn’t support DDL transactions (e.g. MySQL, Oracle) then all operations will run without a transaction.

On the surface I’d say it doesn’t look good, but I’m no expert on migrations.

I too am unsure whether it’s safe for concurrency. However, I can suggest using a Job to kick off migrations separately from your deployment pods, as @KenWhitesell suggested.
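A rough sketch of such a Job - the image name is an assumption; it would be the same image your Deployment uses:

```yaml
# One-off Kubernetes Job that runs migrations before (or alongside) a rollout.
apiVersion: batch/v1
kind: Job
metadata:
  name: django-migrate
spec:
  backoffLimit: 2            # retry a couple of times on failure
  template:
    spec:
      restartPolicy: Never   # don't restart the pod itself; let the Job retry
      containers:
        - name: migrate
          image: myapp:latest   # same image as the web pods
          command: ["python", "manage.py", "migrate", "--no-input"]
```

Since only one Job pod runs the command, you avoid the concurrent-migrate race entirely, though you still need to sequence the Job before the new Deployment rollout (e.g. from your CI/CD pipeline).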

(Side note: this topic more appropriately belongs in the “Using Django” category, not the “Django Internals” category. You can change it by clicking on the pencil next to the title and selecting the proper category.)
