I am running a Django server with Nginx and Gunicorn. This server regularly fetches updates to its own code from git and applies migrations.
My question is: is there a way to make those migrations effective and usable without killing the script it is running?
I could do this way:
sudo service gunicorn restart
But then I would lose track of the script.
Is it the only way? If so, I would need to save all the states of my script into an environment file before running it, and fetch those variables when Django starts.
And where would be the best place to do so, to restart my script once Django is loaded?
You would want to do this from outside the context of gunicorn - it should be an external process, in which case you shouldn’t “lose track of the script”.
Like doing it this way?
I found this in one of your posts on another topic:
I don’t think I’d create this using a django-admin command, no - because the purpose of this script is to alter / replace the Django instance.
From what you’re describing here, I’m thinking of a bash script more along the lines of:
sudo service gunicorn stop
python manage.py migrate
sudo service gunicorn start
(Obviously with the directory names set to their proper values and any other necessary commands added.)
Thanks a lot, I will do some tests on my side.
But if I need to run some stuff from Django after it is reloaded, where inside Django should I put my script so that it runs after loading?
You’ll need to be more specific than that for me to provide an answer. Unfortunately, “some stuff” is too vague for me to address.
That’s a fair comment.
In fact, I would update the code, migrate, then reload Django so that the new model is in place.
Then, after the reload, I need to fetch the new data that fits this model and save it. So I somehow need to run a function right after Django is loaded back with the new model.
Not sure if it is clearer than before.
This is closer, at least enough to start from.
You can, in that bash script, run any management commands. If you’re looking to load new data into your models, you could run
manage.py loaddata. If you needed to run an internal function within your project, you could create a custom management command that runs it, and run that command from your script.
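For reference, a loaddata fixture is just a serialized list of objects. A minimal JSON sketch (the model and field names here are made up, not from your project):

```json
[
  {
    "model": "myapp.book",
    "pk": 1,
    "fields": {
      "title": "First book",
      "author": "Someone"
    }
  }
]
```

Saved as e.g. myapp/fixtures/books.json, it would be loaded with python manage.py loaddata books.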
The loaddata function is really something to look into indeed.
But I have a few questions, because in the examples I have seen, the pk (id) is specified.
My first problem is, in my list of imports I have some existing objects to be updated, and some to be created.
The ones to be updated are identifiable through another field (other than the pk) that is unique.
For the ones to be updated, I can preprocess the data before saving the file to find the specific pk, but is there a way to let “loaddata” do it by itself?
For the ones to be created, do I need to specify a pk manually? Or will manage.py (the database) automatically generate them?
Second problem: I have some many-to-many relationships with a through table.
For the objects created with a new pk, how can I reference them if I don’t know what the generated pk is?
Loaddata probably isn’t going to do it for you in these situations. In this case, you’re probably just going to want to create a custom management command that builds your objects as needed from the data available.
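To sketch what the core of such a management command could look like: Django’s update_or_create lets you key on your unique non-pk field, updating existing rows and letting the database generate pks for new ones. The field names below (code as the unique identifier, name as a field to update) are hypothetical, assumed for illustration:

```python
def sync_records(model, rows):
    """Create or update objects keyed on a unique non-pk field.

    update_or_create() looks the object up by the unique field, updates
    it in place if it exists, and otherwise creates it; the database
    generates the pk for new rows, so you never have to supply one.
    """
    created = updated = 0
    for row in rows:
        obj, was_created = model.objects.update_or_create(
            code=row["code"],                # unique lookup field (hypothetical name)
            defaults={"name": row["name"]},  # fields to set or update
        )
        if was_created:
            created += 1
        else:
            updated += 1
    return created, updated
```

Inside a custom management command, handle() would call this after parsing your import data, and for the through-table rows you can use the object returned by update_or_create() directly instead of looking up its pk.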