How to manage multiple Python virtual environments in production?

I have a couple of dozen websites running on Nginx/uWSGI, and I use pipenv to manage the virtual environments. Each site's uWSGI file points to that site's virtual env. Setup is quick and easy, but what about updating the venvs? I've been deferring that detail, knowing I would have to cross that bridge at some point.

pipenv does make it easy: I could just run pipenv update on the server for each venv and then git push -u origin master to get the updated Pipfile.lock back to the repo. I think these two commands could be wrapped in a script to speed up the venv updates (a rough sketch follows), but what is standard practice? Since pipenv pins exact versions in Pipfile.lock, the pipenv documentation certainly makes it sound like venvs are intended not only for development but also for production. I would be very grateful for any thoughts on this subject.
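Something like this minimal sketch is what I have in mind (the /srv/sites layout and the commit message are just placeholders for my setup):

    #!/usr/bin/env bash
    # Hypothetical helper: update each site's venv with pipenv,
    # then commit and push the refreshed Pipfile.lock.
    set -euo pipefail

    SITES_ROOT=/srv/sites               # assumed location of the site checkouts
    for site in "$SITES_ROOT"/*/; do
        cd "$site"
        pipenv update                   # re-resolve, rewrite Pipfile.lock, sync the venv
        git add Pipfile.lock
        git commit -m "Update pinned dependencies"
        git push -u origin master
    done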

I use plain pip and pip-compile from https://pypi.org/project/pip-tools/.
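A minimal sketch of that workflow, assuming a requirements.in file that lists only your direct dependencies:

    # requirements.in holds just the top-level packages, e.g. Django and uwsgi.
    pip-compile requirements.in          # resolves and writes a pinned requirements.txt
    pip install -r requirements.txt      # installs the exact pinned versions

    # later, to refresh the pins:
    pip-compile --upgrade requirements.in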

Updating the requirements on your production server isn't really the best way to do it. It's better to compile your dependencies ahead of time and only run pip install on the server. It's also best to rebuild your virtualenvs on every deploy, which handles removing requirements as well as installing them. For a zero-downtime approach, you'll want to build into a new directory each time and switch over once it's ready (see the sketch below).
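A rough sketch of that pattern, assuming an /srv/myapp layout with a venv symlink that uWSGI points at (the paths and the reload mechanism are placeholders, not a prescription):

    #!/usr/bin/env bash
    # Hypothetical deploy step: build a fresh venv in a new directory,
    # install the pinned requirements, then swap a symlink to go live.
    set -euo pipefail

    APP=/srv/myapp                          # assumed app root
    NEW="$APP/venv-$(date +%Y%m%d%H%M%S)"   # fresh directory per deploy

    python3 -m venv "$NEW"
    "$NEW/bin/pip" install -r "$APP/requirements.txt"

    ln -sfn "$NEW" "$APP/venv"              # point the live symlink at the new venv
    # then reload uWSGI so workers pick up the new environment,
    # e.g. via touch-reload or restarting the service.

The symlink swap means the old venv stays intact until the new one is fully installed, so workers never see a half-built environment, and rolling back is just repointing the link.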


Thank you. Very helpful.