Correct practice for importing third party modules

Context: I am creating a poker app to teach myself Django. I would like to use the gamble package from PyPI for my deck of cards. (Generally, my question applies to importing something like numpy or scipy, too.)

Do I simply create a virtual environment in the root directory, activate it, and then pip install whatever I want to use (and keep track in requirements.txt)? Would I pip install Django into each separate project that I create?

My confusion stems from the fact that I currently have a folder called django with a virtual environment inside it. Somewhere in my installation and tutorial process (I could not find the instruction looking back), I think I was told to keep my virtual environment outside of my project; that environment has things like Django and psycopg2 (for PostgreSQL) pip installed. But it doesn’t make sense for two different Django projects to use the same virtual environment. Perhaps I am supposed to have a separate virtual environment for each project, also outside of the project root? Or maybe I remember incorrectly and was never told to put my virtual environment outside of the project.

My apologies if I am overthinking and this is a question I should not have. I’m starting to wonder if I have misunderstandings about virtual environments in general.

This is one of those highly opinionated questions in which you’ll get n+1 answers for every n people you ask.

So I’ll outline what I do. Off my home directory, I have two main directories involved, ve and git. The ve directory is where I have my virtual environments created. The git directory is where all my projects live.

I keep my environments separate from my projects for two reasons:

  1. I like to check how my projects behave when I upgrade components. So if I’m upgrading Django, I’ll create a new virtual environment with the new version and activate it to test my project. (This alone provides sufficient reason to keep projects separated from the virtual environments in which they run.)

  2. I do share virtual environments between projects. I do a fair amount of work on small, very limited sites that all share the same fundamental libraries. The differences between them aren’t enough to justify new virtual envs just for those differences.
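For example, a layout like that might be set up as follows (the directory names, project name, and version label here are placeholders, not Ken’s actual paths):

```shell
# Example only: environments in one tree ($BASE/ve), projects in
# another ($BASE/git). BASE stands in for the home directory.
BASE="${BASE:-$HOME}"
mkdir -p "$BASE/ve" "$BASE/git/mypokerapp"

# One environment per Django version makes upgrade testing easy:
# create a fresh env, activate it, and run the same project code.
python3 -m venv "$BASE/ve/django50"

# Activate the environment you want to run the project under:
. "$BASE/ve/django50/bin/activate"
cd "$BASE/git/mypokerapp"   # project code lives here, env lives in ve/
```

To test against a newer Django, you would create a second env (say ve/django51), install the new version there, and activate that one instead, leaving the project directory untouched.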

Ken


Here’s another of the n+1 answers.

I have one virtual environment per project, even if they’re small, because sometimes they need different requirements.

I use Python’s built-in venv module to create them, because it’s there out of the box. Typically I have the virtualenv in a folder called venv next to manage.py (and listed in .gitignore to avoid committing it).
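Concretely, that looks something like this (the project name is just an example):

```shell
# Example: the virtualenv lives in ./venv, next to manage.py.
mkdir -p myproject && cd myproject
python3 -m venv venv

# Keep the environment out of version control:
echo 'venv/' >> .gitignore

# Activate it; python and pip now resolve inside ./venv:
. venv/bin/activate
python -c 'import sys; print(sys.prefix)'
```

From here, `pip install django` (plus whatever else the project needs) goes into that env, and `pip freeze > requirements.txt` records it.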

You can put them elsewhere, and I used to do that with a setup similar to Ken’s. But after tutoring some junior developers I decided it’s better to keep virtual environments right next to the code, so one can dig up the contents easily, e.g. from inside an IDE’s file browser.


Just for my edification - you say you have a venv directory next to manage.py. Does that mean you have the actual virtual environment in a directory within that? (e.g. /project_root/venv/django) Or is venv the virtualenv root?

There is something to be said for keeping them “close” - I do see the appeal of that. Is that also how you deploy your projects, or is this a “development environment”-only layout?

Ken

venv is the actual virtualenv. python -m venv venv is the invocation I tend to use. It can lead to confusion with all the virtualenvs being called venv in the prompt. That can be customized by adding the --prompt foobar argument, but I also tend to treat terminal tabs as one-use.
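For instance (the prompt name here is a placeholder):

```shell
# Example: give the env a custom activation prompt ("(myapp)")
# instead of the default "(venv)".
mkdir -p promptdemo && cd promptdemo
python3 -m venv --prompt myapp venv
```

After `. venv/bin/activate`, the shell prompt shows (myapp) rather than (venv), which disambiguates the terminal tabs.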

Yeah, I use this in production too: if the app is deployed with manage.py at /opt/myapp/manage.py, the env will be at /opt/myapp/venv. But mostly I find myself pushing people to Heroku or other managed platforms these days, where there’s not really a concern about whether there even is a production virtualenv, let alone where it is.