Copying Old Site to Localhost Isn't Working

I’ve inherited a Django development project and am having A LOT of trouble getting it to work on my local Ubuntu 18.04 machine. The project was started in 2013 and has gone through bursts of development by various people.
I need to copy it to my local machine before I start cleaning it up, upgrading it, and deleting all the old SQL dumps that seem to be scattered everywhere, amongst other obvious clutter.
I need the working site as a reference, so I don’t want to start messing around with it on the current server.

I am pretty new to Django and only recently completed one of those 4-hour YouTube tutorials. I think I could probably start coding a fresh site and would have some idea about deploying it onto a web server, but getting an existing site with out-of-date software to run on my local machine is a whole other story.

I am trying to decide on the best way forward…

  1. Do I keep trying to get it working on my local machine by installing old software and then upgrade it (a large amount of learning about old systems and the result might just be an old and out of date instance of the project)?
    OR
  2. Do I start from zero on a fresh install of django and then rebuild it using the existing site only as a reference (it wouldn’t instantly be working and I might have to learn about some things I would otherwise not have to touch but at least I’d end up with something that works and is up to date)?
    OR
  3. Is there a simple fix that would get the existing site to run on a new virtual environment (start with what exists and modify it as necessary)?

Option 3 is my ideal if it’s possible.

The project is hosted on Digital Ocean.
** Overall System Specs
OS = Ubuntu 14.04.1 LTS
Vhosts folder contains four folders - development, staging, production and a folder named after one of the previous developers.
All vhosts appear to be accessible as sub-domains.
I have copied all the files down from the www folder.
Python 2.7.6 - Current is 3.8.5
PostgreSQL 9.3.4 - Current is 12.3
Virtualenv 1.11.4 - Current is 20.0.28

** Sub-domains/folders in the vhosts folder
Development appears to be the most current version of the project. All others are so far behind as to not really be considered of value.
There are two virtual environment folders inside the development folder. One of them appears to be a copy of the same virtual environment that is in all the other vhost folders. I can’t get “pip freeze” or “pip list” to work on them, so I’m not sure what is going on there. The other is likely the one most recently used; I couldn’t get “pip freeze” to work on it, but I could get “pip list” to work. The most relevant specs are listed below.
** Virtual Environment specs
Python 2.7.6 - Current is 3.8.5
Django 1.7 - Current is 3.0.8
psycopg2 2.4.5 - Current is 2.8.5
pip 1.5.4 - Current is 20.2

** Database Structure
There appear to be three databases, named in a sequence similar to “Filename”, “Filename1” and “Filename2”.
I have downloaded them all one at a time using the command “pg_dump -U <username> <source_database_name> -f <destination_filename>.sql” (the -U flag takes the database username, not the database name).
Through a few headaches I managed to get the database to upload to my local Postgres installation.
I modified the settings.py file in the Django project so it could access the database. When I then attempted to run the development server, the errors changed.
Funnily enough, the database name in the settings.py file on Digital Ocean does not match any of the names of the databases, yet the system runs. The username and password for the database do match. I don’t know what to make of that, except to say that I had to create a database with the same name as the one on Digital Ocean just to get the database dump to upload onto my local Postgres.

** Where the troubles are
I have activated the virtual environment that is with the project files. I have also tried creating a new virtual environment. Either way when I try “python manage.py runserver” I seem to be getting the following error:

django.core.exceptions.ImproperlyConfigured: Error loading psycopg2 module: No module named psycopg2

When I try installing psycopg2 I get the following error:

Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: psycopg2 in /usr/lib/python3/dist-packages (2.8.5)
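That “Defaulting to user installation” message, plus the `/usr/lib/python3/dist-packages` path, suggests the `pip` being run belongs to the system Python 3 rather than to the activated venv. A minimal sketch to check from inside the interpreter itself (this assumes Python 3; the old virtualenv tool on Python 2.7 sets `sys.real_prefix` instead of `sys.base_prefix`, which the fallback below covers):

```python
import sys

# In an activated virtual environment, sys.prefix points inside the venv,
# while the base prefix points at the system Python. If the two match,
# you are NOT running inside a venv.
def in_virtualenv():
    base = getattr(sys, "real_prefix", None) or sys.base_prefix
    return sys.prefix != base

print("interpreter:", sys.executable)
print("inside a virtualenv:", in_virtualenv())
```

Running this with the exact same `python` command you use for `manage.py runserver` tells you whether the venv is really active for that command.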

I’ve checked the site-packages folders in both the new and old virtual environment paths and they both seem to contain a psycopg2 folder.
On my local Ubuntu 18 system I appear to have folders for Python 2.7, 3, 3.6, 3.7 and 3.8 in the “/usr/lib/” folder.
I’ve checked for “dist-packages” folders in all of the python version folders and psycopg2 folder only exists in 3.
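Rather than inspecting `dist-packages` folders by hand, you can ask the interpreter itself what it can import and from where. This sketch assumes Python 3’s `importlib` (the old Python 2.7 venv doesn’t have it in this form), and should be run with the same interpreter that runs `manage.py`:

```python
import importlib.util

# Report where the running interpreter would load a module from,
# or None if it cannot see the module at all.
def where(module_name):
    spec = importlib.util.find_spec(module_name)
    return spec.origin if spec else None

print("psycopg2:", where("psycopg2"))
print("django:", where("django"))
```

If `psycopg2` prints `None` here but a `psycopg2` folder exists in the venv’s site-packages, the interpreter you are invoking is not the venv’s interpreter.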

Just as a test I tried:
“python3 manage.py runserver”
I got the following error.

Traceback (most recent call last):
  File "manage.py", line 8, in <module>
    from django.core.management import execute_from_command_line
ModuleNotFoundError: No module named 'django'

I should point out that I can get all newly created Django projects to run with PostgreSQL as the defined database, and their servers run and serve the default page to a browser. It’s just this imported site I am having a lot of trouble with.

If anyone can offer a solution it would be greatly appreciated.

You (potentially) are dealing with a number of different issues here.

If I were having to deal with this, I would create two separate virtual environments (“venv”). One would be built based on as much of the old system as possible, the other being the current target environment. (<opinion>If you’re not used to working with virtual environments, don’t do anything else until you are.</opinion>)

Create your venvs, then while each venv is active, use pip to install the proper set of packages for that environment. You can get a good jumpstart on this by running pip freeze on the current server in the current running environment to get the list of packages installed and the versions being used. If you save that output to a file, you can then use that file with a pip install -r <file name> command to install those versions of those packages.

Once you’ve gotten that done, then you should be able to set up your old project in the “old” venv and run it in the environment in which it normally runs.

Thanks Ken. Some very helpful suggestions there.

One of the issues I am having is that for some reason the command “pip freeze” doesn’t run on the current server. This leads me to the idea that I may need to update the version of pip running on the server before I can do anything else. Do you see any problem with that idea?

The command “pip list” works which is how I have gotten the information that I have.

I feel I am OK at working with virtual environments. Based on my learning so far, I can’t see that I am doing anything wrong with them. It occurs to me, though, that the Postgres installation is a machine-level installation, and specific older versions of Postgres cannot be installed within individual virtual environments. The version of virtualenv is also machine-level. I’ve proceeded as if this doesn’t matter. Please do correct me if I am wrong about this.

“doesn’t run” - error message or something else? If pip list works, you can always create the freeze file yourself. (It’s basically the same information but with “==” between the package name and version number)
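For example, a small script like this could do the conversion, assuming the old pip 1.5.4’s “package (version)” output format from “pip list”:

```python
import re

# Old pip versions print "pip list" lines like "Django (1.7)".
# Convert them into "Django==1.7" lines, i.e. the same format
# "pip freeze" emits and "pip install -r" consumes.
def pip_list_to_requirements(pip_list_output):
    reqs = []
    for line in pip_list_output.splitlines():
        m = re.match(r"^(\S+) \(([^)]+)\)$", line.strip())
        if m:
            reqs.append("%s==%s" % (m.group(1), m.group(2)))
    return "\n".join(reqs)

sample = """Django (1.7)
psycopg2 (2.4.5)
pip (1.5.4)"""
print(pip_list_to_requirements(sample))
# Django==1.7
# psycopg2==2.4.5
# pip==1.5.4
```

Save the result to a file and feed it to `pip install -r <file name>` in the “old” venv.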

PostgreSQL is system-level, but the Python libraries can be installed in the virtual environment. (I recommend using the psycopg2-binary package; it’s an “easier” install.)

Unless you’re doing something really odd in or with the database, I wouldn’t worry about running everything on the current version of PostgreSQL. However, you can run multiple instances of PostgreSQL if necessary - they end up listening on different ports. I just wouldn’t bother until I find a situation where it does matter.

(So if you do end up running two different PostgreSQL instances, you’ll need to know which instance is listening on which port, and modify your settings file accordingly.)
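In Django settings terms, that might look something like the fragment below. The database name, user, password and port are all hypothetical here; 5433 is just a common choice for a second instance, since the first usually takes the default 5432. The engine name is the one Django 1.7 expects:

```python
# Hypothetical example: pointing the old project's settings.py at a
# second PostgreSQL instance listening on port 5433. All names and the
# password are placeholders, not values from the actual project.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",  # Django 1.7-era engine name
        "NAME": "legacy_db",        # hypothetical database name
        "USER": "legacy_user",      # hypothetical user
        "PASSWORD": "secret",       # hypothetical password
        "HOST": "127.0.0.1",
        "PORT": "5433",             # the port of the older instance
    }
}
```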

But whatever you do, do not just copy system libraries from your old system to your new one. Do the actual package installations to make sure that everything is set up correctly.