Making Deployments Less step-ful

Hi y’all,

I helped out with a Django Girls workshop in Tokyo yesterday. It all went really well thanks to both the straightforwardness of Django itself and the hard work of the DG team to make such good teaching materials!

There is still one thing that bugs me a lot, though: deployment. Even as someone who has brought up a lot of sites, I feel like when you have vanilla Django, getting something deployed involves a lot of steps and a lot of places where you can make subtle mistakes.

Warning: this is more spitballing some ideas than anything.

Right now, if you want to get a website up and running on a remote server you need to:

  • set up a virtualenv over there/get Django etc installed
  • set up your WSGI server
  • set up static file serving
  • set up your WSGI->HTTP server (maybe not necessary with gunicorn)
  • do migrations and whatnot
  • also, set up domain stuff
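Spelled out as commands, the steps above look roughly like this. Every path, name, and flag here is illustrative, not a recommendation:

```shell
# Hypothetical manual deployment on a fresh server (gunicorn behind nginx).
# All paths and names below are made up for illustration.

# 1. virtualenv + Django etc. installed
python3 -m venv /srv/myblog/venv
/srv/myblog/venv/bin/pip install django gunicorn

# 2. WSGI server (usually kept alive by supervisor/systemd, see below)
/srv/myblog/venv/bin/gunicorn myblog.wsgi:application --bind 127.0.0.1:8000

# 3. static file serving: collect assets, then point nginx at the directory
/srv/myblog/venv/bin/python manage.py collectstatic --noinput

# 4. migrations and whatnot
/srv/myblog/venv/bin/python manage.py migrate

# 5. domain stuff (DNS, certificates) happens outside the box entirely
```

Each of those steps is a chance to typo a path or forget a flag, which is the problem.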

I’m going to think about the usual “simple” case for a bit. Someone making a blog with Django.

They are probably well served with gunicorn + nginx (really it would be nice for a WSGI addendum to allow for pointing to static files that don’t eat up a web worker but…). They are probably running on some sort of Linux OS where you could easily get supervisor running.

It’s also likely that a simple sqlite file would be sufficient.

In a magical world, there would be a management command that generates the right sort of configuration files to run your Django application. Maybe a first management command (makedeploymentconf) that you run on your own machine, where you get prompted for some stuff. You would say “please use this folder path on the deployment machine to store anything I might need”.

Then a second script would be generated that, when run on the server, would:

  • set up a virtual env in the path provided in the first command
  • check that (for example) you have supervisor + nginx installed
  • offer to symlink the configuration scripts for you to those places (and have log files pointed into the folder you specified in the first command)
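To make the idea concrete, here is a rough sketch of what the first command could emit, stripped of Django management-command plumbing. Nothing like this exists in Django; every path, template, and function name below is invented for illustration:

```python
# Sketch of the hypothetical "makedeploymentconf" idea: prompt for a base
# folder on the deployment machine, then write out config files locally
# that a second script would symlink into place on the server.
from pathlib import Path

GUNICORN_CONF = """\
# generated by makedeploymentconf (hypothetical)
bind = "unix:{base}/run/gunicorn.sock"
accesslog = "{base}/logs/access.log"
errorlog = "{base}/logs/error.log"
"""

NGINX_SITE = """\
# generated by makedeploymentconf -- symlink into /etc/nginx/sites-enabled
server {{
    server_name {domain};
    location /static/ {{ alias {base}/static/; }}
    location / {{ proxy_pass http://unix:{base}/run/gunicorn.sock; }}
}}
"""

def make_deployment_conf(base: str, domain: str, out_dir: str = "deploy") -> list[str]:
    """Write config files into out_dir, pointed at `base` on the server."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    written = []
    for name, text in [
        ("gunicorn.conf.py", GUNICORN_CONF.format(base=base)),
        ("nginx.conf", NGINX_SITE.format(base=base, domain=domain)),
    ]:
        (out / name).write_text(text)
        written.append(str(out / name))
    return written
```

The point is that everything the user is prompted for (the base folder, the domain) gets baked into files they can inspect before the second script symlinks them into place.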

In an even more magical world, it would be cool if you could point something like supervisor at your Django app folder and have it do some capability discovery.

There is “just use Docker”, which I kinda dislike because it’s really heavy and opaque, especially once you have stuff working on your machine.

There is also stuff like “just use Heroku”, which is a bit more legitimate. Heroku has this library where it autoconfigures your settings file with one line. Bit magical but gets the job done.


Yesterday, one person (a designer who was learning Django to figure out stuff on the backend) asked “for my job I just send stuff via FTP, is that how I do it with this Django project?” It would be awesome if we could somehow make that the case. I think asking people to install something globally from a package repo usually is fine (virtualenvs are harder because you have to do a bunch of path management and the like).

I am motivated to figure out how we could improve this stuff, but don’t know where to direct this energy ATM. I would love to hear if anyone else here has any feelings/ideas in this space.

To remove the static asset setup and the need to run collectstatic, I normally use WhiteNoise at the beginning of projects, using WHITENOISE_USE_FINDERS = True: https://whitenoise.readthedocs.io/en/stable/
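For anyone who hasn’t used it, this is roughly the settings.py fragment the WhiteNoise docs describe (the exact middleware path may vary with your installed version, so check the docs):

```python
# settings.py fragment for the WhiteNoise setup mentioned above.
# The docs recommend placing the middleware right after SecurityMiddleware.
MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "whitenoise.middleware.WhiteNoiseMiddleware",
    # ... the rest of the default middleware ...
]

# Serve assets straight from the static file finders, skipping collectstatic.
# Handy early in a project; the docs suggest switching to the regular
# collectstatic flow for production.
WHITENOISE_USE_FINDERS = True
```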

“Just use Heroku” - or Divio ( https://www.divio.com/ - Django-specific PaaS) or any other PaaS, is what I normally recommend too. It’s the same ease as “just send stuff via FTP” but with more control over the steps. Yes they have the Rails-ish philosophy of “convention over configuration” but to be fair most applications don’t need to deviate too far from the normal path.

I think asking people to install something globally from a package repo usually is fine (virtualenvs are harder because you have to do a bunch of path management and the like).

This is normally not fine, since many Linuxes use their globally installed Python for system tooling, so you don’t have much control over the dependencies. Although it takes more steps, it’s best to always use a virtualenv on servers or Docker images.

If you don’t want to go with a PaaS, there is mod_wsgi-express. It’s a pip-installable Python package that provides a Django management command (python manage.py runmodwsgi) which runs a properly configured Apache2.

With that approach you replace the steps from “set up your WSGI server” through “set up your WSGI->HTTP server” with “install mod_wsgi-express and run it”.
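Going from memory of the mod_wsgi docs, the session looks something like this (the pip install step compiles against Apache, so the Apache dev headers need to be present; “myblog” is a placeholder project name):

```shell
# Needs Apache + dev headers available at build time (e.g. apache2-dev on Debian/Ubuntu)
pip install mod_wsgi

# Either run the standalone server directly against your WSGI file...
mod_wsgi-express start-server myblog/wsgi.py

# ...or add "mod_wsgi.server" to INSTALLED_APPS and use the management command:
python manage.py runmodwsgi
```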


Ahh, sorry, what I meant by this is more with regard to system-level packages (e.g. apt-get). For example, when you install Postgres on most systems it will set up the service so that Postgres is actually running all the time.

I think that’s part of the problem with initial setup (especially for beginners). You develop locally with runserver, but deployment requires a program that is always running. So asking people to install supervisor via their OS package manager will likely lead to fewer problems than asking them to manually configure systemd or other init systems.

100% agreed on needing to use a virtual environment for running the actual Python; I was merely referring to the difficulty of getting virtual environments up and running (or rather, the number of ways in which you can make a mistake).

Sadly my answer to this at the moment is “just use Docker” - which is not ideal, but it eliminates a lot of the variability between systems that leads to needing so many steps (do you have rpm or apt-get? Do you have Python 2 or 3? etc.)

The “just send things over FTP” model that e.g. PHP enjoyed in its heyday is lovely, but it sort of only happened because everything ran the same way. I’d probably just point beginners towards Heroku or a similar service at the moment, given that it offers a similar experience in level-of-effort.

I’m not sure if there’s something Django can do here - we have no control over the environments people run, and even if we shipped a command that set everything up, we’d have to maintain it to run on 6 or 7 different Linux versions and distros, which would be a massive pain. Things like apt-get making Postgres run are great, but that’s maintained by the Debian/Ubuntu team, not the Postgres team, so it makes writing that stuff more reasonable (as it’s targeted so precisely).


I have been using pythonanywhere.com and have found them to be very good. They are very responsive to queries. So yeah - they are another option.

Thanks for piping in everyone!

Part of me agrees on this. There’s a lot of complexity going on. OTOH, if we don’t even really have a wishlist of what we want, people who might have control over the environment won’t have a clear path forward if they want to make stuff easier. Docker is a good example of this. The tool wasn’t made by Heroku or Amazon, but it was made (partly) to solve deployment issues and hosts could then hook into this.

And yeah, we can’t really capture everything. I do think it would be less of an issue to have a “blessed WSGI setup for simple deployments”, though. And maybe a default configuration for supervisor.
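For what it’s worth, the supervisor part of that default configuration could be as small as something like this. Every path and name below is invented for illustration, and the gunicorn choice is just the “blessed WSGI server” stand-in:

```ini
; Sketch of a default supervisor program block for a Django site.
; All paths are hypothetical; logs live under the folder the user chose.
[program:myblog]
command=/srv/myblog/venv/bin/gunicorn myblog.wsgi:application --bind unix:/srv/myblog/run/gunicorn.sock
directory=/srv/myblog
user=www-data
autostart=true
autorestart=true
stdout_logfile=/srv/myblog/logs/gunicorn.log
redirect_stderr=true
```

That is the kind of file a makedeploymentconf-style command could generate and offer to symlink into /etc/supervisor/conf.d/.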

There are also upstream difficulties with Python packaging in general that aren’t really Django-specific, so… yeah. Not really something that we could directly solve. But having the wishlist would help that stuff as well.

Maybe I just need to experiment a bit with a third-party package (like what mod_wsgi is doing). It’s not like I need for stuff to be in core. But a Go-like future where someone can pull in vanilla Django and then end up with a straightforward deployment would be excellent (would maintaining an “official” Dockerfile make sense?)

I actually tried them through this process. It was very nice, though I hit a couple of issues. I will try to get my feedback over to them.