Package everything into a runnable archive?

This came up at work, and I wasn't super sure what the preferred artifact to build out of a Django project is.

Ideally I would have the project and all its dependencies wrapped into a single archive that I can then pass to an ASGI server as a parameter to run it. That would reduce the installation overhead on a host to Python + uvicorn, which sounds pretty great.

Is something like that possible? Is there anything I should keep an eye on that goes in that direction?
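
To make that concrete, here is a rough sketch (using only the standard library's zipapp module plus pip; every name in it is a placeholder) of the kind of build step I have in mind:

```python
# Rough sketch: bundle the project and its (pure-Python) dependencies into a
# single runnable .pyz archive. All names here are placeholders.
import subprocess
import sys
import zipapp

BUILD_DIR = "build/bundle"

# Install the project and its dependencies into a staging directory.
subprocess.run(
    [sys.executable, "-m", "pip", "install", ".", "--target", BUILD_DIR],
    check=True,
)

# Wrap the staging directory into one archive with a shebang line.
zipapp.create_archive(
    BUILD_DIR,
    target="myproject.pyz",
    interpreter="/usr/bin/env python3",
    # "main" would point at a small callable that starts the ASGI server,
    # e.g. something that calls uvicorn.run("myproject.asgi:application").
    main="myproject.serve:main",
)
```

For pure-Python dependencies you could even put the archive on PYTHONPATH and point an externally installed uvicorn at the app inside it; I suspect compiled extensions are where it gets hairy.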

There are a couple of practical difficulties in trying to do something like that - most of which deal with paths, and could be worked around with some creative scripting.

However, the biggest problem I can see is the close relationship between binary libraries and the version of Python (and the compiler used to build it) being run. That’s going to create dependencies between those two components that I envision being extremely difficult to manage.

Now, if you include both Python and uvicorn in your packaging mindset, what you've got is a solved problem - Containers! You end up with a single artifact (the container) which can then be copied to and run in any environment supporting it.

OK. I was looking for a way to do a Python deployment without containerization.

If the only limitation is the Python (minor?) version, that is something that can be agreed on by both sides, right? My Pipfile specifies the Python version, and installing that exact version on the running host should be fairly easy.

The thing that I think could be more of an issue is any package that relies on natively compiled extensions, but even there, maybe you could work with fat libraries for macOS and Linux or something?

That's actually what I was referring to when I wrote my response. Those compiled libraries are extremely sensitive to the environment in which they're being run. Frequently, you need to compile those extensions with the "dev" versions of Python (and perhaps other libraries) available at compile time.

Yes, it's theoretically doable - but I can only imagine it's going to be really difficult to manage at scale. (I would imagine that managing your upgrade cycle could be, uhhh, interesting.) The point is that you're introducing a dependency between the runtime environment (Python and compiled libraries) and your code that doesn't need to exist - creating just one more thing to worry about in your deployment environment.

For example, assume you use 3rd party library xyz. It's not part of a typical / standard Python install. So you figure out how to include that in your deployment artifact, and you're good to go. A new version of Python is released. Now, instead of just upgrading xyz on your server for that specific version of Python, you need to go back and upgrade all your deployment artifacts that use xyz - and keep them separate from your artifacts that get deployed to systems not using that newer version of Python.

Ok, so you'll just upgrade Python everywhere? Fine, then that implies that you're going to upgrade all of your artifacts to work with the new version of Python across all your platforms. That's introducing a lot more work (and the corresponding coordination efforts and risk) for a single upgrade. This also isn't something that can be addressed with fat libraries, unless you've figured out how to build libraries for versions that don't exist yet.
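
If you want to see that coupling directly, here's a tiny stdlib-only illustration (nothing project-specific) of the values an interpreter bakes into its compiled extensions - the first two change with every minor release of Python, the third with the platform:

```python
# The values CPython bakes into compiled-extension filenames and wheel tags.
# A binary built against one set of these generally will not load under another.
import sys
import sysconfig

# e.g. ".cpython-312-x86_64-linux-gnu.so" -- changes every minor release
print("extension suffix:", sysconfig.get_config_var("EXT_SUFFIX"))

# e.g. "cpython-312" -- the interpreter/ABI tag
print("cache tag:", sys.implementation.cache_tag)

# e.g. "linux-x86_64" or "macosx-14.0-arm64" -- the platform half
print("platform:", sysconfig.get_platform())
```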

And that's the problem that containers solve. They address the fact that your Python code doesn't exist independently of the Python runtime and all the ancillary libraries typically used, by treating the entire suite of executables and libraries as the single deployable unit.

I did find something like: GitHub - spotify/dh-virtualenv: Python virtualenvs in Debian packages

Which seems like it's kinda in the same realm as what I'm asking for. But I guess you're right, and it doesn't make a lot of sense to pursue this.

I don't understand how any of this follows. A new version of Python is not something that I need to install in this thought experiment. Even if I did, it would not break all existing Python code (probably none of it at all).

We're not talking about Python code - we're talking about compiled extensions (such as numpy). New Python versions break compiled libraries - they need to be compiled for the specific version being run. So let's use numpy as an example. You've got two situations you could be facing.

  1. You install the OS-native numpy package - which is contrary to the original premise of a single deployment artifact.
  2. You add numpy to your deployment artifact, in which case you've tied your artifact to the specific version of Python being run on that system (see the sketch just below).
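
A quick way to see the coupling in option 2 (this sketch uses the third-party packaging library, and the wheel filename is just an example of the naming convention, not any particular file you'd have):

```python
# Check whether a bundled wheel (a numpy build, say) matches the interpreter
# that will run it. The "cp311" / "manylinux" parts of the filename encode the
# CPython version and platform the binary was built for.
# Requires the third-party "packaging" library.
from packaging.tags import sys_tags
from packaging.utils import parse_wheel_filename

# Illustrative filename following the wheel naming convention.
wheel = "numpy-1.26.4-cp311-cp311-manylinux_2_17_x86_64.whl"

_, _, _, wheel_tags = parse_wheel_filename(wheel)
supported = set(sys_tags())  # every tag the running interpreter accepts

if wheel_tags & supported:
    print("compatible with this interpreter")
else:
    print("not compatible - the artifact is tied to a different Python/platform")
```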

Obviously, this may not be an issue for your particular situation. You may come up with something that works for you without any issues at all. It’s when you’re looking at this becoming a more general solution that these sorts of things crop up.

There are some tools that make this possible. As Ken has noted, there are a large number of caveats when going this route.

I deployed my last Django project to a Digital Ocean machine that I managed, and used Shiv (from LinkedIn) to package my app.

I no longer work on that project, but you’re welcome to look at the code to see what I did. Here’s a link to the script that built the Python package out of my Django project: https://github.com/mblayman/conductor/blob/master/package.sh
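
If you don't want to dig through the script, a typical shiv invocation looks roughly like this (this is not what package.sh does verbatim - the project name and entry point below are placeholders):

```python
# Rough sketch of a shiv build, driven from Python for illustration;
# on a real project this is usually a one-line shell command.
import subprocess

subprocess.run(
    [
        "shiv",
        "--compressed",
        "-p", "/usr/bin/env python3",  # shebang for the resulting archive
        "-o", "myproject.pyz",         # the single-file deployment artifact
        "-e", "myproject.serve:main",  # placeholder entry point that starts the server
        ".",                           # handed through to pip: install the project and its deps
    ],
    check=True,
)
```

The resulting .pyz runs on its own (./myproject.pyz), with all the caveats about matching Python versions and compiled extensions that Ken described.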

That's cool to see, and it's nice that my question was not as far off base as I had feared.