How do you automate deploys to a physical server/VM?

You could say I’m coming from the Ruby world, and there you have mina and capistrano. With them you basically do:

$ mina deploy

and by the time it finishes, the new release is running on the server. Of course you have a script where you describe what’s to be done on the server.

They both provide the following directory layout:

.
├── current -> releases/2
├── releases
│   ├── 1
│   │   └── public
│   │       └── uploads -> ../../../shared/public/uploads
│   └── 2
│       └── public
│           └── uploads -> ../../../shared/public/uploads
└── shared
    └── public
        └── uploads

The shared directory stores non-code files (user uploads, logs, node_modules, anything you want to keep as a single copy shared between releases). With this layout, you build a new release on the server, point current at it, and restart the web server.
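
Pointing current at the new release can even be done atomically, so the site never briefly points at nothing. A minimal sketch of that swap in Python, assuming the layout above with releases/2 being the new release:

import os

new_release = 'releases/2'
os.symlink(new_release, 'current.tmp')  # create the link under a temporary name
os.replace('current.tmp', 'current')    # rename() is atomic, so current never goes missing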

So I created a fabfile.py:

from shlex import quote

from fabric.api import run, env, settings, cd, prefix

if not env.hosts:
    env.hosts = ['site@example.com']

def deploy(branch='master'):
    try:
        with settings(warn_only=True):
            deploy_lock_exists = run('test -e ~/deploy.lock')
        if deploy_lock_exists.succeeded:
            print('deploy.lock exists, exiting...')
            return
        run('touch ~/deploy.lock')

        run('mkdir -p site/releases site/shared/{node_modules,logs}')
        timestamp = run('date +%Y%m%d-%H%M%S')
        release_dir = 'site/releases/' + quote(timestamp)
        run('git clone -q -b ' + quote(branch)
            + ' git@github.com:x-yuri/site.git ' + quote(release_dir))
        with cd(release_dir):
            run('ln -s ../../shared/node_modules node_modules')
            run('ln -s ../../shared/logs logs')
            run('ln -s ../../../../shared/settings_local.py site/settings/local.py')
            run('ln -s ../../../shared/secretkey.txt site/secretkey.txt')
            run('ln -s ../../../shared/settings_secret.py site/settings_secret.py')

            run('~/site/env/bin/pip install -qr requirements.txt')
            run('yarn')
            with prefix('. ~/site/env/bin/activate'):
                with settings(warn_only=True):
                    secret_exists = run('test -e ~/site/shared/secretkey.txt')
                if not secret_exists.succeeded:
                    run("python -c '"
                        "from django.core.management.utils import get_random_secret_key;"
                        "print(get_random_secret_key());"
                    "' > ~/site/shared/secretkey.txt")

                run('./manage.sh migrate')

                run('./node_modules/.bin/webpack')
                run('./manage.sh collectstatic --no-input')
        with cd('site/releases'):
            run(r"ls | head -n -5 | xargs -rI{} -d'\n' rm -rf {}")
        with cd('site'):
            run('ln -nsf releases/"$(ls releases | tail -n 1)" current')
        restart_uwsgi()
    finally:
        run('rm -f ~/deploy.lock')

def restart_uwsgi():
    # uWSGI reloads the app when its vassal .ini file is touched (emperor mode)
    run('touch /etc/uwsgi.d/site.ini')

I could probably split it into more functions, but anyway, compared to a similar mina config I have to take care of relinking current, linking shared directories and files, the deploy lock, and removing old releases myself. So I think I’m doing something wrong here. There must be a better way, and I hope you can point me in the right direction.

The information on this is pretty scarce. I think I saw a suggestion to build a wheel and install it on the server. Let me be frank here: that seems weird to me. But anyway, how do I tell it where the data is? Environment variables? If possible, I’d like more specific instructions.
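
If I went that route, I suppose the app would read machine-specific paths from the environment. A sketch of what I have in mind (DATA_DIR is a name I just made up, not something Django provides):

import os
from pathlib import Path

# hypothetical settings.py fragment: locate the data via environment variables
DATA_DIR = Path(os.environ.get('DATA_DIR', '/srv/site/data'))
MEDIA_ROOT = DATA_DIR / 'uploads'
STATIC_ROOT = DATA_DIR / 'static'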

The other option I’m considering is using mina + docker. The mina config would basically be:

require 'mina/git'
require 'mina/deploy'

set :repository, 'git@github.com:x-yuri/site.git'
set :branch, ENV.fetch('BRANCH', 'master')
set :user, 'site'
set :domain, 'example.com'
set :deploy_to, '/home/site/app'

task :deploy do
  deploy do
    invoke :'git:clone'
    command 'docker-compose -p "$USER" -f docker-compose-production.yml build'
    invoke :'deploy:cleanup'
    on :launch do
      command 'docker-compose -p "$USER" -f docker-compose-production.yml up -d'
    end
  end
end

This basically tells mina to deliver the source code to the server and run docker-compose build && docker-compose up. Plus I need a Dockerfile and a docker-compose.yml. That is probably the simplest way for me, but I’d like to know the options.

A tool I use and think is pretty tops is Dokku. Long story short, push your repo to your Dokku host and you’ll get a deployed Django site (or anything else for that matter, as long as it runs in Docker).

For example, if I want to push my development branch for deployment, I run:

git push dev_server development:master

Or if I want to push my master branch to prod:

git push prod master

For serving static files you have a couple of options. Whitenoise is a package which takes care of a lot of the work by serving static files from within your app. Remember you’ll still have to run python manage.py collectstatic. If you commit your static files to your git repo, then you can push them straight to Dokku and let Whitenoise handle the rest.
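
If memory serves, the Whitenoise setup is just a couple of lines in settings.py, roughly the following (double-check the Whitenoise docs for your version):

# settings.py (fragment); BASE_DIR is the usual Django project root
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'whitenoise.middleware.WhiteNoiseMiddleware',  # right after SecurityMiddleware
    # ...the rest of your middleware...
]
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'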

Dokku works only with Dockerfiles, not docker-compose. So if you wanted Django + Postgres then you would need an instance of Postgres running separately. Dokku has a plugin to run a Postgres instance for you.

There are quite a few good tutorials on Dokku and Django on the web. Here’s a reasonably comprehensive blog post: https://www.stavros.io/posts/deploy-django-dokku/

The learning curve for Dokku is short and after that it’s just fun and games (for me, at least).

Of course, this is just my opinion and there are other ways of doing automated deployments, but I think this is the easiest one if you’re on a budget. If money is no concern, try Heroku.


Thanks @conor, that sounds interesting. Gotta give it a try. But on this server I already have a couple of docker containers running. Actually, there’s nginx on the host that’s responsible for a couple of sites. For some domains, requests are passed to nginx-proxy + letsencrypt-nginx-proxy-companion, which proxy them to another bunch of sites. And to make Dokku work I’d need to proxy requests to Dokku as well. Meaning, do you think installing Dokku could break something?

Also, I’d like to learn about other options, to make a more educated guess. Like, what other approaches do Python developers take?

fabric seems like it’s missing a couple of features. I mean, it’s naive to git clone (the 1st time) or git pull (the 2nd+ times) in one directory, build the project there, and restart the web server, isn’t it? It’ll probably interfere with the running site. So at the very least you need 2 directories (build and release). And you might want to keep a number of previous releases. It’s not too big of a deal, but it makes me wonder why that kind of behavior doesn’t come out of the box. Maybe such a setup is rare for some reason.

I’m also curious about the “build a wheel” approach, if anybody takes it. And generally, what approaches are common in the Python world? Which ones did you take? Why didn’t they work out?

If you have Nginx already running on a server, then I would not start playing with Dokku as Dokku relies heavily on Nginx to do the proxying to the docker instances. I’d only recommend playing with Dokku on a system which isn’t already running Nginx for production purposes. Safety first!

On the subject of Nginx proxy and Let’s Encrypt, Dokku offers a Letsencrypt plugin which takes care of your certs. It’s quite nice and I use it. It will run the cert renewal process every two months if you ask it to. That basically means your Dokku application flow looks something like this:

NGINX -> TCP:443 -> Cert from LetsEncrypt -> Docker Container port 5000 (gunicorn) -> Your Django App.

I’m not familiar with nginx-proxy + letsencrypt-nginx-proxy-companion but from the brief glance I had, it very much looks like what Dokku is doing. From what I can tell, Dokku is just a neat, containerized solution which does the whole HTTP application flow thing nicely by using some smarts and git. I believe it is just a couple hundred lines of Bash script. You can achieve the exact same thing by hand, and to be honest that’s kind of how I’ve always done these sorts of deployments until I stumbled across Dokku.

Last thing I’ll say before everyone gets tired of reading the word Dokku is that it works very well for me, but I can’t say that it’s the right solution for everyone. Far from it. But if you do fancy giving it a whirl, do so on a fresh install.

The only other experience I have deploying Django is the good old-fashioned way, that is, deploying one’s code via SCP or cloning/pulling from Git and manually running gunicorn + apache/nginx. So I don’t think I can be of much help to you here, and it’s probably best to get some input from others who have more experience doing it the way you mention.

Good luck with it and I’d love to hear what you come up with!

Yeah, I decided not to risk installing Dokku on this server this time, and to go with the docker solution. Actually, the project I’m working on right now is not a Django project; the fabfile above is from another project (a Django one). It was written for Fabric 1, so I rewrote it for Fabric 2:

from os.path import join
from shlex import quote as q

from invoke import Exit
from fabric import task

repo = 'GIT_URL'
host = 'USER@HOST'
domain = 'DOMAIN'
keep_releases = N

@task(hosts=[host])
def deploy(c):
    try:
        if c.run('test -e deploy.lock', warn=True):
            raise Exit('deploy.lock exists')
        c.run('touch deploy.lock')

        data = {
            'repo': q(repo),
            'domain': q(domain),
            'keep_releases': q(str(keep_releases)),
        }

        # create app/releases
        c.run('mkdir -p app/releases')

        timestamp = c.run('date +%Y%m%d-%H%M%S', hide='both')
        data['release_dir'] = join('app/releases', timestamp.stdout.strip())

        # clone the repository
        c.run('git clone {repo} {release_dir}'.format(**data))

        try:
            # create the REVISION file
            c.run(r'''
                cd {release_dir} \
                && git rev-parse HEAD > REVISION
            '''.format(**data))

            # create the .env file
            c.run(r'''
                VIRTUAL_HOST={domain} \
                && printf "
                    VIRTUAL_HOST=%s
                " "$VIRTUAL_HOST" > {release_dir}/.env
            '''.format(**data))

            # docker-compose pull && build && up
            c.run(r'''
                cd {release_dir} \
                && docker-compose -p "$USER" -f docker-compose-production.yml \
                    pull \
                && docker-compose -p "$USER" -f docker-compose-production.yml \
                    build \
                && docker-compose -p "$USER" -f docker-compose-production.yml \
                    up -d
            '''.format(**data))

        except:
            c.run('rm -rf {release_dir}'.format(**data))
            raise

        # remove old releases
        c.run(r'''
            ls app/releases \
                | head -n -{keep_releases} \
                | xargs -rI{{}} -d'\n' rm -rf app/releases/{{}}
        '''.format(**data))

        # symlink current
        c.run(r'''
            last_release=$(ls app/releases | tail -n 1) \
            && last_release_rel=$(realpath --relative-to=app app/releases/"$last_release") \
            && ln -nsf "$last_release_rel" app/current
        '''.format(**data))

    finally:
        c.run('rm -f deploy.lock')

But then I thought, do I really need to keep the old releases? I can always check out another commit and do docker-compose up -d. So with this in mind:

from os.path import join
from shlex import quote as q

from invoke import Exit
from fabric import task

repo = 'GIT_URL'
host = 'USER@HOST'
domain = 'DOMAIN'

@task(hosts=[host])
def deploy(c):
    try:
        if c.run('test -e deploy.lock', warn=True):
            raise Exit('deploy.lock exists')
        c.run('touch deploy.lock')

        data = {
            'repo': q(repo),
            'domain': q(domain),
        }

        # clone/pull the repository
        c.run('''
            if [ -e app ]; then
                cd app \
                && git fetch \
                && git reset --hard origin/master
            else
                git clone {repo} app
            fi
        '''.format(**data))

        # create the REVISION file
        c.run(r'''
            cd app \
            && git rev-parse HEAD > REVISION
        ''')

        # create the .env file
        c.run(r'''
            VIRTUAL_HOST={domain} \
            && printf "
                VIRTUAL_HOST=%s
            " "$VIRTUAL_HOST" > app/.env
        '''.format(**data))

        # docker-compose pull && build && up
        c.run(r'''
            cd app \
            && docker-compose -p "$USER" -f docker-compose-production.yml \
                pull \
            && docker-compose -p "$USER" -f docker-compose-production.yml \
                build \
            && docker-compose -p "$USER" -f docker-compose-production.yml \
                up -d
        ''')

    finally:
        c.run('rm -f deploy.lock')
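
And if I ever need to roll back, I figure a task along the same lines will do. A sketch (untested), where ref is whatever commit or branch I want to return to:

@task(hosts=[host])
def rollback(c, ref='HEAD~1'):
    # check out the given commit and rebuild/restart the containers
    c.run(r'''
        cd app \
        && git reset --hard {ref} \
        && docker-compose -p "$USER" -f docker-compose-production.yml \
            up -d --build
    '''.format(ref=q(ref)))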

While I’m at it, I think I’ll share the docker files as well. You most likely don’t want to use them as is (they are for a bottle.py + bjoern app using face_recognition and opencv). But that might be a good start (hopefully).

For running locally:

docker-compose.yml:

version: '3'

services:
  app:
    build:
      context: .
      args:
        UID: ${UID-}
        GID: ${GID-}
    user: ${UID-0}:${GID-0}
    command: site-packages/bin/bottle.py --debug --reload --bind=0.0.0.0 --server=bjoern app
    env_file: .env.development
    ports:
      - 127.0.0.1:${APP_PORT-8080}:8080
    volumes:
      - .:/app

Dockerfile:

FROM python:3-slim
ARG UID
ARG GID
ENV PIP_TARGET site-packages
ENV PYTHONPATH site-packages

# libev-dev < bjoern
RUN apt-get update \
    && apt-get install -y build-essential libev-dev \
    && if [ "$UID" ] && [ "$GID" ]; then \
        if ! getent group "$GID"; then \
            groupadd -g "$GID" app; fi \
        && if ! getent passwd "$UID"; then \
            useradd -m -u "$UID" -g "$GID" -s /bin/bash app; fi; fi \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app

Here I create an app user, so that when the container creates a file, it doesn’t end up being owned by root (otherwise you’d have to chown it on the host). For this to work you need to specify your UID and GID in the .env file (or in the environment).

Also you might want to specify APP_PORT in the .env file if you already have something running on port 8080.

.env.development:

PIP_TARGET=site-packages
PYTHONUNBUFFERED=1

I make it install packages into the ./site-packages directory, so that you don’t have to exec into the container to access the files.

PYTHONUNBUFFERED=1 makes docker-compose logs display the output immediately, not when the buffer fills. Ordinarily, processes in a container run without a TTY, so stdout is block-buffered rather than line-buffered.
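
Without it you’d have to flush explicitly every time you want the output to show up right away:

# the alternative to PYTHONUNBUFFERED=1 (or python -u) is to flush on every print
print('processing request...', flush=True)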

For production:

docker-compose-production.yml:

version: "3"

services:
    app:
        build:
            context: .
            dockerfile: Dockerfile.production
        env_file: .env.production
        environment:
            VIRTUAL_HOST: $VIRTUAL_HOST  # for jwilder/nginx-proxy
            LETSENCRYPT_HOST: $VIRTUAL_HOST  # for jrcs/letsencrypt-nginx-proxy-companion
        expose:
            - 8080
        networks:
            - nginx-proxy
        restart: always

networks:
    nginx-proxy:
        external: true

Dockerfile.production:

FROM python:3.8-slim

# libev-dev < bjoern
RUN apt-get update \
    && apt-get install -y build-essential cmake libev-dev \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt


FROM python:3.8-slim

# libev4 < bjoern
# libglib2.0-0 < opencv
RUN apt-get update \
    && apt-get install -y libev4 libglib2.0-0 \
    && rm -r /var/lib/apt/lists/*

WORKDIR /app
COPY . .
COPY --from=0 /usr/local/lib/python3.8/site-packages /usr/local/lib/python3.8/site-packages
COPY --from=0 /usr/local/bin /usr/local/bin
CMD ["bottle.py", "--bind=0.0.0.0", "--server=bjoern", "app"]

Depending on your needs you might not need a multi-stage build.

.env.production:

PYTHONUNBUFFERED=1

.dockerignore:

**/__pycache__

.git
site-packages

.dockerignore
.env
.gitignore
docker-compose-production.yml
docker-compose.yml
Dockerfile
Dockerfile.production
README.md

.gitignore:

__pycache__

/site-packages

/.env

And for completeness, let me describe the steps that need to be performed on the server for this to work:

  1. Install docker and nginx:

    # curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
    # echo deb https://download.docker.com/linux/debian $(lsb_release -cs) stable \
        > /etc/apt/sources.list.d/docker.list
    # apt update
    # apt install docker-ce nginx
    # systemctl enable --now docker
    
  2. Install docker-compose:

    # curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" \
        -o /usr/local/bin/docker-compose
    # chmod +x /usr/local/bin/docker-compose
    
  3. Start nginx-proxy and letsencrypt-nginx-proxy-companion:

    nginx-proxy/docker-compose.yml:

    version: '3.5'
    
    services:
      nginx-proxy:
        image: jwilder/nginx-proxy:alpine
        networks:
          - nginx-proxy
        ports:
          - 8080:80
          - 4443:443
        volumes:
          - ./certs:/etc/nginx/certs
          - ./vhost.d:/etc/nginx/vhost.d
          - html:/usr/share/nginx/html
          - /var/run/docker.sock:/tmp/docker.sock:ro
        labels:
          com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: ''
        restart: always
    
      letsencrypt:
        image: jrcs/letsencrypt-nginx-proxy-companion
        environment:
          - "DEFAULT_EMAIL=EMAIL_ADDRESS_HERE"
        volumes:
          - ./certs:/etc/nginx/certs
          - ./vhost.d:/etc/nginx/vhost.d
          - html:/usr/share/nginx/html
          - /var/run/docker.sock:/var/run/docker.sock:ro
        depends_on:
          - nginx-proxy
        restart: always
    
    networks:
      nginx-proxy:
        name: nginx-proxy
    
    volumes:
      html:
    
    # docker-compose up -d
    

Then to add a site:

  1. Add an nginx virtual host:

    server {
        server_name  DOMAIN;
        access_log  /var/log/nginx/DOMAIN-access.log;
        error_log  /var/log/nginx/DOMAIN-error.log;
        location / {
            proxy_pass   http://127.0.0.1:8080;
            proxy_set_header  Host  $http_host;
            proxy_set_header  X-Forwarded-For  $proxy_add_x_forwarded_for;
        }
    }
    
    # server {
    #     server_name  DOMAIN;
    #     listen  443  ssl;
    #     ssl_certificate  /root/nginx-proxy/certs/DOMAIN.crt;
    #     ssl_certificate_key  /root/nginx-proxy/certs/DOMAIN.key;
    #     access_log  /var/log/nginx/DOMAIN-access.log;
    #     error_log  /var/log/nginx/DOMAIN-error.log;
    #     location / {
    #         proxy_pass   https://127.0.0.1:4443;
    #         proxy_set_header  Host  $http_host;
    #         proxy_set_header  X-Forwarded-For  $proxy_add_x_forwarded_for;
    #         proxy_set_header  X-Forwarded-Proto  https;
    #     }
    # }
    
    # systemctl reload nginx
    
  2. Create a site user:

    # useradd -ms /bin/bash -G docker DOMAIN
    
  3. You might want to generate an ssh keypair (deploy key) to be able to clone the repository e.g. from GitHub.

  4. You probably want to add your key to ~/.ssh/authorized_keys.

  5. After the first deploy letsencrypt-nginx-proxy-companion obtains the certificate and you can uncomment the https block in the virtual host’s config (and reload nginx).

That’s it, as simple as that 🙂

I think someone may find these 2022 tutorials about Dockerizing and deploying Django apps on AWS with an auto-scalable architecture useful: