You could say I'm coming from the Ruby world, where you have mina and capistrano. With them you basically do:

$ mina deploy

and by the time it finishes, the new release is running on the server. Of course, you have a script where you describe what's to be done on the server.
They both provide the following directory layout:
.
├── current -> releases/2
├── releases
│   ├── 1
│   │   └── public
│   │       └── uploads -> ../../../shared/public/uploads
│   └── 2
│       └── public
│           └── uploads -> ../../../shared/public/uploads
└── shared
    └── public
        └── uploads
The shared directory stores non-code files (user uploads, logs, node_modules, or anything else you want to share between releases as a single copy). With this, you build a new release on the server, point current at it, and restart the web server.
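To make the mechanics concrete, here is a minimal sketch of that layout in plain Python (the paths are illustrative, created under a temporary directory). The key point is that current is switched by replacing a symlink atomically, so requests never see a half-deployed release:

```python
import os
import tempfile

# Illustrative layout; a real deployment would live under the app user's home.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'releases', '1'))
os.makedirs(os.path.join(root, 'releases', '2'))
os.makedirs(os.path.join(root, 'shared', 'public', 'uploads'))

# Each release links its uploads dir to the single shared copy.
for rel in ('1', '2'):
    public = os.path.join(root, 'releases', rel, 'public')
    os.makedirs(public)
    os.symlink('../../../shared/public/uploads',
               os.path.join(public, 'uploads'))

def activate(release):
    # Replace the `current` symlink atomically: create the new link under
    # a temporary name, then rename(2) it over the old one.
    tmp = os.path.join(root, 'current.tmp')
    os.symlink('releases/' + release, tmp)
    os.rename(tmp, os.path.join(root, 'current'))

activate('1')
activate('2')
print(os.readlink(os.path.join(root, 'current')))  # releases/2
```

This is exactly what `ln -nsf` approximates in the shell versions of these tools.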
So I created a fabfile.py:
from shlex import quote

from fabric.api import run, env, settings, cd, prefix

if not env.hosts:
    env.hosts = ['site@example.com']

def deploy(branch='master'):
    try:
        with settings(warn_only=True):
            deploy_lock_exists = run('test -e ~/deploy.lock')
        if deploy_lock_exists.succeeded:
            print('deploy.lock exists, exiting...')
            return
        run('touch ~/deploy.lock')
        run('mkdir -p site/releases site/shared/{node_modules,logs}')
        timestamp = run('date +%Y%m%d-%H%M%S')
        release_dir = 'site/releases/' + quote(timestamp)
        run('git clone -q -b ' + quote(branch)
            + ' git@github.com:x-yuri/site.git ' + quote(release_dir))
        with cd(release_dir):
            run('ln -s ../../shared/node_modules node_modules')
            run('ln -s ../../shared/logs logs')
            run('ln -s ../../../../shared/settings_local.py site/settings/local.py')
            run('ln -s ../../../shared/secretkey.txt site/secretkey.txt')
            run('ln -s ../../../shared/settings_secret.py site/settings_secret.py')
            run('~/site/env/bin/pip install -qr requirements.txt')
            run('yarn')
            with prefix('. ~/site/env/bin/activate'):
                with settings(warn_only=True):
                    secret_exists = run('test -e ~/site/shared/secretkey.txt')
                if not secret_exists.succeeded:
                    run("python -c '"
                        "from django.core.management.utils import get_random_secret_key;"
                        "print(get_random_secret_key());"
                        "' > ~/site/shared/secretkey.txt")
                run('./manage.sh migrate')
                run('./node_modules/.bin/webpack')
                run('./manage.sh collectstatic --no-input')
        with cd('site/releases'):
            run(r"ls | head -n -5 | xargs -rI{} -d'\n' rm -rf {}")
        with cd('site'):
            run('ln -nsf releases/"$(ls releases | tail -n 1)" current')
        restart_uwsgi()
    finally:
        run('rm -f ~/deploy.lock')

def restart_uwsgi():
    run('touch /etc/uwsgi.d/site.ini')
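One wart worth noting in the locking above: test -e followed by touch is not atomic, so two deploys started at the same moment could both pass the check. A common workaround (not what the fabfile does, just a sketch) is to use mkdir, which creates the directory or fails in a single atomic step:

```python
import os

# Sketch: mkdir(2) is atomic, so only one caller can acquire the lock.
# `lock_dir` is an illustrative path, not one the fabfile above uses.
def acquire_lock(lock_dir):
    try:
        os.mkdir(lock_dir)
        return True   # we created it: lock acquired
    except FileExistsError:
        return False  # someone else holds the lock

def release_lock(lock_dir):
    os.rmdir(lock_dir)
```

In the fabfile this would translate to a single `run('mkdir ~/deploy.lock')` under warn_only, treating failure as "another deploy is running".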
I could probably split it into more functions, but compared to a similar mina config, I still have to take care of relinking current, linking shared directories and files, deploy locks, and removing old releases myself. So I think I'm doing something wrong here. There must be a better way, and I hope you can point me in the right direction.
Information on this is pretty scarce. I've probably seen a suggestion to build a wheel and install it on the server. Let me be frank: that seems weird. But anyway, how do I let it know where the data is? Environment variables? If possible, I'd like more specific instructions.
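For the record, the environment-variable approach would look roughly like this in a Django settings module. This is only a sketch; the variable names (SITE_DATA_DIR, SITE_SECRET_KEY) and the fallback path are made up, not anything the wheel approach prescribes:

```python
import os

# settings.py fragment (sketch): resolve writable locations from the
# environment, so the installed package doesn't care where it lives.
# SITE_DATA_DIR and SITE_SECRET_KEY are hypothetical variable names.
DATA_DIR = os.environ.get('SITE_DATA_DIR', '/var/lib/site')
MEDIA_ROOT = os.path.join(DATA_DIR, 'uploads')
SECRET_KEY = os.environ.get('SITE_SECRET_KEY', '')
```

The deploy step would then export these (e.g. in the uwsgi unit) instead of symlinking settings files into each release.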
The other option I'm considering is mina + docker. The mina config would basically be:
require 'mina/git'
require 'mina/deploy'

set :repository, 'git@github.com:x-yuri/site.git'
set :branch, ENV.fetch('BRANCH', 'master')
set :user, 'site'
set :domain, 'example.com'
set :deploy_to, '/home/site/app'

task :deploy do
  deploy do
    invoke :'git:clone'
    command 'docker-compose -p "$USER" -f docker-compose-production.yml build'
    invoke :'deploy:cleanup'
    on :launch do
      command 'docker-compose -p "$USER" -f docker-compose-production.yml up -d'
    end
  end
end
This basically tells mina to deliver the source code to the server and run docker-compose build && docker-compose up. Plus I need a Dockerfile and a docker-compose.yml. That is probably the simplest way for me, but I'd like to know my options.
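For completeness, the docker-compose-production.yml referenced above could be as small as the following. This is a sketch with assumed names (the app service, the port, the volume path), not the actual file:

```yaml
# docker-compose-production.yml (sketch; service, port, and paths are assumptions)
version: '3'
services:
  app:
    build: .
    restart: always
    ports:
      - '127.0.0.1:8000:8000'
    volumes:
      # the shared-uploads idea survives as a bind mount
      - ./shared/uploads:/app/public/uploads
```

The bind mount plays the role of the shared directory from the Capistrano layout: uploads outlive any one image or container.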