I would like to run Django and django-q in a container.
Since I am lazy I would like to use only one container.
I think I will use honcho to start gunicorn and the django-q workers.
Is this a feasible solution, or am I on the wrong path?
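Something like this Procfile, which honcho would pick up (just a sketch, I have not tested it yet; myproject is a placeholder):

web: gunicorn myproject.wsgi:application --bind 0.0.0.0:8000
worker: python manage.py qcluster

with CMD ["honcho", "start"] in the Dockerfile.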
I am running django-q in a container using nohup:
In my Dockerfile
CMD /bin/sh -c ${APP_ROOT}/start.sh
In start.sh
echo 'Starting DjangoQ'
# run qcluster in the background; tee keeps its output on the container's stdout
nohup python manage.py qcluster | tee /dev/stdout &
echo 'Starting Gunicorn'
# exec keeps gunicorn in the foreground as the container's main process
exec gunicorn django.wsgi:application -w 5 -b :8000 --capture-output --log-level=info --reload
This approach is pretty lightweight - but if the process fails you have to restart it manually.
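If you wanted the worker to come back on its own, the nohup line could be swapped for a small retry loop, something like this untested sketch:

(
  while true; do
    python manage.py qcluster
    echo 'qcluster exited, restarting in 5s'
    sleep 5
  done
) &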
Everything I’ve read says that you should have one process per container.
We create separate containers for our uwsgi instance and our Celery worker instances. We also have separate containers for our Redis, nginx, and Celery beat instances. It has helped us create more containers that are "reusable components" across different projects.
Maybe you are right and it is better to use two containers: one for gunicorn and one for django-q.

"if the process fails you have to restart it manually"

This sounds like manual work, which I would like to avoid. Maybe it is better to use two containers.
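If I go that route, I guess both containers could share the same image with different commands, roughly like this (untested sketch, names are placeholders):

services:
  web:
    build: .
    command: gunicorn myproject.wsgi:application -b 0.0.0.0:8000
    ports:
      - "8000:8000"
  worker:
    build: .
    command: python manage.py qcluster

plus whatever broker django-q is configured to use, e.g. a Redis service.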
This would indeed be the cleanest solution. You could use honcho or supervisord etc., but as Ken said, this would just create some kind of super container.
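For completeness, the supervisord variant of that super container would be something like this (untested; program names are arbitrary):

[supervisord]
nodaemon=true

[program:gunicorn]
command=gunicorn django.wsgi:application -b :8000

[program:qcluster]
command=python manage.py qcluster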
The lazy solution has worked for me so far; the process has never failed.
If you have health checks for your containers and are able to restart them automatically on failure, you should build your setup around these checks.
But if you have to do it manually anyhow…
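For example, a Dockerfile health check could look like this (assuming curl is installed in the image and the app answers on /):

HEALTHCHECK --interval=30s --timeout=3s CMD curl -f http://localhost:8000/ || exit 1

Bear in mind that a plain docker run --restart on-failure policy only reacts to the main process exiting; acting on a failed health check needs an orchestrator like Swarm or Kubernetes, or a helper container such as willfarrell/autoheal.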