I am developing a Django web application that uses RQ to do long-running jobs. It works well. I now need a way to notify the front-end client when a job is completed. I read that the standard solution to this problem is web sockets.
I understand the basic idea behind web sockets, but I am having trouble conceptualizing how the different components of my system will fit together. Here is what I have so far:
1. The frontend client establishes a web socket connection with the Django backend instance.
2. The frontend client sends a regular POST request to Django to queue up a job. Django puts the job into Redis, and returns the job id to the frontend client.
3. An RQ worker picks up the job from the queue and works on it. Once the job is done, RQ marks the task as complete in Redis.
4. Somehow, the Django instance recognizes that the job is complete.
5. Once Django knows that the job is complete, it sends the job status to the frontend via the web socket.
My knowledge gap lies in step #4.
I don’t understand how the backend Django instance can be informed that the job is complete, without me having to trigger the status check manually (for example, with a GET /task/<task_id> request from the client).
I note that in RQ there is an option to execute callbacks on job completion. I assume these callbacks are executed by the worker node. Is it possible to hand the worker node an active web socket connection in one of these callbacks?
How do developers usually solve the problem described in step #4?
Thank you. I googled this problem for a couple of hours, but wasn’t able to find a detailed answer beyond “use websockets/django-channels”.
I finally got around to implementing this at work, and I want to follow up.
@KenWhitesell basically gave me the answer. The RQ task can submit messages to the Channels Layer directly, and the Channels consumers can forward the contents of the messages to the web sockets.
To those who stumble on this looking for information, here is my final architecture:
This shows 3 instances of Django that communicate with each other using Redis:
The WSGI app instance is responsible for accepting job requests over plain HTTP, and for submitting them to a queue in Redis.
The RQ worker instance is responsible for consuming jobs from the queue, and for broadcasting job status into the Channels Layer.
The Channels Layer is basically a message bus that the ASGI/Channels app listens to. (Luckily, it's also backed by Redis, so that's one fewer service/container to set up.)
A clean way to submit messages to the Channels Layer is using job callbacks. You implement the channels broadcast inside the job callback.
The Channels/ASGI app instance is responsible for maintaining the web socket connection to the front end, and for monitoring the Redis message bus. Once it encounters a relevant message on the bus, it forwards the contents of the message to the web socket connection.
Voila!
As a side note, both the WSGI and ASGI apps can be served by a single ASGI server (Daphne or Uvicorn), which would simplify my chart quite a bit, but I chose to keep the ASGI and WSGI apps separate. Each runs in its own container, so they are easy to scale independently.
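For reference, the single-server variant is just the standard protocol split in `asgi.py` — plain HTTP goes to the regular Django application, websockets go to the Channels consumers (module names are illustrative):

```python
# asgi.py -- one ASGI server handles both protocols.
import os

from channels.routing import ProtocolTypeRouter, URLRouter
from django.core.asgi import get_asgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

# websocket_urlpatterns would map routes like ws/jobs/<job_id>/ to consumers.
from myapp.routing import websocket_urlpatterns

application = ProtocolTypeRouter({
    "http": get_asgi_application(),
    "websocket": URLRouter(websocket_urlpatterns),
})
```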