How would you go about opening a socket (or websocket) to a service running outside of Django Channels?
Quick drawing:
client(browser) <---- ws ----> django/channels <----- socket -----> external/third party "service"
Where the third party could be considered a “real-time” backend (based on asyncio.start_server), pushing data to channels, which would bubble that data up to the client.
So far the following works well:
- client <-> channels communication
- opening a socket (via the streams API) to the backend
- sending from client to channels
- having channels “forward” the client message to the backend
What does not work, however, is the StreamReader “listening” in channels. This blocks.
import asyncio

from channels.generic.websocket import AsyncWebsocketConsumer


class CustomConsumer(AsyncWebsocketConsumer):
    """
    extends AsyncWebsocketConsumer
    """
    backend_writer = None
    backend_reader = None

    async def init_backend_connection(self):
        self.backend_reader, self.backend_writer = await asyncio.open_connection('127.0.0.1', 8001)
        print(1)
        await self.backend_receive()  # kaboom here?
        print(2)

    async def backend_receive(self):
        while True:
            try:
                data = (await self.backend_reader.read(128)).decode()
                print(f'message from backend: "{data}"')
            except (
                BrokenPipeError,
                ConnectionRefusedError,
                ConnectionResetError,
                OSError,
                asyncio.TimeoutError,
            ):
                break

    async def backend_send(self, message=None):
        if message:
            self.backend_writer.write(message)
            await self.backend_writer.drain()


class Consumer(CustomConsumer):
    async def connect(self):
        await self.accept()
        await self.init_backend_connection()

    async def receive(self, text_data=None, bytes_data=None):
        if bytes_data:
            # forward to backend
            await self.backend_send(message=bytes_data)
In init_backend_connection, print(1) is output, but print(2) is never output. So it is as if self.backend_reader does not work. Does it block? If so, why?
If the connection to the back end is specifically associated with a single client connection, then I’d create an asyncio task within the consumer to handle the connection and let it fly. (Briefly, you don’t await a task. You start it and let the event loop dispatch to it as necessary. It’s then up to you to cancel that task when the client disconnects.)
You can use queues to communicate between those tasks.
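For illustration, a minimal sketch of that first approach - this is an assumption of what your consumer might look like, reusing the 127.0.0.1:8001 backend from your example; backend_listen and backend_task are names I’ve made up:

import asyncio

from channels.generic.websocket import AsyncWebsocketConsumer


class Consumer(AsyncWebsocketConsumer):
    async def connect(self):
        await self.accept()
        self.backend_reader, self.backend_writer = await asyncio.open_connection('127.0.0.1', 8001)
        # Start the reader loop as a task and let it fly - connect() returns immediately.
        self.backend_task = asyncio.create_task(self.backend_listen())

    async def disconnect(self, close_code):
        # It's up to you to stop the task when the client disconnects.
        self.backend_task.cancel()
        self.backend_writer.close()

    async def backend_listen(self):
        while True:
            data = await self.backend_reader.read(128)
            if not data:
                break  # the backend closed the connection
            # Bubble the backend data up to the browser.
            await self.send(bytes_data=data)

    async def receive(self, text_data=None, bytes_data=None):
        if bytes_data:
            # Forward the client message to the backend.
            self.backend_writer.write(bytes_data)
            await self.backend_writer.drain()

The key difference from your version is asyncio.create_task(): the reader loop runs alongside the consumer instead of being awaited inside connect(). That’s also why your print(2) never ran - await self.backend_receive() never returns while that while True loop is alive.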
If the backend connection is supposed to be used by multiple connections, then I’d set it up as a channels worker and have it run in a separate process. If you do it this way, you can then use the channel layer to communicate between the two processes. (This is what we do - that way, the consumers share a single connection to the back end, and the messages are used to manage the traffic routing. Also in this case, we have control over the external service, making that easier to do - YMMV.)
Do you have an async working version of this outside the channels context? That might make an easier base to start from.
This is some fantastic insight. I’m trying to set up a solution similar to the second scenario you mentioned, where multiple client connections can rely on a single connection to the backend, but I’m having difficulty setting up and managing the socket client within my channels worker. Ideally I’d be able to open my backend websocket connection when the first frontend client opens a connection to Django, as well as bring down the backend websocket connection whenever the last frontend connection closes, but at this point I would also settle for just having a long-running backend connection eternally awaiting frontend traffic.
Do you have any recommendations on specific socket client libraries that work well within the channel worker context, and/or any advice for properly getting said socket client up and running reliably in my runworker process? For context, the third-party service in my case is an external IRC socket server that I do not control, so it requires an auth handshake when initializing the connection.
I’m currently using the websockets library in an AsyncConsumer as a worker process. The different coroutines for websockets are written the same way as if you were writing the client as a stand-alone script, with the exception that you’re not starting the event loop yourself. (I have also used websocket-client in the past, but these two libraries are the extent of my experience with websockets in Python.)
The events that your worker process receives from the channel layer could be used to manage the connections it makes to the external server. For example, you could set it up such that each browser that connects or disconnects causes the consumer to send a message to your worker. The worker could then track which consumers (if any!) are connected and determine whether the external connection should be made or broken.
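A rough sketch of that idea, with placeholder names - the “external-backend” channel name is my invention and would have to match what you register with ChannelNameRouter and pass to runworker:

from channels.consumer import AsyncConsumer
from channels.generic.websocket import AsyncWebsocketConsumer


class BrowserConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        await self.accept()
        # Tell the worker a browser arrived.
        await self.channel_layer.send(
            "external-backend",
            {"type": "client.connected", "channel_name": self.channel_name},
        )

    async def disconnect(self, close_code):
        # Tell the worker this browser left.
        await self.channel_layer.send(
            "external-backend",
            {"type": "client.disconnected", "channel_name": self.channel_name},
        )


# Run separately with: python manage.py runworker external-backend
class BackendWorker(AsyncConsumer):
    clients = set()

    async def client_connected(self, event):
        self.clients.add(event["channel_name"])
        if len(self.clients) == 1:
            pass  # first browser: open the external connection here

    async def client_disconnected(self, event):
        self.clients.discard(event["channel_name"])
        if not self.clients:
            pass  # last browser: close the external connection here

The worker only ever sees channel-layer events, so the external connection lives entirely inside that process.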
Digging into that websockets library, I definitely think I can leverage the client functionality within my channel worker, thanks so much for the link. I’m still grappling with incorporating the WebSocketClientProtocol into my worker AsyncConsumer. Essentially, I have my application layers set up like so:
Browser-based clients
^
|
ws
|
v
Django Channels AsyncConsumer
^
|
channel layer messages
|
v
channel layer worker AsyncConsumer (initiated via runworker)
^
|
ws (websockets client)
|
v
third-party socket server
What I can’t seem to get functioning is opening/closing the websockets client connection within the channel worker process. Non-frontend websocket clients are very new to me, so I’m wondering if my backend client should be a self-contained singleton class that is dependency-injected into my worker AsyncConsumer (to support the many-to-one browser-to-backend socket server relationship), or if there is some best practice unbeknownst to me for initializing/managing the socket communication between my background worker and the third-party socket server. Any/all guidance is greatly appreciated!
Keep in mind here that the channel layer in this structure completely separates and isolates the worker from the AsyncConsumer running in the main process. These are independent objects running in separate processes. There is no direct connection between those two entities. The only communication between those two components is through messages sent through the channel layer.
Handling the connection from the worker to the third-party socket server is handled normally as any other usage of the websockets library in an async program.
See Client - websockets 10.3 documentation
There is nothing in this part of the structure that is Channels-specific.
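Inside the worker, the client side is just the documented websockets usage - something like this sketch, where the URI and the auth line are placeholders for whatever your IRC server actually expects:

import websockets


async def backend_session(uri="wss://example.invalid/irc"):
    # Plain websockets client usage - nothing Channels-specific here.
    async with websockets.connect(uri) as ws:
        await ws.send("AUTH <token goes here>")  # placeholder handshake
        async for message in ws:
            print(f"from backend: {message!r}")

You’d typically start that coroutine from the worker with asyncio.create_task() when the first browser shows up, rather than awaiting it directly, for the same reason as in the consumer case above.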
You might want to practice these techniques with a more limited domain. For example, you could get used to communicating between the main connections and the workers by creating a worker that receives a message and echoes it back to a group. (Kinda like the “chat” example, except making the chat process a separate worker.) Likewise, you could start with using a worker written to connect to the third party server and sending a test message when an external channel message is received - and then trigger that process manually to see it run.
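For instance, the echo exercise could be as small as a worker that bounces whatever it receives back out to a group. The names here are made up, and the browser-facing consumers would need to group_add themselves to “echo-group” and define a group_echo handler:

from channels.consumer import AsyncConsumer


# Run with: python manage.py runworker echo-worker
class EchoWorker(AsyncConsumer):
    async def echo_message(self, event):
        # Triggered by: channel_layer.send("echo-worker", {"type": "echo.message", "text": ...})
        await self.channel_layer.group_send(
            "echo-group",
            {"type": "group.echo", "text": event["text"]},
        )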
I ended up getting this working just the way I wanted. Thanks so much @KenWhitesell for the channel layer tip, it’s a very nifty solution given the application requirements.
One last thing I’m curious about is whether you use Supervisor or some similar daemon to manage the worker process and ensure there’s minimal to no downtime. For something this long-running I figure there must be a mostly hands-off approach to maintaining a reliable worker process, and I have the most experience with supervisord, but you may have recommendations specific to this use case. Thanks again for providing some much needed direction.
I hate to say this, but…
It depends.
Ok, you deserve a better answer than that.
So the more detailed answer is that I use a mix. Right now, I’ve got two live Channels-based projects up and running. One uses Supervisord - I’m also very comfortable with it.
The other is built using Docker and a Docker-compose file.
My choice between them is generally based upon whether a non-root user needs to be able to manage the processes. (Very easy to do with supervisor, almost impossible to do with docker without some unintentional side-effect.)
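For what it’s worth, the supervisor side of a worker is just a normal program stanza - roughly along these lines, with the paths, user, and channel name as placeholders:

[program:channels-worker]
command=/path/to/venv/bin/python manage.py runworker external-backend
directory=/path/to/project
user=appuser
autostart=true
autorestart=true
stopasgroup=true
redirect_stderr=true
stdout_logfile=/var/log/channels-worker.log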
Makes sense. I haven’t 100% decided where/how I intend to deploy the app so this is great food for thought. Cheers!