Django Channels - AsyncHttpConsumer I set Channel, No Channels Specified

I’m trying to implement django_eventstream and django_channels together.
The code is all working except for the channel not being specified, and I don’t understand why, because I do set the channel:


data: {"condition": "bad-request", "text": "Invalid request: No channels specified."}

application = ProtocolTypeRouter({
        # "http": get_asgi_application(),
        re_path(r'', get_asgi_application()),

sse_urlpatterns = [

class chat(AsyncHttpConsumer):
    async def handle(self, body):
        self.group_name = self.scope['url_route']['kwargs']['chat_name']
        self.channel_name = self.scope['url_route']['kwargs']['chat_name']

        await self.channel_layer.group_add(

You’re trying to redefine self.channel_name. It’s actually defined by Channels itself when the connection is established - you shouldn’t ever change it unless you really know what you’re doing.
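As a sketch of the usual pattern (the names here are hypothetical, not from your code): treat the URL kwarg as a *group* name and leave self.channel_name as whatever Channels assigned to this particular connection.

```python
# A sketch of the usual pattern (names are hypothetical): the URL kwarg
# becomes the *group* name, while channel_name stays whatever Channels
# assigned to this particular connection.

def group_name_from_scope(scope):
    """Pull the group name out of the routed URL kwargs."""
    return scope["url_route"]["kwargs"]["chat_name"]

# Inside the consumer, roughly:
#
# class ChatConsumer(AsyncHttpConsumer):
#     async def handle(self, body):
#         group = group_name_from_scope(self.scope)
#         # self.channel_name is left untouched here
#         await self.channel_layer.group_add(group, self.channel_name)
```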

Hey Ken, thanks for your reply!
From what you’re telling me, it seems impossible to use django-eventstream to send to individual chat rooms.
The eventstream documentation says that the first argument is the channel name, but after reading the Channels documentation, it seems only one channel is opened at a time.

send_event('test', 'message', {'text': 'hello world'})

I’m trying to create a simple kitchen app that instantly receives updates when receiving an order.
I was debating whether to use websockets or SSE for this, and after a bunch of research SSE seemed like the clear winner, but now I’m not sure if it’s even possible with eventstream.

I’m not following what you’re saying here. (Admittedly, I know nothing about django-eventstream)

In channels, you send a message to a channel.

If that channel is the channel for an individual user, the message goes to that user.

If you use group_send on a channel that represents a group, the message is sent to every member of that group.

But a channel itself isn’t specifically a “group” channel or an “individual” channel. A channel is handled by some process that is listening on that channel and waiting to receive messages. What happens to a message is solely determined by the process listening on that channel.
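As a hedged sketch of that idea (the group name "kitchen" and the message type are made up here, not from the thread), sending to everyone in a group from ordinary Django code looks roughly like this:

```python
# A sketch of group_send from ordinary synchronous Django code, e.g.
# when an order is saved. The group name and message type below are
# assumptions for illustration.

def order_message(order_id):
    """Build the event payload. In Channels, the "type" key maps to a
    consumer method name (dots become underscores: order_created)."""
    return {"type": "order.created", "order_id": order_id}

# With a channel layer configured, the send itself would be roughly:
#
# from asgiref.sync import async_to_sync
# from channels.layers import get_channel_layer
#
# layer = get_channel_layer()
# async_to_sync(layer.group_send)("kitchen", order_message(42))
```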


So if I understand you correctly, I could send a message to that channel, and then distribute that message to a certain chatroom after receiving it?

I’m trying to create a real-time app for restaurants, where the kitchen logs into a page and receives all the orders in real time. I’m thinking of using just websockets from django-channels - can a connection live for, say, 16 hours at a time? Or 24 hours?

Yes. How you do that depends upon what component the “I” is that you are referring to at that point. (The browser? Django? An external process?) If you are talking about the browser sending the message, that concept is exactly what the “Chat room” examples do.

“Can” a connection live for that long? Easily. Much longer, too, if the network and components are reliable. However, you’d still want to use a reconnecting client in the browser that would detect when a connection drops and reconnect automatically.

However, if this is the only requirement you have for channels, I’d question whether or not it’s truly necessary. Is it truly valuable that they get notified within 1 second that an order has come in? Or would 5 seconds be good enough? 10? 30?

My first inclination for this type of requirement would be to poll the server at 5 or 10 second intervals. I’d have a hard time justifying the additional work and infrastructure effort necessary to deploy that type of application to save (on average) 2.5 seconds on an incoming order.

After doing some research, I think long polling is probably the answer I’ve been looking for. Thanks for your recommendation, Ken!

I assumed that long polling would take up a lot of resources for some reason, but it seems like it should take fewer than websockets, and it seems more scalable. Is that correct?

I’m expecting a flood of orders at lunch and that’s about it.

Do you have any recommendations on what to use?
I’m reading a lot of people use Node.js but I don’t think that’s viable for Django.

Personally, I think short-polling would be even better - there’s no need to hold that connection open for 5 seconds at a time.

Long polling holds an open connection, short polling checks for an update and returns.

Actually, it’s an apples & oranges comparison. It’s different resources being used in each case, making this not directly comparable.

Neither. None of the above.

I’d have a page that has some JavaScript in it that sends a request every 5 seconds looking for any updated information that needs to be displayed.

This is easily done with base Django and no need for additional packages.
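As a minimal sketch with plain Django (the Order model and its fields are hypothetical): the client sends the highest order id it has already rendered, and the view returns only newer rows.

```python
# A minimal short-polling endpoint sketch in plain Django. The Order
# model, its fields, and the "since" parameter are all assumptions
# for illustration.

def serialize_orders(rows):
    """Shape queryset .values() rows into the JSON response body."""
    return {"orders": [{"id": r["id"], "items": r["items"]} for r in rows]}

# from django.http import JsonResponse
#
# def order_updates(request):
#     since = int(request.GET.get("since", 0))
#     rows = Order.objects.filter(id__gt=since).values("id", "items")
#     return JsonResponse(serialize_orders(rows))
```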

I was originally thinking of doing something similar to that with JavaScript, but I thought if there were 1,000 or 10,000 kitchens (it’s just a pipe dream for now, but a pretty reasonable number) sending a request every 5 seconds, that feels like it would bog down the server. I also expect kitchens to open this webpage and just leave it open forever, pretty much.

The thing is, I really have no idea how many resources an HTTP connection takes, what it can handle, or what amount of CPU/RAM I would need to reasonably handle X number of users. Is there something you would suggest reading to get a better understanding of this?

I was under the impression that pushing is always lighter than pulling.

It’s not solely a question of “pull vs push”. It’s also a question of how many TCP connections your server can effectively handle. Maintaining a connection is more resource intensive on the web server than opening and closing a connection.
Just some quick back-of-the-envelope calculations - 1,000 servers would be 1,000 persistent connections if they all opened websockets. However, on average, there would only be a few more than 200 connections open at any one time in a short-polling situation. And, if you needed to ease things up during peak load times, you could always extend that window to reduce the immediate load.
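The arithmetic behind that estimate, under the stated assumptions (1,000 kitchens, a 5-second polling interval, and roughly one second to serve each request):

```python
# Back-of-the-envelope concurrency estimate. All figures are the
# assumptions stated above, not measurements.
kitchens = 1000
poll_interval_s = 5       # one request per kitchen every 5 seconds
request_duration_s = 1    # rough time each request stays open
avg_concurrent = kitchens * request_duration_s / poll_interval_s
print(avg_concurrent)  # 200.0
```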

If you ever get to that point, then you’re either looking at multiple servers or will have the time to upgrade and change your architecture.

However, it’s a lot easier to scale for intermittent connections than it is for persistent connections.

Unfortunately, there’s no fixed or firm answer for this. It can only be determined empirically in the context of your actual architecture. There is no reliable mechanism to determine those requirements in the abstract. And, even that can be variable based upon the configuration of various components within your system.
(One specific example is that we had a Django ORM query that took 40 seconds to execute. By making some configuration changes to PostgreSQL, the execution time dropped to about 7 seconds. If you’re interested in reading more about that, see In Pursuit of PostgreSQL Performance - DEV Community)

Hey Ken,

So I implemented the short-polling method you suggested, and it’s working great! I was bashing my head against the wall trying to figure out SSE, but this is going to get me into production so much faster!
Sorry for so many questions, but you are so knowledgeable! I also have three extra questions, lol.
First, I just have a simple AJAX call to the server set at an interval, and it just calls infinitely - do you know of a smarter way to do this with less tax on the server?
Second, do you know what this strange [[Prototype Object]] is that comes back with my JSON call? I can’t seem to get rid of it.
Third, you’ve been so quick to answer all my questions with such great responses - it’s really saved me an immense amount of time, and I am very grateful. Do you have anything that a junior dev like me could help you with?

  setInterval(function getOrdersUpdates() {
    $.ajax({
      url: url,
      method: 'POST',
      data: {
        'csrfmiddlewaretoken': $('input[name=csrfmiddlewaretoken]').val(),
      },
      success: function(response){
        // bunch of append logic here
      },
      error: function(response){
        console.log('error', response);
      }
    });
  }, 10000)

Nope, that’s what short polling does. Unless you’re in a situation where it’s “pay per web request”, it’s really not that heavy a load on the server.

That’s not actually getting returned by the server. The server is (technically) returning a string. That string is a textual representation of a JSON object.
(Briefly, and not 100% precisely correct) When JavaScript creates an object from that string, it uses a “prototype object” as the “object template” for the instance of that object created from the string. Those template attributes are what you’re seeing here.

It’s analogous to Python’s class definition. When you create a class from a parent class, your child class has all its attributes in addition to all the attributes defined by the parent. You can think about this the same way. (With the caveat that I’m simplifying things a bit - there are some differences, but I believe the description is good enough for casual conversation. A JavaScript purist would cringe at what I’ve written here.)
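The Python side of that analogy, as a tiny illustration:

```python
# Illustrative only: attribute lookup falls back to the parent class,
# much as a JavaScript property lookup falls back to the prototype.
class Parent:
    greeting = "hello"

class Child(Parent):
    name = "child"

c = Child()
print(c.name)      # "child" - found on Child itself
print(c.greeting)  # "hello" - found via the parent, like a prototype lookup
```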

A kind offer, but I’m good thanks. I’m just a semi-retired old fart that enjoys trying to help the next generation of programmers along.


Well you’ve definitely helped me, thanks a bunch.