Using StreamingHttpResponse over HTTPS

Hi, I’m wondering if anyone can help me with a problem I’ve been stuck on for weeks.

I’m building a Django API (using Django REST Framework) that streams a response from the OpenAI API, and I’m having trouble getting it to work in production. I want it to start streaming as soon as it receives the first chunk.

Right now, everything works as expected on localhost, as well as on an AWS EC2 instance (running Ubuntu) over HTTP. However, when we move the API to HTTPS, it waits until the response is fully generated before returning anything.

Here is the code for the view responsible (assume generate_thing returns a streaming response from OpenAI):

import json

from django.http import StreamingHttpResponse

def stream(response):
    # Yield each text chunk as soon as OpenAI sends it.
    for chunk in response:
        yield chunk["choices"][0]["text"]

def example(request):
    data = json.loads(request.body.decode("utf-8"))
    response = generate_thing(data)
    return StreamingHttpResponse(stream(response), content_type="text/event-stream")

I also explored using Django Channels to create a websocket, at the recommendation of someone on Stack Overflow. While that worked, it could only handle one request at a time, which is obviously not ideal for a production-grade app.

Here’s the code from that attempt:

import json

from channels.generic.websocket import WebsocketConsumer

class ExampleConsumer(WebsocketConsumer):
    def connect(self):
        self.accept()

    def stream(self, response):
        # Send each chunk to the client as it arrives.
        for chunk in response:
            text = chunk["choices"][0]["text"]
            self.send(text_data=text)

    def receive(self, text_data):
        data = json.loads(text_data)
        response = generate_thing(data)
        self.stream(response)
Keep in mind that I’ve been using Django for only a month so I’m not too familiar with the ins and outs and I’m learning as I’m building. Any insight would be extremely helpful.

In general, the claim that it can only handle one request at a time is not accurate - certainly not globally or universally true.

A websocket should not limit multiple people from connecting concurrently and issuing requests. It may help if you provided more details about your environment and the client code you are using for this websocket.

If you’re talking about issuing multiple requests concurrently through the same websocket, then you have different issues to address, such as keeping the data segregated between those requests.

This is what the connection to the websocket currently looks like on the client side of my code:

const connection = new WebSocket('path to API');
connection.addEventListener('message', e => addData(e.data));
connection.onclose = () => this.isLoading = false;

For the websocket setup on the Django side, I’m using Channels and Daphne. I mainly followed the docs for the setup:


# settings.py
ASGI_APPLICATION = "my_app.asgi.application"


# asgi.py
import os

from django.core.asgi import get_asgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "my_app.settings")
django_asgi_app = get_asgi_application()

from channels.routing import ProtocolTypeRouter, URLRouter
from channels.auth import AuthMiddlewareStack
from channels.security.websocket import AllowedHostsOriginValidator
import my_app.routing as routes

application = ProtocolTypeRouter({
    "http": django_asgi_app,
    "websocket": AllowedHostsOriginValidator(
        AuthMiddlewareStack(URLRouter(routes.websocket_urlpatterns))
    ),
})

And yes, ideally this websocket would be able to handle multiple concurrent requests. Is that possible?
I’m also new to Websockets lol

Yes, it’s quite possible. However, it’s going to be up to you to multiplex the requests and responses. There’s nothing within a websocket frame itself that identifies which request a piece of data is responding to, so it’s up to you to define and implement an internal protocol within that websocket connection.
For example, in one of the systems I work on, every websocket frame is a JSON object with at least two keys, “app” and “data”. Each app can then define requirements for additional keys, such as “req” for a request number and “seq” for a sequence number.
Our different worker modules in Channels correspond to the “app” key and generate the responses to be returned to the browser through the consumer, populating the objects as appropriate.
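As a rough sketch of what such an internal protocol could look like (the key names “app”, “req”, “seq”, and “data” follow the convention described above; the helper functions are illustrative, not part of Channels):

```python
import json

def encode_frame(app, req, seq, data):
    """Wrap a payload in the envelope used to multiplex one websocket."""
    return json.dumps({"app": app, "req": req, "seq": seq, "data": data})

def decode_frame(raw):
    """Parse an incoming frame and validate the required keys."""
    frame = json.loads(raw)
    for key in ("app", "data"):
        if key not in frame:
            raise ValueError(f"frame missing required key: {key}")
    return frame

# The client can then match each chunk back to the request that produced it:
frame = decode_frame(encode_frame("completions", req=1, seq=0, data="Hello"))
```

With an envelope like this, the consumer dispatches on “app” and the browser uses “req”/“seq” to reassemble each response stream in order.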

On the server side of this, you’d probably want to either implement an async consumer or else off-load the generation of this data stream to a separate async worker process.
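To illustrate the idea with plain asyncio (no Channels dependency): each request gets its own task, so one slow generation doesn’t block the others. Here `generate_chunks` is a hypothetical stand-in for the OpenAI stream; in Channels you would do the equivalent inside an async consumer’s receive handler.

```python
import asyncio

async def generate_chunks(prompt):
    """Hypothetical stand-in for a streaming OpenAI response."""
    for word in prompt.split():
        await asyncio.sleep(0)  # yield control, as a network read would
        yield word

async def handle_request(req_id, prompt, send):
    """Stream one request's chunks, tagging each with its request id."""
    seq = 0
    async for chunk in generate_chunks(prompt):
        await send({"req": req_id, "seq": seq, "data": chunk})
        seq += 1

async def main():
    received = []

    async def send(frame):
        received.append(frame)

    # Two concurrent "requests" over the same connection:
    await asyncio.gather(
        handle_request(1, "first request", send),
        handle_request(2, "second request", send),
    )
    return received

frames = asyncio.run(main())
```

The interleaved frames are exactly why the request-id tagging from the protocol above matters: without it, the client can’t tell which stream a chunk belongs to.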

Before doing all this, you might also want to verify that these requests are sufficiently “parallelizable” to make it worthwhile to convert them to async requests. If they’re heavily CPU-bound, you’re not likely to see any benefit unless you offload those requests to separate systems. Otherwise, you may be better off just queuing the requests and handling them sequentially.

Hi Adam,

I’m running into the same issue where I can’t get a streaming response when deploying to a production environment.

Running locally, everything works fine; I get my streamed data as expected. But as soon as I move to a production environment, all data gets buffered before being sent out. It seems exactly like your issue.

Did you manage to solve the issue or figure out what was happening?

Kind regards,