Understanding a dev/production channels/async setup with daphne and uvicorn

Hello,

Long-time Django user but relatively short-time user of async tech here. I’m setting up an asynchronous app with websockets for the first time in a while. I’d like to:

  1. Use daphne in development, for the great runserver integration.
  2. Use uvicorn in production, because I’m already using gunicorn.

Apart from the high-level question of “is this a sensible setup?”, I’m also wondering what to do with Daphne in production. Does keeping the library around and the “daphne” app in INSTALLED_APPS cause any problems (performance or otherwise)? Should I go to the trouble of making sure it’s only installed and enabled in dev, or is it normal to leave it there, unused?

Edit: After reading a bit more I’m now wondering if it’s simpler to ditch daphne altogether and just run:

python -m uvicorn myproject.asgi:application --reload

in dev. So I’d also be curious to hear whether that’s the recommended answer.

I don’t see anything wrong with your approach.

I’m not sure what effect having it in your installed apps would have during startup, but it’s going to have a minimal effect while the system is up and running. (Searches for things like templates and static files would be looking through those package directories when they don’t need to. I’m guessing it’s possible that the effect would be measurable, but my gut tells me it’s not going to be noticeable.)

I use Daphne in both dev and production, but only for the websocket handling.

I use uwsgi for running the “Django” side of things. Everything “channels-related” is handled by Daphne. I then let nginx figure out where to direct the incoming requests.

I’m sure you can do the same with gunicorn / uvicorn or whatever wsgi / asgi containers you wish to use.
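
To sketch the shape of it in nginx terms (purely illustrative; the paths, ports, and socket names below are assumptions, not a real config):

server {
    listen 80;
    server_name example.com;

    # nginx serves static assets itself
    location /static/ {
        alias /srv/app/static/;
    }

    # websocket traffic goes to Daphne
    location /ws/ {
        proxy_pass http://127.0.0.1:8001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # everything else goes to the WSGI server
    location / {
        include uwsgi_params;
        uwsgi_pass unix:/run/app/uwsgi.sock;
    }
}

With gunicorn you’d swap the uwsgi_pass location for a proxy_pass to wherever gunicorn is listening; the routing idea is the same.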

Hey Cory.

Like Ken, I use Daphne in production quite happily, with nginx routing to gunicorn (with WSGI) for the majority of sync traffic.

If Daphne is installed but you’re not using it, your venv will be bigger, by one Twisted, which may or may not affect you, but you’ll also incur a small import cost, as the Twisted reactor will get some minimal (pre)initialisation. If I were not going to use Daphne, I wouldn’t install it in my production venv.

I myself wouldn’t use the uvicorn incantation you mention in development, as it doesn’t have runserver’s niceties, such as staticfiles integration, but sure, you can do that.

It’s a bit of a choose-your-flavour thing, really.

HTH

Thanks for the responses, both! Running Daphne in combination with gunicorn sounds like a nice option for VPS-based deployments. As best I can tell, that setup is trickier to replicate in a PaaS environment, where you don’t have the same fine-grained control over URL routing at the web server layer (though let me know if I’m wrong about that!).

For the time being I’ve pushed the daphne dependency into a dev-requirements file, and then added uvicorn as a production requirement. Then the only hack is to conditionally add daphne to INSTALLED_APPS based on the DEBUG setting.

if DEBUG:
    # In debug mode, add daphne to the top of INSTALLED_APPS; it needs to come
    # before django.contrib.staticfiles so its async-capable runserver takes over
    INSTALLED_APPS.insert(0, "daphne")
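
The requirements split then looks roughly like this (file names illustrative):

# requirements.txt (production)
gunicorn
uvicorn

# requirements-dev.txt
-r requirements.txt
daphne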

So far all seems to be working well!

FWIW I’ve also run Daphne in production for ASGI-only services (for years) without problems.

Sorry if this is a dense question, but what are the pros of a hybrid ASGI/WSGI setup, as opposed to ASGI-only, in the context of a single site/project? Or is there any reading I can do on the subject?

For me, WSGI is utterly known. The scaling patterns, the errors, the edge cases. Everything.

ASGI is still young. We’re still learning. Django’s support is still evolving. Unknowns do come up.

For me, I’d still rather keep most work in the known sync WSGI environment, and leverage ASGI where I need it. (I like my sleep.)

Nonetheless I’ve run ASGI-only services without big issues, so don’t be put off from doing so.

It’s actually not - far from it, as a matter of fact.

In our case, it’s as much philosophical as anything else.

(I’m sure Carlton has a much better handle on the technical aspects of this - I’m one of those who assembles things with a sledgehammer. I just keep pounding until it does what I want.)

Some of the factors that have influenced our approach:

  • I’m an old sys-admin at heart. I fully embrace the “Unix philosophy” of “Do one thing and do it well.” In this case, serving web pages and managing websocket traffic are two separate and independent features of our application.

  • We’re very comfortable with nginx, and know how to make it do what we want. Nginx is the endpoint for all URLs associated with a site. It’s nginx’s responsibility to serve static and media files, along with proxying http and ws to the internal handlers. Each one of those functions is handled by a separate location within the nginx configuration.

  • We believe (conjecture, untested) that a web site serving pages through uwsgi can handle significantly more users requesting pages than an individual Daphne process can handle servicing websockets. This means we may want to be able to scale out the Daphne components independently of the Django pieces. (In reality, we’re nowhere near the point of needing to do this, nor do we foresee that becoming necessary anytime in the near future.)

  • We prefer having the ability to manage those Daphne processes separately. We prefer being able to stop/update/start them without likewise affecting the Django pages (see the unit-file sketch after this list). (Side note: we also run a number of worker processes that are managed separately as well. Those could also be deployed on separate systems.)

  • We have one project that is dockerized. We work on the principle of “one process per container”, again leading toward a separation of functionality.
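
To make the “manage them separately” point concrete, a templated systemd unit for Daphne gives the idea. This is only a sketch; every path and name in it is an assumption, not our real config:

[Unit]
Description=Daphne ASGI server (instance %i)
After=network.target

[Service]
WorkingDirectory=/srv/app
ExecStart=/srv/app/venv/bin/daphne -u /run/daphne/daphne%i.sock myproject.asgi:application
Restart=on-failure

[Install]
WantedBy=multi-user.target

Saved as daphne@.service, that lets you run daphne@0, daphne@1, etc. side by side, and restart them without touching the uwsgi processes serving the pages.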

Just wanted to say thank you to both of you for the thoughtful answers on this. They have been super helpful!

Hello and thanks for the info so far! In production (Ubuntu + Nginx) I am using gunicorn for WSGI and daphne for ASGI.

Users ask the server via websocket to do some intense tasks (generating audio on the fly). I would like to have more than one instance/worker of daphne. What’s the best approach here? I am currently following the deployment docs: Deploying — Channels 4.0.0 documentation

Thanks!

By “intense tasks”, do you mean CPU-bound operations? Are you looking at handling many of these concurrently from multiple users? How long does any one task typically last?

My suggestions are going to differ depending upon the answers to these, but in general, my recommendation is going to fall along the lines of getting these tasks running outside your Daphne instance - either as Channels external workers or Celery tasks.
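
As a rough sketch of the Channels-worker route (all of the names below, including the render_audio stub, are illustrative, not drop-in code):

# consumers.py - the websocket consumer hands the heavy work to a worker process
from channels.generic.websocket import AsyncJsonWebsocketConsumer


class AudioConsumer(AsyncJsonWebsocketConsumer):
    async def receive_json(self, content, **kwargs):
        # Don't generate audio in the websocket process; queue it on a
        # named channel that a separate worker process listens on.
        await self.channel_layer.send("audio-tasks", {
            "type": "generate.audio",  # dispatched to generate_audio() on the worker
            "params": content,
            "reply_channel": self.channel_name,
        })

    async def audio_ready(self, event):
        # Called when the worker sends an "audio.ready" message back to us.
        await self.send_json({"url": event["url"]})


# workers.py - run with: python manage.py runworker audio-tasks
from asgiref.sync import async_to_sync
from channels.consumer import SyncConsumer


def render_audio(params):
    # Stand-in for the real CPU-heavy synthesis; returns a URL to the result.
    raise NotImplementedError


class AudioTaskConsumer(SyncConsumer):
    def generate_audio(self, message):
        url = render_audio(message["params"])
        # Push the result back to the originating websocket consumer.
        async_to_sync(self.channel_layer.send)(message["reply_channel"], {
            "type": "audio.ready",
            "url": url,
        })

In asgi.py you’d route the named channel to the worker consumer with ChannelNameRouter, i.e. "channel": ChannelNameRouter({"audio-tasks": AudioTaskConsumer.as_asgi()}) inside your ProtocolTypeRouter. The websocket consumer returns immediately, and the worker processes (which you can scale independently of Daphne) push results back over the channel layer when they finish.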