We recently migrated from WSGI to ASGI, and things have gone quite smoothly. I also started implementing async APIViews (via ADRF) in places where we call external APIs, the idea being that aiohttp would free up the event loop during those I/O waits.
This works well in practice, but lately I’ve been rethinking the actual benefits.
If I use a normal sync APIView with requests.get():
- Django assigns the request to a thread.
- When requests.get() is called, the GIL is released during the I/O wait.
- So other threads (handling other requests) can still make progress.
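That GIL-release behaviour can be seen with a small stdlib sketch (time.sleep stands in for requests.get(), since both release the GIL while blocked; the 0.2 s delay and thread count are illustrative):

```python
import threading
import time

# Sketch: the GIL is released during blocking I/O (simulated here with
# time.sleep), so several "request" threads overlap their waits instead
# of serializing them.
def handle_request(results, i):
    time.sleep(0.2)  # stands in for requests.get() waiting on the network
    results[i] = i

results = [None] * 5
threads = [threading.Thread(target=handle_request, args=(results, i))
           for i in range(5)]

start = time.monotonic()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start

# With the GIL released during the waits, five 0.2 s waits overlap:
# total wall time is close to 0.2 s, not 1.0 s.
print(f"elapsed: {elapsed:.2f}s, results: {results}")
```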
Now, if I use an async view with aiohttp:
- Django (via ASGIHandler) still wraps the whole request in a ThreadSensitiveContext, which assigns a dedicated thread.
- My async def view is run by the event loop, and the I/O call to aiohttp is non-blocking.
- But since there’s still a thread reserved for that request context, it’s not saving me memory or threads.
So in both cases:
- A thread is used.
- The GIL is released during I/O.
- The event loop remains unblocked.
- There’s no obvious scalability advantage.
In fact, you could argue that the async view is worse:
- It involves more thread/context switching (e.g. hopping into a thread for ORM calls).
- It still reserves a thread due to Django internals.
- It doesn’t reduce memory usage or increase concurrency under load.
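The thread hop for sync code can be observed directly. Django routes ORM calls from async views through asgiref's sync_to_async; the stdlib analogue asyncio.to_thread used in this sketch shows the same effect — the sync call runs on a worker thread, not the event-loop thread:

```python
import asyncio
import threading

# Sketch of the context switching an async view pays when it calls sync
# code (e.g. the ORM). asyncio.to_thread is used here as a stdlib
# stand-in for Django's sync_to_async machinery.
def sync_orm_call() -> str:
    # Runs on a worker thread, not the event-loop thread.
    return threading.current_thread().name

async def async_view() -> tuple[str, str]:
    loop_thread = threading.current_thread().name
    orm_thread = await asyncio.to_thread(sync_orm_call)
    return loop_thread, orm_thread

loop_thread, orm_thread = asyncio.run(async_view())
# The two names differ: each sync call hops to a worker thread and back.
print(loop_thread, orm_thread)
```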
Even in views that don’t touch the database and only call an external API via aiohttp, a thread is still reserved, just as it would be in a sync view. Given that, wouldn’t the sync version be simpler and just as efficient (or even more)?
Am I missing something?
Is there any case where async views in Django provide a real-world scalability or performance win?
I would really appreciate some input and your thoughts. Thanks in advance!
Yes. The archetypal example is a situation where one request needs to call “n” external APIs.
Let’s say you’re running some type of “aggregator” service for hotel room availability. You might want to issue API requests to all of booking.com, expedia.com, hotels.com, etc.
Those calls (and the processing of their responses) can run concurrently: issue them all, then use asyncio.gather to wait for them to complete before returning the response.
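A minimal sketch of that aggregator pattern, with asyncio.sleep standing in for the aiohttp calls (the provider names, delays, and response shape are all illustrative):

```python
import asyncio

# Each coroutine stands in for one aiohttp request/response cycle.
async def fetch_availability(provider: str, delay: float) -> dict:
    await asyncio.sleep(delay)  # simulated network wait
    return {"provider": provider, "rooms_free": 1}

async def aggregate() -> list:
    # All three waits overlap on the one event loop; total time is
    # roughly max(delays), not their sum.
    return await asyncio.gather(
        fetch_availability("booking.com", 0.2),
        fetch_availability("expedia.com", 0.2),
        fetch_availability("hotels.com", 0.2),
    )

results = asyncio.run(aggregate())
print(results)
```

asyncio.gather preserves the order of its arguments, so the aggregated results come back in a predictable order regardless of which provider answers first.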
<opinion>
Yes. The benefits of going async tend to be overstated and address a relatively small percentage of deployment environments.
</opinion>
Thank you for the other link and explanation, very helpful. There you also mention that async only has an advantage for applications performing multiple I/O operations at the same time.
I also can’t think of another case where it would improve performance. Is there one that isn’t incredibly niche?
In the most general case, you can consider your code to be in one of two states, “Processing” or “Waiting”. The “Waiting” state can be broadly divided into two sub-states - “Waiting for CPU” or “Waiting for IO”. It’s only the “Waiting for IO” state that exhibits a performance or scalability benefit from an async environment, and the degree of that benefit depends upon the relative amount of time spent in that state compared to the time required for the remainder of the process.
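One way to make that concrete is a back-of-the-envelope throughput model (the numbers are purely illustrative, and it deliberately ignores thread-pool sizing, GIL contention, and per-thread memory):

```python
# Simple model: a request spends `cpu` seconds processing and `io` seconds
# waiting on I/O. A sync worker is tied up for the whole request; an async
# worker can overlap many requests' I/O waits, so only CPU time serializes.
cpu, io = 0.01, 0.29  # a 300 ms request, ~97% of it waiting on I/O

sync_throughput = 1 / (cpu + io)   # requests/sec per worker, sync
async_throughput = 1 / cpu         # requests/sec per worker, async (upper bound)

print(f"sync: {sync_throughput:.1f} req/s, async: {async_throughput:.1f} req/s")
```

Shrink `io` toward zero and the two figures converge, which is the point above: the benefit exists only in proportion to time spent in the “Waiting for IO” state.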
Again, keep in mind that all this is being written by someone who says -
and
This is another one of those situations that I’ve actually seen the mindset come full circle.
I’ve worked with “cooperative processing”-type systems before - going all the way back to having programmed for Windows 3.0, GEM, the original MacOS, Netware 2.2, DesqView, and probably one or two others that escape me at the moment.
It was a huge step forward when the 386 made true preemptive multitasking reasonable (not to forget the Atari 800 and MP/M, but those are special cases), and you stopped having to worry about yielding the CPU on a periodic basis - let the operating system do that.
In a lot of cases, this push to go back to a cooperative processing model feels like a huge step backward. Call it “misplaced or premature optimization”. The hardware that really demanded this is pretty much obsolete.
Again, I acknowledge situations where there’s definitely value to doing it. I just don’t see where they’re as common as what some people seem to believe.