Actually, as written, this isn’t exactly correct.
At the most fundamental level, an ASGI event loop runs everything in one thread. It relies on the code it runs not tying up that thread any longer than absolutely necessary.
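To make that concrete, here's a minimal sketch (the view names and the `slow_blocking_lookup` helper are hypothetical) of what "tying up that thread" looks like, and how asgiref's `sync_to_async` moves the blocking work onto a worker thread so the event loop stays free:

```python
import time

from asgiref.sync import sync_to_async
from django.http import HttpResponse

def slow_blocking_lookup():
    # Stand-in for a blocking library call, driver, or legacy helper.
    time.sleep(2)
    return "result"

async def bad_view(request):
    # Calling the blocking function directly stalls the event loop's thread
    # for the full two seconds, holding up every other request it serves.
    value = slow_blocking_lookup()
    return HttpResponse(value)

async def better_view(request):
    # sync_to_async hands the blocking work to a worker thread, so the
    # event loop stays free while we wait for the result.
    value = await sync_to_async(slow_blocking_lookup)()
    return HttpResponse(value)
```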
So no, you won’t have better throughput overall until you reach the point where enough of the requests you’re handling can be “event driven” instead of “process driven”. Handling 4 concurrent requests in 4 separate processes is going to take less time than handling those same 4 concurrent requests in a single thread if those requests are more CPU-bound than IO-bound.
You’ll only see a benefit if your views spend enough time waiting for data rather than processing it.
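As an illustration (these views are hypothetical), the first view below is the kind that benefits from async, the second is the kind that doesn't:

```python
import asyncio

from django.http import JsonResponse

async def io_bound_view(request):
    # While this coroutine is awaiting, the event loop is free to service
    # other requests -- this is the "waiting for data" case.
    await asyncio.sleep(1)  # stand-in for a slow external API call or query
    return JsonResponse({"status": "done waiting"})

async def cpu_bound_view(request):
    # This never awaits, so it holds the event loop's thread for the whole
    # computation and nothing else gets serviced in the meantime.
    total = sum(i * i for i in range(10_000_000))
    return JsonResponse({"total": total})
```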
It will run those views in a separate thread from the thread running the event loop. (See the Asynchronous support page in the Django documentation.)
This is not correct. A single-threaded, multi-process runtime is going to handle a single request faster than a runtime carrying the overhead of an event loop. When you’re writing async code, every await statement releases control of the CPU back to the event loop. That creates overhead compared to keeping control of the CPU, possibly in a “busy-wait”, while your IO request completes.
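A rough way to see that overhead (this is an illustration, not a benchmark) is to compare a plain function call with a coroutine that yields to the event loop on every call:

```python
import asyncio
import time

def sync_noop():
    return 42

async def async_noop():
    await asyncio.sleep(0)  # suspension point: control goes back to the loop
    return 42

def time_sync(n=100_000):
    start = time.perf_counter()
    for _ in range(n):
        sync_noop()
    return time.perf_counter() - start

async def time_async(n=100_000):
    start = time.perf_counter()
    for _ in range(n):
        await async_noop()
    return time.perf_counter() - start

print(f"plain calls:   {time_sync():.3f}s")
print(f"awaited calls: {asyncio.run(time_async()):.3f}s")
```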
That really does depend upon your application. There’s no doubt that there is a set of applications, conditions, and circumstances that are greatly enhanced by an async approach.
However, I maintain that those situations do not exist for 99+% of all Django development being done. No, I don’t have any proof of that. I’m not aware of any survey or research to prove or disprove it. But the number of sites that deal with [“Facebook”, “X”, “Instagram”, “Amazon”, “Google”]-levels of traffic is exceedingly small. The number of sites that may aggregate data from multiple sources where parallel web requests or database queries would provide a tangible benefit is likely to be much larger - but I think that is still relatively small.
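For that aggregation case, a sketch might look like this (it assumes the third-party httpx library and uses made-up upstream URLs):

```python
import asyncio

import httpx
from django.http import JsonResponse

UPSTREAM_URLS = [
    "https://example.com/api/a",
    "https://example.com/api/b",
    "https://example.com/api/c",
]

async def aggregate_view(request):
    async with httpx.AsyncClient() as client:
        # The upstream calls overlap, so the view takes roughly as long as
        # the slowest one instead of the sum of all three.
        responses = await asyncio.gather(
            *(client.get(url) for url in UPSTREAM_URLS)
        )
    return JsonResponse({"results": [r.json() for r in responses]})
```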
I think your first “test” is to evaluate your code and your project along with your (realistic) projections of the level of activity your project will see.
Again, async does not improve the throughput of CPU-bound tasks. It increases throughput in those areas where the tasks are otherwise waiting for IO to complete by allowing the CPU to work on other things.
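A quick demonstration of that distinction (again, an illustration rather than a benchmark): gathering IO waits overlaps them, while gathering CPU-bound coroutines doesn't, because only one coroutine runs at a time on the loop's single thread.

```python
import asyncio
import time

async def io_task():
    await asyncio.sleep(0.5)  # waiting: the loop is free to run other tasks

async def cpu_task():
    sum(i * i for i in range(5_000_000))  # computing: the loop is blocked

async def timed(label, factory):
    start = time.perf_counter()
    await asyncio.gather(*(factory() for _ in range(4)))
    print(f"{label}: {time.perf_counter() - start:.2f}s")

asyncio.run(timed("4 IO-bound tasks", io_task))    # ~0.5s, not ~2s
asyncio.run(timed("4 CPU-bound tasks", cpu_task))  # ~4x a single task's time
```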