Evaluating the performance of async views - do they suck?

Hello fellow Djangonistas!

I have posted this also on /r/django but it was suggested to take it here. So here I am. :wink:

I am currently trying to evaluate the performance of async Django views in comparison to traditional sync Django views.

For this, I created a simple project that has two views: one traditional sync view and one async view.

Both views make two simple queries to the database, one query fetching a list of items and the second query fetching the details of one item.

I used asyncpg in the async view and psycopg2 in the sync view (skipping the Django ORM for a fairer comparison, because there is no async ORM yet).
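To give a rough idea, the two views look something like this (a simplified sketch; the table names, queries and connection string are placeholders, not the exact code from the project):

```python
# views.py -- simplified sketch, not the exact project code
import asyncpg
import psycopg2
from django.http import JsonResponse

DSN = "postgresql://postgres:postgres@db:5432/postgres"  # placeholder

def sync_view(request):
    # Sync view: psycopg2, two queries, blocking the worker while waiting
    conn = psycopg2.connect(DSN)
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT id, name FROM items LIMIT 20")
            items = cur.fetchall()
            cur.execute("SELECT id, name, description FROM items WHERE id = %s", [1])
            detail = cur.fetchone()
    finally:
        conn.close()
    return JsonResponse({"items": items, "detail": detail})

async def async_view(request):
    # Async view: asyncpg, the same two queries, awaiting instead of blocking
    conn = await asyncpg.connect(DSN)
    try:
        items = await conn.fetch("SELECT id, name FROM items LIMIT 20")
        detail = await conn.fetchrow(
            "SELECT id, name, description FROM items WHERE id = $1", 1
        )
    finally:
        await conn.close()
    return JsonResponse({
        "items": [dict(r) for r in items],
        "detail": dict(detail) if detail else None,
    })
```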

Then I ran the project once in Gunicorn with the default sync workers, to see how long the views take when running in a synchronous Gunicorn process.

I also ran the project in Gunicorn, but using the Uvicorn async workers, to see how long the views take when running in an asynchronous Uvicorn process.
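The two server setups boil down to something like this (a sketch; the module paths and worker count are assumptions, not my exact settings):

```python
# gunicorn.conf.py -- sketch; module paths and worker count are assumptions
bind = "0.0.0.0:8000"
workers = 4

# Setup 1 (sync): default sync workers serving the WSGI app, started with
#   gunicorn -c gunicorn.conf.py myproject.wsgi
# worker_class = "sync"          # the default, shown here for contrast

# Setup 2 (async): Uvicorn workers inside Gunicorn serving the ASGI app,
# started with
#   gunicorn -c gunicorn.conf.py myproject.asgi:application
worker_class = "uvicorn.workers.UvicornWorker"
```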

(You can also look at the server output when you click on the "Logs" button in the widget on the lower right of the page. You need to be authenticated with your GitHub account to see the logs.)

I then ran some load tests on my local machine:

This is the sync view running in out-of-the-box Gunicorn:

This is the same for the async view running in Uvicorn workers in Gunicorn.

(This is how the Uvicorn documentation says Uvicorn should be run in production.)

Both are running in a Docker container, connecting to Postgres in another Docker container, on my AMD CPU with 8 cores.
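For reference, the load test was essentially just hammering the view endpoint with concurrent users; a minimal Locust-style sketch (the endpoint path is a placeholder, and the sync and async setups were tested in separate runs):

```python
# locustfile.py -- minimal sketch of the kind of load test used; the endpoint
# path is a placeholder
from locust import HttpUser, between, task

class WebsiteUser(HttpUser):
    wait_time = between(0.1, 0.5)  # small pause between requests per simulated user

    @task
    def hit_view(self):
        # point this at the sync or async view depending on which setup is running
        self.client.get("/items/")
```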

So it seems that the async setup cannot handle as many concurrent users as the sync setup. Why is this the case?

Shouldn't the async setup be able to serve more concurrent users? Is Uvicorn a bad choice as an async server for Django? Do I need to set something in the Dockerfile so the container can use all CPU cores?

2 Likes

Hey, I'll throw a few random suggestions here:

  1. Check how many simultaneous connections your system can open. If you're running Locust on macOS you might be capped by its file descriptor limit. Check the limit via ulimit -n; you can increase it via, for example, ulimit -n 1024.

  2. If you want to utilize your 8 cores, then you need more than four workers configured for Gunicorn. Set the number to 14-16 and see how it goes (rough config sketch below).
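Something along these lines (a sketch; the usual Gunicorn rule of thumb is (2 x cores) + 1, which lands in the same ballpark as 14-16 on an 8-core machine):

```python
# gunicorn.conf.py -- sketch: scale the worker count to the machine instead of
# hardcoding 4; (2 x cores) + 1 gives 17 on an 8-core machine
import multiprocessing

workers = multiprocessing.cpu_count() * 2 + 1
worker_class = "uvicorn.workers.UvicornWorker"  # or leave the default sync workers
```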

2 Likes

I will offer the opinion that I don't accept that premise as being universally true. The question becomes one of identifying where the bottlenecks reside.

If your views are primarily CPU-constrained, I'm not sure that going async is going to provide any benefit at all. (Keep in mind that quick queries in Postgres are likely going to be served from Postgres' cache and not involve physical I/O - and therefore themselves also be CPU-constrained.)

If you wanted perhaps a more "realistic" demonstration of async views providing better throughput (rather than just greater scalability), you could have your view make 2-3 asynchronous HTTP calls to a different server, running a view that returned some data after, say, a 2-3 second delay.
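Roughly along these lines, as a sketch (httpx is just one option for the async HTTP calls; the URLs and timings here are made up):

```python
# sketch of an async view fanning out HTTP calls concurrently; httpx and the
# URLs are assumptions, not part of the original experiment
import asyncio
import httpx
from django.http import JsonResponse

SLOW_SERVICE_URLS = [
    "http://other-server/slow/1/",  # imagine each of these takes ~2-3 seconds
    "http://other-server/slow/2/",
    "http://other-server/slow/3/",
]

async def fan_out_view(request):
    async with httpx.AsyncClient() as client:
        # all three requests run concurrently, so the view waits ~3 seconds in
        # total instead of ~9; a sync view would wait for them one after another
        responses = await asyncio.gather(*(client.get(url) for url in SLOW_SERVICE_URLS))
    return JsonResponse({"results": [r.json() for r in responses]})
```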

(Side note: If you haven't seen Amber Brown's keynote from DjangoCon US 2019, you might find it interesting.)

2 Likes

Thanks for your suggestions. My ulimit is already high (I'm on Linux), and having only 4 workers shouldn't make that much of a difference in an async environment. But I will try the load tests with more workers!

PS: Sorry for the deleted messages, I am just too stupid for Discourse :wink:

Hi Ken!

Thanks for your comment. Yes, having 2-3 HTTP requests in the view should show huge differences between async and sync processing. I just wanted to model the experiment on the projects I have developed over the last couple of years.

All the projects I worked on are basically just fetching data from a database and then rendering it on a website. So I wanted to see the effects on those kinds of sites.

I think async is really a special-purpose thing that should not be applied just "because it is there", but only be used in views (or middlewares) that do network requests.

1 Like

Absolutely agree with this - if that includes counting database queries as network requests - assuming the database engine is on a different system than the one running your Django app, and we get to the point where we have a fully async database ORM.

Good points. Async ≠ super performance. Async is a mechanism that allows waiting on multiple I/O operations efficiently. So if your process isn't I/O-bound but simply slow, async isn't the solution to the problem.

1 Like

Yes.

But my thinking was that DB queries are I/O because they read something from a disk somewhere. It seems, though, that Postgres is doing a fantastic job of giving us the data very fast :slight_smile:

DB queries are I/O indeed, but the Django ORM isn't async yet, so all DB queries are still sync.
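For now you'd have to wrap ORM calls to await them from async code, roughly like this (Item is a placeholder model, not from this project):

```python
# calling the (still sync) ORM from an async view via sync_to_async;
# Item is a hypothetical model used only for illustration
from asgiref.sync import sync_to_async
from django.http import JsonResponse

from myapp.models import Item  # placeholder

async def items_view(request):
    # the query still runs synchronously (in a thread), it is just awaitable here
    items = await sync_to_async(list)(Item.objects.values("id", "name")[:20])
    return JsonResponse({"items": items})
```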

DB queries aren't all "I/O indeed". With small enough tables and a large enough cache, PostgreSQL will end up keeping everything memory-resident. With any reasonably sized memory allocation, tables like ContentType (used pretty much everywhere) will always remain in the buffers.

I've got a project using a (roughly) 2 GB database. With 4 GB allocated to memory buffers, I/O ops per second trend toward zero very quickly.

Oh yeah, totally valid point! Thank you, Ken :wave: