It’s a really funny thread 
Questions: Does Django provide an AsyncTestCase class?
No. It doesn’t.
How can you write async tests in Django right now?
For now, the best way to test Django code is still django.test.TestCase. Under the hood, Django wraps async test methods with async_to_sync, and it kind of works.
Django relies heavily on this approach during tests.
Because of this, you can’t just use IsolatedAsyncioTestCase out of the box without a lot of extra work. You can estimate the amount of work just by looking at that class: every async_to_sync call would need to be rewritten to be fully async. That’s a huge effort, and it doesn’t really make sense, because the ORM is still synchronous and runs in threads under the hood.
So the current practical setup is:
- Use IsolatedAsyncioTestCase to test pure async code that talks to Redis, Elasticsearch, etc. via async clients (a sketch of this follows the TestCase example below).
- Use django.test.TestCase for views and anything involving the ORM.
It usually looks like this:
from django.test import TestCase


class Test(TestCase):
    @classmethod
    def setUpTestData(cls):
        ...

    async def test_foo(self):
        data = foo()
        assert data == "bar"
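For the pure async side, a minimal sketch with IsolatedAsyncioTestCase could look like the following; the redis-py asyncio client, host, and key names are just placeholders for whatever async client you actually use:

import unittest

from redis.asyncio import Redis


class RedisClientTest(unittest.IsolatedAsyncioTestCase):
    async def asyncSetUp(self):
        # Plain unittest machinery: no Django test database, no ORM involved.
        self.redis = Redis(host="localhost", port=6379)

    async def asyncTearDown(self):
        # aclose() is redis-py 5+; older versions use close() instead.
        await self.redis.aclose()

    async def test_set_get(self):
        await self.redis.set("key", "value")
        self.assertEqual(await self.redis.get("key"), b"value")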
This isn’t about test performance, but it covers about 99% of real needs.
BTW: TestCase creates a new event loop for each test, so you must recreate global async clients (Redis, Elasticsearch, etc.) for every test, because they’re bound to a specific event loop.
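For example, a minimal sketch assuming redis-py’s asyncio client (the client, host, and test body are illustrative):

from django.test import TestCase

from redis.asyncio import Redis


class CacheTest(TestCase):
    def setUp(self):
        # Recreate the client per test instead of reusing a module-level one:
        # its connections get bound to the event loop of whichever test used
        # it first, and TestCase gives every async test a fresh loop.
        self.redis = Redis(host="localhost", port=6379)

    async def test_ping(self):
        self.assertTrue(await self.redis.ping())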
About performance and async in Django
Regarding performance
I don’t fully agree with all of Ken’s points, but he does make a valid argument about Django’s design. If you need to handle 10k RPCs concurrently and you’re looking for an async framework to benefit from parallelism, Django is not the right choice. Not yet.
The heavy use of async_to_sync introduces a significant performance penalty. Our asynchronous API handles about 700 RPS, and we can’t push more traffic because of performance issues. To see how bad the situation was, I switched the API from ASGI to WSGI, and the 99th-percentile latency improved significantly.
We rewrote most of the middleware to be fully async (we can’t rewrite all the default middleware yet), and the situation improved a bit. However, we still end up with one thread per request, which we can’t avoid because of the ORM. This severely hurts performance and increases CPU usage.
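As a rough illustration of what rewriting a middleware to be fully async looks like, here is a sketch following the async_capable/sync_capable pattern from Django’s middleware docs; the class name and timing logic are made up for the example:

import time


class AsyncTimingMiddleware:
    async_capable = True   # can run directly on the event loop under ASGI
    sync_capable = False   # Django adapts it if the app runs under WSGI

    def __init__(self, get_response):
        self.get_response = get_response

    async def __call__(self, request):
        started = time.monotonic()
        # Awaiting keeps the request on the event loop instead of hopping to a
        # thread, which is what Django does to run sync middleware under ASGI.
        response = await self.get_response(request)
        response["X-Elapsed-Ms"] = str(int((time.monotonic() - started) * 1000))
        return response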
I don’t share the same optimism about async as you do. Async applications are also hard to debug, difficult to profile, and challenging to maintain, especially in large, high-traffic projects.
Sometimes async is worth it, but without an async ORM, those benefits don’t really apply to Django.
I would have no complaint with there being an “async Django” - having all the features of current Django except built with an async core - with the understanding that it would be a separate code base that does not affect the performance of “sync Django”.
@KenWhitesell Thanks for saying that
100% true
BTW if you’re interested in an async ORM, you can check out (or join) this project:
https://github.com/Arfey/django-async-backend