Are calls to the SAME view synchronous?

Hi all, I am trying to understand how django views work if there are multiple simultaneous calls.

The short story is that if I make two different views and call them (roughly) simultaneously, they seem to execute in parallel. But if I call the same view twice simultaneously, the calls seem to execute in series.

In more detail: if I create some silly toy views like the below:

import time

from django.http import HttpResponse

def three_second_response(request):
    # Wait three seconds, then send a message.
    time.sleep(3)
    return HttpResponse("You waited three seconds")

def four_second_response(request):
    # Wait four seconds, then send a message.
    time.sleep(4)
    return HttpResponse("You waited four seconds")

and then I open both views at different URLs, they load in parallel - the 4-second view loads 1 second after the 3-second view; the total time is 4 seconds. However if I open the 3-second view two times simultaneously, the process takes a total of 6 seconds. It looks like the two calls to three_second_response() are executing synchronously.

So it looks like the default behaviour is async (which is great), but sync if it’s the same view?

I am puzzled by this behaviour and can’t find it documented - is this behaviour actually the case? If so, why is it this way? And does it have to be?

Thanks for any help!

Edit: a little more testing shows that if I put the three_second_response() view at two different URL endpoints, it CAN be called asynchronously via those two URLs - i.e. if there are two URL endpoints then there can be two (and only two) simultaneous executions. So maybe this is about routing? But I still don’t understand why it is so.

How were you running your test? The built-in development server, ./manage.py runserver, runs with Python threads by default. I think this can make testing the scenarios you described complicated.

Django applications are synchronous because the underlying protocol, WSGI, is a synchronous protocol. The part that can change that story is the web application server that runs your application.
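To make “synchronous protocol” concrete, here’s a minimal WSGI application (a sketch, not Django internals): the server calls the function and blocks until the full response comes back, so any concurrency has to come from the server running multiple workers or threads, not from the protocol itself.

```python
# Minimal WSGI application - a sketch, not Django code. The server calls
# this function once per request and blocks until it returns; there is
# no await point anywhere, which is what makes WSGI inherently synchronous.
def application(environ, start_response):
    body = b"Hello, WSGI"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]  # WSGI responses are iterables of bytes
```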

If, for instance, you use gunicorn to run your app, the number of concurrent requests that your application can handle is determined by the number of worker processes that you instruct gunicorn to use.

$ gunicorn project.wsgi --workers 2

This command would tell gunicorn to use 2 worker processes, so the app server could process two requests in parallel. On a live site you would likely bump up the number of workers based on the number of CPUs available to you. I believe the gunicorn documentation suggests (2 x num_cores) + 1 workers as a starting point, but check the docs for your workload.
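If it helps, the worker-count rule of thumb from the gunicorn docs, (2 x num_cores) + 1, is easy to sketch (the helper name here is mine, not a gunicorn API):

```python
import os

def suggested_workers(cores=None):
    """Gunicorn's rule-of-thumb worker count: (2 x cores) + 1.

    A starting point for sync workers, not a hard rule - tune it
    against your actual workload.
    """
    cores = cores or os.cpu_count() or 1
    return 2 * cores + 1

print(suggested_workers(4))  # a 4-core machine -> 9 workers
```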

In your test, if you’re using runserver with threads, then the apparent concurrency you see comes from Python’s thread handling. The runserver command has a --nothreading flag (python manage.py runserver --nothreading) that disables this. Running with that flag will probably help prove to you that a Django app under WSGI handles a single request at a time.

Note: In my whole commentary, I’m ignoring asynchronous views (i.e., views defined with async def) because that pattern deliberately opts into using Django in an async mode and that’s not really the scenario that new Django devs are typically in.

@mblayman I was just using runserver - no gunicorn or anything. So my expectation going in was that, without a bit more fiddling, everything would be synchronous, and I was pleasantly surprised to find that different views seemed to be asynchronous (wrt one another). So as you say, it looks like even running like this, there must be some threading enabled?

I actually did try defining my toy views with async just to see what would happen, and it seemed to make no difference whatsoever in this particular situation - two different URL endpoints still seemed to run simultaneously, the same URL endpoint called twice still seemed to run in series.
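As a sanity check that async code can overlap at all, here’s a pure-asyncio sketch (no Django; the names and the scaled-down 0.3 s delay are mine) of two awaited sleeps sharing one event loop:

```python
# Pure-asyncio sketch: two coroutines awaiting a sleep inside one event
# loop overlap, so the total is ~0.3 s, not ~0.6 s.
import asyncio
import time

async def slow_response(delay=0.3):
    await asyncio.sleep(delay)  # non-blocking stand-in for the 3 s wait
    return "done"

async def main():
    start = time.monotonic()
    await asyncio.gather(slow_response(), slow_response())
    return time.monotonic() - start

elapsed = asyncio.run(main())
print(f"two concurrent awaits took {elapsed:.2f}s")
```

My understanding is that an ASGI server provides that shared loop across requests, while runserver’s default WSGI handling does not, which would explain why async def made no visible difference in my test.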

So the really surprising bit to me right now is this difference between calling two different views, and calling the same view twice. I don’t know if this is expected behaviour?

I found some interesting behavior here. (Tests run on an Ubuntu 20.04 image running in WSL 1 on Windows 10. Clients were the local browsers on Windows 10.)

Using Chrome, there appears to be an almost constant 3-4 second delay between the two tabs when accessing the same URL. If I up the sleep to 30 seconds, the second tab does not take 60 seconds. The first tab completes in 30 seconds, the second completes in about 35 seconds.

If I open up the second tab in an Incognito window, both tabs complete in less than 31 seconds.

Using Firefox, there’s effectively no delay between the two tabs. A 30-second sleep completes in both tabs in less than 31 seconds.

This behavior does not change for either browser whether I’m using runserver or runserver_plus.

Therefore, I’ve come to the conclusion that, if you’re using Chrome, this is more likely a Chrome-related issue than a runserver issue.
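To take the browsers out of the picture entirely, you can fire two truly concurrent requests from Python. This self-contained sketch (all names are mine; a stdlib ThreadingHTTPServer with a 1-second sleep stands in for the slow view) shows there is no client-side serialization when the client isn’t a browser:

```python
# A sketch, not the thread's actual code: a stdlib ThreadingHTTPServer
# with a 1-second sleep stands in for the slow Django view, and two
# Python threads stand in for the two browser tabs.
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class SlowHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(1)  # stand-in for time.sleep(3) in the toy view
        body = b"You waited one second"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the output quiet

def fetch(url):
    with urllib.request.urlopen(url) as resp:
        resp.read()

server = ThreadingHTTPServer(("127.0.0.1", 0), SlowHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

start = time.monotonic()
tabs = [threading.Thread(target=fetch, args=(url,)) for _ in range(2)]
for t in tabs:
    t.start()
for t in tabs:
    t.join()
elapsed = time.monotonic() - start
server.shutdown()

# Both requests overlap on the server, so the total is ~1 s, not ~2 s.
print(f"two concurrent requests took {elapsed:.1f}s")
```

Against a runserver instance with threading enabled I’d expect the same overlap; with --nothreading the two requests should serialize to about 2 seconds.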

@KenWhitesell thanks for this thought - I was on Chrome! That’s amazing, did not even occur to me. If I try on Firefox, I don’t see this behaviour any more either.

And thanks for discovering the constant delay - it had never occurred to me that the time delay I chose just so happened to correspond to a fixed delay, I assumed I was seeing synchronous behaviour.

I wonder if this is intentional behaviour from Chrome. For a standard HTTP view, it might be rare that a user would call the same view simultaneously from the same browser instance. But I am planning to use Django Rest Framework, and it seems plausible that an application might make the same API call more than once, so for that case this feels like a Chrome bug that could matter.

I didn’t make it clear in my toy example but I actually initially came across this behaviour while playing with DRF. I have a toy React front-end, and when I run that front-end on Chrome, it encounters this behaviour (pressing an app button twice in succession creates an extra delay). If I run this toy app on Firefox, everything works as expected.

Doing a little bit more digging, this is known Chrome behavior. The cache is shared among tabs, so multiple requests create contention - causing the delay. (No idea how Firefox handles this.)

Anyway, regardless of the source, or how you first encountered it, this was an interesting issue to discover and resolve. Thanks for raising it here!
