Django ORM throughput inversely proportional to number of threads

I’m having a curious issue on Django 5.1.5, where the total throughput of my application is inversely proportional to the number of threads and the total number of database connections.

I am using PostgreSQL with psycopg 3, but the issue seems unrelated to the database itself, since I am unable to reproduce it with plain psycopg.
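
For reference, the plain-psycopg comparison looked roughly like this (the DSN and the `competences_competence` table name are placeholders for my setup):

 from concurrent.futures import ThreadPoolExecutor
 import time
 
 import psycopg
 
 DSN = "dbname=emp_mng"  # placeholder connection string
 
 def task():
     # One connection per task; timing covers only the query iteration.
     with psycopg.connect(DSN) as conn:
         t = time.time()
         for _ in conn.execute("SELECT * FROM competences_competence"):
             pass
         print("Elapsed:", time.time() - t)
 
 workers = 10
 with ThreadPoolExecutor(max_workers=workers) as pool:
     for _ in range(workers * 10):
         pool.submit(task)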

The Django reproduction is below. I just used a model I had lying around, but any will do.

 from concurrent.futures import ThreadPoolExecutor
 import os
 import time
 
 import django
 
 os.environ.setdefault("DJANGO_SETTINGS_MODULE", "emp_mng.settings")
 django.setup(set_prefix=False)
 
 from competences.models import Competence
 
 def task():
     # Each thread gets its own thread-local database connection,
     # opened lazily on the first query.
     t = time.time()
     for _ in Competence.objects.all():
         pass
     print("Elapsed:", time.time() - t)
 
 workers = 10
 with ThreadPoolExecutor(max_workers=workers) as pool:
     for _ in range(workers * 10):
         pool.submit(task)

With one worker:

 Elapsed: 0.03694295883178711
 Elapsed: 0.006497859954833984
 Elapsed: 0.004667043685913086
 Elapsed: 0.0046808719635009766
 Elapsed: 0.004235029220581055
 Elapsed: 0.004861354827880859
 Elapsed: 0.0044422149658203125
 Elapsed: 0.004687070846557617
 Elapsed: 0.004805088043212891
 Elapsed: 0.004675149917602539

With ten workers (last ten lines of output):

 Elapsed: 0.19048285484313965
 Elapsed: 0.17579221725463867
 Elapsed: 0.16677403450012207
 Elapsed: 0.1924729347229004
 Elapsed: 0.17237305641174316
 Elapsed: 0.147735595703125
 Elapsed: 0.15682673454284668
 Elapsed: 0.1484389305114746
 Elapsed: 0.1567528247833252
 Elapsed: 0.1298081874847412

Running with ten workers makes each query take at least 5x longer, or a lot more, since it somehow also seems to wipe out the Postgres cache. Any idea what could be happening?

Perhaps you’re seeing this issue?

You are absolutely correct! I’m not sure why I failed to replicate this with plain psycopg earlier, but it does seem that introducing even a tiny amount of network delay makes the issue vanish.

Thank you!
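
In case anyone else wants to check the same thing: one way to inject a small delay between the app and a local Postgres, without touching the OS network stack, is to run a tiny delaying TCP proxy and point the app's database port at it. A rough sketch (the ports and delay value are illustrative, not exactly what I used):

 import socket
 import threading
 import time
 
 LISTEN = ("127.0.0.1", 6543)    # point the application here
 UPSTREAM = ("127.0.0.1", 5432)  # the real Postgres server
 DELAY = 0.001                   # 1 ms added to each chunk, each direction
 
 def pipe(src, dst):
     # Copy bytes between the two sockets, sleeping to emulate latency.
     try:
         while data := src.recv(65536):
             time.sleep(DELAY)
             dst.sendall(data)
     except OSError:
         pass
     finally:
         dst.close()
 
 server = socket.create_server(LISTEN)
 while True:
     client, _ = server.accept()
     upstream = socket.create_connection(UPSTREAM)
     threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
     threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()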