Psycopg performance issue with Django 4.2

I’m upgrading an application from Django 3.2, Python 3.8, psycopg2 to Django 4.2, Python 3.11, psycopg (3.1.17). Server-side bindings are not enabled.
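For reference, with Django 4.2's psycopg 3 backend, server-side binding is controlled via the database `OPTIONS` and defaults to off (client-side binding, like psycopg2). A minimal settings sketch (the database name and credentials are placeholders):

```python
# settings.py (sketch) -- the "server_side_binding" option only applies to
# Django >= 4.2 running on psycopg 3; it defaults to False.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",    # placeholder database name
        "USER": "myuser",  # placeholder credentials
        "OPTIONS": {
            # Explicitly keep client-side binding, matching psycopg2 behaviour.
            "server_side_binding": False,
        },
    },
}
```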

I have checked with the debug toolbar: the view makes around 200 small SQL queries, and the execution time went from ~300 ms to ~500 ms (it’s hard to measure precisely, but psycopg2 is noticeably faster).
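Back-of-the-envelope, that gap works out to roughly 1 ms of extra overhead per query:

```python
# Rough per-query overhead implied by the numbers above.
queries = 200
before_ms = 300
after_ms = 500

extra_per_query_ms = (after_ms - before_ms) / queries
print(extra_per_query_ms)  # 1.0 ms of added overhead per query
```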

Unfortunately the project is pretty big (and private), so I created a dummy project, and in my limited testing it also shows a difference (although closer to 20% than 50%).

Basically I ran `ab -n 100 http://localhost:54321/` a few times, then installed psycopg2, uninstalled psycopg, and ran Apache Bench a few times again. These are the results:

# Test 1
## with psycopg
Requests per second:    3.98 [#/sec] (mean)
Time per request:       251.289 [ms] (mean)

Requests per second:    3.77 [#/sec] (mean)
Time per request:       265.328 [ms] (mean)

Requests per second:    4.46 [#/sec] (mean)
Time per request:       223.977 [ms] (mean)

## with psycopg again
Requests per second:    4.47 [#/sec] (mean)
Time per request:       223.566 [ms] (mean)

Requests per second:    4.52 [#/sec] (mean)
Time per request:       221.223 [ms] (mean)

Requests per second:    4.32 [#/sec] (mean)
Time per request:       231.693 [ms] (mean)


## with psycopg2
Requests per second:    5.27 [#/sec] (mean)
Time per request:       189.893 [ms] (mean)

Requests per second:    4.83 [#/sec] (mean)
Time per request:       207.107 [ms] (mean)

Requests per second:    5.34 [#/sec] (mean)
Time per request:       187.244 [ms] (mean)

## with psycopg2 again

Requests per second:    5.01 [#/sec] (mean)
Time per request:       199.482 [ms] (mean)

Requests per second:    5.01 [#/sec] (mean)
Time per request:       199.477 [ms] (mean)

Requests per second:    5.01 [#/sec] (mean)
Time per request:       199.668 [ms] (mean)


# Test 2
## psycopg
Requests per second:    2.01 [#/sec] (mean)
Time per request:       496.954 [ms] (mean)

Requests per second:    2.02 [#/sec] (mean)
Time per request:       495.660 [ms] (mean)

## psycopg2
Requests per second:    2.42 [#/sec] (mean)
Time per request:       413.052 [ms] (mean)

Requests per second:    2.35 [#/sec] (mean)
Time per request:       426.163 [ms] (mean)

The tests were run with basically these commands, with around 200–500 books in the database.

ab -n 100 http://localhost:54321/
docker-compose exec django bash
pip install psycopg2-binary
pip uninstall psycopg-binary psycopg
pip freeze
touch psycopg_performance/settings.py

The test project is available at GitHub - kviktor/psycopg_performance (I know a select_related would reduce the number of queries to 1, but I suspect the large number of queries is what exposes the issue).

Is this expected, or is something perhaps misconfigured?


Not really an answer, but I decided to stay with psycopg2 and will cross that bridge when Django requires psycopg.

One minor improvement was using psycopg[binary] instead of plain psycopg (the latter is a pure-Python implementation), but it was still slower than psycopg2.

Hi, have you figured out a solution for this? We have a similar problem, and it seems related to Django 4.2 rather than to psycopg.

We’re having a similar problem; it seems more related to the Django 4 vs 3 ORM (or how Django uses psycopg) than to psycopg2 vs 3.

The psycopg>=3 support was added in close collaboration with the author of psycopg, so it would be surprising if the slowdown were due to how it’s integrated in the framework. Performance was also an area that was tested against the full test suite at the time.

I’ll note that the benchmark project provided by @kviktor has server-side bindings enabled, which are known to cause all sorts of issues.

As for regressions in query construction, we did address an aggregation regression a few weeks back, but that should not have had an impact here.

I personally haven’t been able to reproduce significant slowdowns comparing psycopg2-binary and psycopg[binary]>=3 with the test project once server-side bindings are turned off.

If someone could provide simplified benchmarks (e.g. a management command or test suite) and include reports such as flame graphs, it would be very useful for investigating this issue further.
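For anyone putting such a benchmark together, the driver comparison doesn’t need Django’s test client at all; a minimal timing harness like this sketch already gives comparable mean/stdev numbers (the `fn` you pass in would be the view’s queryset logic, and the `Book` model in the usage comment is assumed from the test project):

```python
import statistics
import time

def bench(fn, iterations=10):
    """Call fn() `iterations` times and return (mean_ms, stdev_ms)."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.mean(samples), statistics.stdev(samples)

# Usage sketch: wrap the hot path, e.g.
#   mean, stdev = bench(lambda: list(Book.objects.all()[:500]))
# then run it once per driver installation and compare the numbers.
```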
