Strange behaviour for Django > 5.0 (long loading times and high Postgres CPU load, only admin)

Hi everyone,

I’m currently experiencing some strange behavior that I can’t quite wrap my head around, so I thought I’d ask if anyone here has seen something similar.

What happened:
I recently upgraded one of our larger projects from Django 4.2 (Python 3.11) to Django 5.2 (Python 3.13). The upgrade itself went smoothly with no obvious issues. However, I quickly noticed that our admin pages have become painfully slow. We’re seeing a jump from millisecond-level response times to several seconds.

For example, the default /admin page used to load in around 200–300ms before the upgrade, but now it’s taking 3–4 seconds.

I initially didn’t notice this during development (more on that in a moment), but a colleague brought it to my attention shortly after the deployment to production. Unfortunately, I didn’t have time to investigate right away, but I finally got around to digging into it yesterday.

What I found:
Our PostgreSQL 14 database server spikes to 100% CPU usage when accessing the admin pages. Interestingly, our regular Django frontend and DRF API endpoints seem unaffected — or at least not to the same extent.

I also upgraded psycopg as part of the process, but I haven’t found anything suspicious there yet.

Why I missed it locally:
On my local development environment, we’re running the app using the Daphne ASGI server.
In production, we route traffic differently: WebSockets go through Daphne, while regular HTTP traffic is handled by Gunicorn in classic WSGI mode.

Out of curiosity, I temporarily switched the production setup to serve HTTP traffic via Daphne/ASGI instead of Gunicorn/WSGI — and, like magic, everything went back to normal: no more lag, no more CPU spikes.

So… what the heck is going on here?
What could possibly cause this kind of behavior? Has anyone experienced something similar or have any ideas on where I should look next? Ideally, I’d like to get back to our Gunicorn/WSGI setup, but not while it’s behaving like this.

Thanks in advance for any hints or suggestions!

I found out why the performance tanked so much.

It is caused by the sentry-sdk. I don't know why the SDK has such a big impact on Django above version 5, but I will open an issue with the Sentry team to look further into this.

Thanks for the update, that’s helpful to know. We’ve been planning a similar upgrade and would’ve missed the Sentry SDK angle entirely. Curious to see what the Sentry team finds, definitely keeping an eye on this.

Hello there.
Can you share which sentry-sdk version is the problem related to, or link the github issue here?
I have a pending dependabot PR for sentry-sdk and remembered this post from a while ago.
Thanks in advance.

Hi,

the problem is not related to a specific sentry-sdk version; it's the Django version (> 5.0).
We have identified that the problem lies in the evaluation of the django_admin_log queryset, which is passed with every admin call and always gets evaluated by Sentry because of a repr() call.
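For context on why a repr() call is expensive here: Django querysets are lazy, and QuerySet.__repr__ forces evaluation by fetching REPR_OUTPUT_SIZE + 1 = 21 rows (see django/db/models/query.py). A minimal sketch of the mechanism, with LazySeq and fake_fetch made up purely for illustration:

```python
# Simplified sketch of Django's QuerySet.__repr__ behaviour:
# repr() on a lazy queryset fetches REPR_OUTPUT_SIZE + 1 rows,
# which is where the "LIMIT 21" queries come from.
REPR_OUTPUT_SIZE = 20  # same constant Django uses

class LazySeq:
    """Stand-in for a lazy queryset: nothing hits the 'database' until evaluated."""

    def __init__(self, fetch):
        self._fetch = fetch  # callable that actually runs the query

    def __repr__(self):
        # Fetch one row more than REPR_OUTPUT_SIZE to know whether to truncate.
        data = self._fetch(REPR_OUTPUT_SIZE + 1)
        if len(data) > REPR_OUTPUT_SIZE:
            data = data[:REPR_OUTPUT_SIZE] + ["...(remaining elements truncated)..."]
        return f"<LazySeq {data!r}>"

queries = []

def fake_fetch(limit):
    # Record the query instead of hitting a real database.
    queries.append(f"SELECT ... LIMIT {limit}")
    return list(range(limit))

qs = LazySeq(fake_fetch)   # no query yet -- still lazy
repr(qs)                   # triggers the fetch
print(queries)             # ['SELECT ... LIMIT 21']
```

So any instrumentation that calls repr() on a queryset (for breadcrumbs, local variables, etc.) silently turns a lazy object into a real query against the table.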

Currently there is no fix, because the person at Sentry working on it is on summer vacation, but hopefully he will be back in a few days :slight_smile:

I think not many Django users will encounter this problem, because it scales with the number of LogEntries in the django_admin_log table. At roughly 10 million rows it starts to become really noticeable…

This is the sentry-sdk issue they are working on:


I had the same issue after upgrading from Django 4.2 to 5.2, with a large django_admin_log table of 35+ million rows.

What I found was that the requests were hanging on the same django_admin_log query with ORDER BY "django_admin_log"."action_time" DESC LIMIT 21

Looking at the django_admin_log table, action_time should have an index for that query, yet the model definition doesn't set db_index on it even though it's the default ordering field.

class LogEntry(models.Model):
    action_time = models.DateTimeField(
        _("action time"),
        default=timezone.now,
        editable=False,
    )

...

    class Meta:
...
        ordering = ["-action_time"]

Adding an index to action_time resolved the problem.

CREATE INDEX CONCURRENTLY django_admin_log_action_time_idx ON django_admin_log (action_time DESC);
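If you'd rather manage that index through Django instead of raw psql, a migration along these lines should work. This is a sketch: the app it lives in and the dependency on admin's latest migration are assumptions you'd adjust for your project, and atomic = False is required because CREATE INDEX CONCURRENTLY cannot run inside a transaction block.

```python
from django.db import migrations


class Migration(migrations.Migration):
    # CREATE INDEX CONCURRENTLY cannot run inside a transaction,
    # so this migration must be non-atomic.
    atomic = False

    dependencies = [
        # Depend on the last built-in admin migration (adjust as needed).
        ("admin", "0003_logentry_add_action_flag_choices"),
    ]

    operations = [
        migrations.RunSQL(
            sql=(
                "CREATE INDEX CONCURRENTLY IF NOT EXISTS "
                "django_admin_log_action_time_idx "
                "ON django_admin_log (action_time DESC);"
            ),
            reverse_sql=(
                "DROP INDEX CONCURRENTLY IF EXISTS "
                "django_admin_log_action_time_idx;"
            ),
        ),
    ]
```

The upside over a manual psql statement is that the index is tracked alongside the rest of your schema and is reversible via the migration machinery.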

Is this the same problem as I talk about here: Safe repr for error pages ?

Seems at least similar…

This is a good catch; I will experiment with this just for the performance gain. The actual problem with Sentry is now fixed upstream in the sentry-sdk by the folks at Sentry.

Just as an update for everyone: the folks from Sentry found the problematic part and fixed it upstream; it should ship in the next release.