Performance Issue with Database Write Operations in Django

I have an API endpoint that handles around 4000 requests per second. The original implementation simply retrieves the last Url object from the database and returns some data, which works efficiently. Here’s the code:

def get(self, request):
    urls = Url.objects.all().last()
    
    if urls:
        data = {"name": urls.name, "url": urls.url}
        return Response(data, status=200)
    return Response({"name": "not found", "url": "not found"}, status=200)

However, when I add a field update to this endpoint, the number of requests the server can handle drops significantly, from 4000 to around 700 per second. Here’s the modified code:

def get(self, request):
    urls = Url.objects.all().last()
    
    if urls:
        urls.remark = "test remark"
        urls.save()
        
        data = {"name": urls.name, "url": urls.url}
        return Response(data, status=200)
    return Response({"name": "not found", "url": "not found"}, status=200)

  • Why does adding a simple field update (urls.remark = "test remark" and urls.save()) cause such a significant drop in performance?
  • What are the best practices to handle high-frequency write operations in Django without severely impacting performance?
  • Are there any specific configurations or optimizations I should consider for the database or Django ORM to improve this?

Why does adding a simple field update (urls.remark = "test remark" and urls.save()) cause such a significant drop in performance?

A good starting assumption when programming web applications is "the bottleneck is in the database". Adding a save() call issues an UPDATE SQL statement to the database on every request. To understand your bottlenecks, you should familiarize yourself with profiling tools. I'd recommend pyinstrument and django-debug-toolbar (or cProfile if you're limited to the stdlib) for local development. In production, you should eventually use some form of tracing, such as OpenTelemetry. Once your application has profiling, you can start to investigate the performance.
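For example, with just the stdlib you can wrap a suspect code path in cProfile and print the hottest calls (the view function here is a stand-in for your real endpoint logic):

```python
import cProfile
import io
import pstats

def view():
    # Stand-in for the real view body: replace with the ORM calls
    # you want to measure
    return sum(range(1000))

profiler = cProfile.Profile()
profiler.enable()
view()
profiler.disable()

# Print the five most expensive calls by cumulative time
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
print(buf.getvalue())
```

In a view that saves a model, you'd expect the database round-trip to dominate that output, which is what the numbers in your benchmark suggest.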

What are the best practices to handle high-frequency write operations in Django without severely impacting performance?

One thing you can do is ensure you're writing efficient SQL queries. For example, your update might be more efficient if you specify the update_fields argument, so save() writes only the changed column instead of every field on the model.

Are there any specific configurations or optimizations I should consider for the database or Django ORM to improve this?

Have you seen the docs on database optimization?


Can you share which database you are using? In my experience, writes are slower than reads in every database, but the drop in performance shouldn't be as drastic as what you are describing.

As @massover mentioned, it is more likely that the bottleneck is the database, so make sure your db is configured correctly (e.g. enable the WAL journal_mode if you are using SQLite).
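For reference, with SQLite you can set and verify that pragma from the stdlib (demo.sqlite3 is a throwaway file here, since WAL mode only applies to on-disk databases):

```python
import os
import sqlite3
import tempfile

# WAL mode doesn't apply to :memory: databases, so use a temp file
path = os.path.join(tempfile.mkdtemp(), "demo.sqlite3")
conn = sqlite3.connect(path)

# Switch the journal mode; the pragma returns the mode now in effect
mode = conn.execute("PRAGMA journal_mode=WAL;").fetchone()[0]
print(mode)  # -> wal
conn.close()
```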

I'm using Postgres.

Here is my config in settings.py

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "testname",
        "USER": "testuser",
        "PASSWORD": "123",
        "HOST": "ipaddress",
        "PORT": "5432",
        "CONN_MAX_AGE": 60,
    }
}

Did I miss anything?