DRF: View response 300-600ms, time to optimize?

Hello everyone,

Just looked at Sentry’s profiling of my API and noticed that a frequently called endpoint that does 4 queries has a total duration anywhere from 300ms to 600ms, and when I view the recent ones most are above 400ms. Below is a chart from Sentry:

Seems a bit high to me, but I don’t have a good grasp of whether this is okay or not. I wouldn’t want to do premature optimization.

Looking at the spans, it seems like the majority of the time is split roughly equally between the database and the view.render call.

Should I optimize this? If so, is caching a good start? I would need to vary it on the query string and also on headers (because I use a client-version header).

Thanks!

What’s your target response range? If it’s simply to be “as fast as possible”, you can go way down a very deep rabbit hole in that direction.

Have you determined what this difference makes in terms of end-user response? In other words, if the difference between a 300 ms and 600 ms response is the user-visible difference between 1 second and 1.3 seconds, then the majority of the time is actually being spent in the browser, and you may want to look at other opportunities as well.

Are you using any form of “external-to-Django” connection pooling? (e.g. pgpool-II / pgbouncer)

Have you checked the performance of the queries themselves?

Are you rendering HTML? If so, have you looked at the template caches? (see Performance and optimization | Django documentation | Django)
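For reference, the cached template loader is a couple of lines in settings - this is standard Django configuration, and only applies if the endpoint actually renders templates:

```python
# settings.py - wrap the normal loaders in the cached loader so templates
# are compiled once per process instead of on every request.
TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "DIRS": [],
        "OPTIONS": {
            "loaders": [
                (
                    "django.template.loaders.cached.Loader",
                    [
                        "django.template.loaders.filesystem.Loader",
                        "django.template.loaders.app_directories.Loader",
                    ],
                ),
            ],
        },
    },
]
```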

It may also be worth looking at what you’re passing to the template rendering engine in the context, and if this is causing work to be performed by render, when it may be more effective to do that work in the view.

Are you sending pages (or page-fragments) out using any form of excess-space removal? gzip?

For the most part, those have been the “low-hanging fruit” for us - even more so than caching specific query results. But then, we’re happy with any “time to response” less than a second.

1 Like

Thanks! I don’t really have a target response range in mind.

I checked one other endpoint that is simpler and its responses are around 120ms.

This API I am talking about is used in iOS and Android mobile apps - I should have specified this in the first post. So the response is sent as JSON and parsed in the app.

Since the server is in Germany and I have a lot of users in the US and Asia, I think improving the response time could be more noticeable to those users.

I am not using any pooling or similar. I read a bit yesterday (Improve Serialization Performance in Django Rest Framework | Haki Benita) and so far have tried just the read_only_fields tip on some serializers - but I don’t have data yet to know how much this helped.
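For what it’s worth, the change amounts to roughly this - a minimal sketch, where the model and field names are placeholders rather than my real ones:

```python
# serializers.py - marking everything read-only, per the article's tip;
# the endpoint only ever reads, so DRF can skip building write/validation
# machinery it never needs.
from rest_framework import serializers

from .models import Game  # placeholder model


class GameSerializer(serializers.ModelSerializer):
    class Meta:
        model = Game
        fields = ["id", "name", "release_date"]
        read_only_fields = fields  # every field is read-only
```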

As for the queries - the ones for this endpoint are based on my previous post (How to order_by ForeignKey that matches query?), so there are a lot of subqueries going on.
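Simplified, the query shape is something like this - model and field names are invented here, and the real version stacks several of these subquery annotations:

```python
# Annotate each Game with the date of its earliest matching release,
# then order by that annotation.
from django.db.models import OuterRef, Subquery

from .models import Game, ReleaseDate  # placeholder models

first_release = ReleaseDate.objects.filter(
    game=OuterRef("pk"),
).order_by("date")

games = Game.objects.annotate(
    next_release=Subquery(first_release.values("date")[:1]),
).order_by("next_release")
```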

Another possibility is to create a denormalized “response table”, where you’re effectively “pre-caching” your query results into a different table that is closer to what you want to return from the query.

The value of doing something like this depends greatly upon how frequently the data is updated and how much data is received for these updates, along with the quantity and frequency of these queries being executed, but there are some cases where it can be an extremely useful technique.
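As a rough sketch of the idea - every name here is invented, and the columns would mirror whatever your endpoint returns:

```python
# models.py - a flattened, query-shaped copy of the data; the import
# job rewrites it, and the API reads it with a trivial query.
from django.db import models


class ApiGameRow(models.Model):
    game_id = models.IntegerField(unique=True)
    name = models.CharField(max_length=255)
    # Values the live view currently computes per request via subqueries:
    next_release = models.DateField(null=True)
    refreshed_at = models.DateTimeField(auto_now=True)
```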

The database connection pooling is a completely separate topic. How much it helps will depend on how frequently your connections are recycled. You might want to just try it out and see what effect it has on response time.
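One related, cheap experiment on the Django side is persistent connections - not a pooler like pgbouncer, but it removes per-request connection setup:

```python
# settings.py - keep database connections open between requests
# (CONN_MAX_AGE is standard Django; credentials are placeholders).
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",
        "USER": "myuser",
        "PASSWORD": "...",
        "HOST": "127.0.0.1",
        "CONN_MAX_AGE": 600,  # seconds; 0 would mean close after each request
    }
}
```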

You might want to identify what the latency is in those cases as well. You may also need to factor in the bandwidth limitations of their mobile data plan. (Sending condensed or compressed data could really help there.)
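On the compression point, Django ships a middleware for it; a minimal sketch:

```python
# settings.py - gzip responses before they go out; keep it near the
# top of the stack so it wraps the response last.
MIDDLEWARE = [
    "django.middleware.gzip.GZipMiddleware",
    "django.middleware.security.SecurityMiddleware",
    # ... the rest of your middleware ...
]
```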

There’s not likely going to be one solution here. I think you’re probably going to end up saving a few milliseconds here and there, to where the total savings come out to be significant. There are a lot of pieces in this puzzle.

1 Like

This is the first time I’m hearing about a “response table” - it sounds like an interesting approach, but I’m worried that with my limited experience I could easily screw something up. The data changes relatively frequently, so at maximum I would prefer users to get data no older than 30 minutes.

Since I have a basic Django cache already set up, I am thinking I would try integrating it into the get method of this slower view, something like the sketch below. Is there anything I should look out for, other than correctly constructing the cache key based on the request parameters?
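Roughly what I have in mind - the header name and the build_payload helper are placeholders:

```python
# views.py - cache the rendered payload, keyed on path, query string,
# and the client-version header.
import hashlib

from django.core.cache import cache
from rest_framework.response import Response
from rest_framework.views import APIView

from .api import build_payload  # placeholder for the expensive query work


class GameListView(APIView):
    cache_timeout = 30 * 60  # 30 minutes, my staleness budget

    def make_cache_key(self, request):
        raw = "|".join([
            request.path,
            request.META.get("QUERY_STRING", ""),
            request.headers.get("X-Client-Version", ""),  # placeholder header
        ])
        return "gamelist:" + hashlib.md5(raw.encode()).hexdigest()

    def get(self, request):
        key = self.make_cache_key(request)
        data = cache.get(key)
        if data is None:
            data = build_payload(request)
            cache.set(key, data, self.cache_timeout)
        return Response(data)
```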

I think I should also set up response monitoring on the client side so I know how much time it really takes.

That’s always a good idea. It really does help to have an accurate understanding of where the time is being spent.

Depending upon how frequently the data is updated, you might want to add something to invalidate the cache when new data comes in, as opposed to waiting for some fixed period of time. (It’s hard to judge things like this without knowing the specifics of the data flows.)
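One low-risk way to do that is a version stamp mixed into every key: the import bumps one counter, and all old entries simply stop being read. A sketch:

```python
# cache_keys.py - "invalidate" by bumping a version number instead of
# deleting individual keys.
from django.core.cache import cache


def current_version():
    return cache.get_or_set("gamelist:version", 1, timeout=None)


def make_key(suffix):
    return f"gamelist:v{current_version()}:{suffix}"


def invalidate():  # call this at the end of the import
    try:
        cache.incr("gamelist:version")
    except ValueError:  # version key was never set
        cache.set("gamelist:version", 2, timeout=None)
```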

If the individual queries are taking well less than 300 ms, you could probably update whatever resource you’re using every minute or less.

Are you using Redis, Memcached, or the database? That may also affect the decision-making process.

Hello,

We find GitHub - jazzband/django-silk: Silky smooth profiling for Django really useful for tracking down performance issues with specific queries in Django REST Framework - we have it set up in our dev environment, and it helps spot when one request is generating lots of database queries or lots of Python calculation time.

-Graham

1 Like

Currently memcached - I am trying to keep everything simple.

As for the “data flows”: I have a cron job that runs a management command, which does a new import from an external API every few hours plus some data cleaning.

I have implemented a basic cache for the response, and currently most users get it super fast - however, those who have to “prewarm” the cache, so to speak, still wait a bit longer.

I have been thinking about whether it would make sense to “denormalize” the release dates, which I need to annotate via subqueries, but the data model is already complex and I might need the data separately for other queries (the release dates also carry some metadata, like the region and whether the date is approximate or precise).

So perhaps it would make sense to move to Redis and have another cron job that prewarms the cache, roughly as sketched below? For this endpoint I realistically need 8 sets of data, which could possibly go up 3x, but no more.
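Something like this is what I picture wiring to cron - the helper names are hypothetical, and the key function would have to match the one the view uses:

```python
# management/commands/prewarm_cache.py - recompute the ~8 payloads and
# write them to the cache so no real user ever hits a cold cache.
from django.core.cache import cache
from django.core.management.base import BaseCommand

from myapp.api import PARAM_SETS, build_payload, make_cache_key  # hypothetical


class Command(BaseCommand):
    help = "Prewarm the cached responses for the slow endpoint."

    def handle(self, *args, **options):
        for params in PARAM_SETS:  # the 8 (query string, client version) combos
            cache.set(
                make_cache_key(params),  # must match the view's key scheme
                build_payload(params),   # the expensive subquery work
                timeout=None,            # the cron owns freshness, not a TTL
            )
```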

Thanks! I wasn’t aware of this - I have been using the Django Debug Toolbar just to see the time and the number of queries.

Seems to me like that would be the ideal location in which to prewarm the cache. That might also be a location in which you create your denormalized version for query optimization. However, I’m not sure I see why you would want to make this a separate cron job. (You might, I’m just not aware of all the implications of such a decision.)

1 Like

Yes, but I also occasionally change the featured games that are part of the response, and in general I think it is a bit easier to manage separate commands for different things.

Will this work with memcached? The management commands seem to run separately from the web app.

I agree, they’re logically independent functions, but they are also temporally linked. You can run both commands within the same cron job, the second to start upon the successful completion of the first.
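If shell-level chaining (import_cmd && prewarm_cmd in the crontab) feels awkward, a thin wrapper command does the same thing in Python - the inner command names here are placeholders:

```python
# management/commands/import_and_prewarm.py - one cron entry; the
# second step only runs if the first finishes without raising.
from django.core.management import call_command
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Run the external import, then prewarm the response cache."

    def handle(self, *args, **options):
        call_command("import_external_data")  # placeholder command name
        call_command("prewarm_cache")         # runs only after a clean import
```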

They’re using the same settings.py file, right? That means they’re all initializing the Django internal environment the same way. They would all be configured to communicate with the same memcached daemon, and they all should be using the same functions to generate the keys. I’ve never tried it with memcached, but I know it works with redis.

TIL: I mixed up LocMemCache with Memcached :man_facepalming:

So since I have to do additional setup I think I will go with Redis.
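For anyone finding this later, the switch itself is just the cache backend in settings - Django 4.0+ ships a Redis backend (it needs redis-py installed), and the URL here is a placeholder:

```python
# settings.py - point the default cache at Redis.
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
    }
}
```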

Implemented the pre-warming via cron after switching to Redis, and the result is super fast responses for all users.

My only issue now is which reply should be marked as “Solution”? :grimacing:

1 Like