DB optimisation

Hello,

I have an application that uses Django REST Framework to expose API endpoints, and it is receiving roughly 1,000 requests per second (I'm not sure of the exact number).

The Django app and the MySQL DB were both on shared web hosting, but since the beginning of this week the server has been experiencing high CPU usage, and after troubleshooting I found that the Python processes were causing the issue. So I moved the app to a VPS with 2 vCPUs, 4 GB of RAM, and an SSD.

After moving all the data and doing the needed configuration, the app is running, but another high CPU issue has appeared, this time caused by the mysqld service, which is averaging around 150%.

When checking the mytop output, the QPS varies between 100 and 200, while during lower-traffic periods it is around 65 (and I'm not sure whether it should be lower than that, given that the traffic has dropped significantly).

All requests ask for the same data and check whether there is an update. I was thinking about switching to a webhook-based approach, but the data on my server updates every 30 seconds at most, so I thought it wouldn't help much.

Is there anything I can do to optimize the database querysets, or do I need to increase the resources (CPU)?

You don't mention the size of the database(s) involved, but I'd start by looking at memory utilization. If possible, you'd really like to have enough memory that the entire database plus its indexes can be cached in the database's internal buffers. From there it becomes a matter of evaluating your database schema and indexes within the context of the application.

Superficially, if you're handling that many requests on a single instance, you may need to go a lot higher than 2 vCPUs and 4 GB. But any specific remedial steps should come out of a targeted investigation of the performance profile of the system as a whole.

Keep in mind that there's always a bottleneck: one or more elements of the stack will limit overall throughput. As you change the configuration in one area, you may find you haven't removed the bottleneck, you've just moved it somewhere else. There's always a bit of a balancing act involved.


If you’re getting 1000 requests per second (I understand it’s a guess) and the data changes every 30s, I’d consider caching the response, then when the write occurs, bust the cache. Easier said than done, but eliminating ~30,000 requests from generating database hits in a 30s window might be worth it without having to change how clients use the API.
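For a rough idea of what that could look like, here is a minimal sketch using Django's low-level cache API with DRF. The `Reading` model, the serializer, the cache key, and the signal-based invalidation are all assumptions for illustration; where the write actually happens in your app determines where you bust the cache.

```python
# views.py -- hypothetical names; adapt the model/serializer to your app
from django.core.cache import cache
from rest_framework.response import Response
from rest_framework.views import APIView

from .models import Reading                  # hypothetical model holding the shared data
from .serializers import ReadingSerializer   # hypothetical serializer

LATEST_KEY = "latest-readings"  # a single key works because every client asks for the same data


class LatestReadingsView(APIView):
    def get(self, request):
        data = cache.get(LATEST_KEY)
        if data is None:
            # Cache miss: hit the database once, then keep the serialized result around.
            queryset = Reading.objects.order_by("-updated_at")[:100]
            data = ReadingSerializer(queryset, many=True).data
            cache.set(LATEST_KEY, data, timeout=None)  # kept until explicitly busted
        return Response(data)
```

```python
# signals.py -- bust the cache whenever the underlying data is written
# (import this module from AppConfig.ready() so the receivers get connected)
from django.core.cache import cache
from django.db.models.signals import post_delete, post_save
from django.dispatch import receiver

from .models import Reading


@receiver([post_save, post_delete], sender=Reading)
def bust_latest_readings(sender, **kwargs):
    cache.delete("latest-readings")
```

With this shape, only the first request after each write touches MySQL; everything else is served from the cache backend.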


Thank you for your response; this sounds promising.

Any idea how to do this? I mean caching the response and busting it. Is it like the ordinary cache?

Thanks.

Yes, it would be like the ordinary cache. DRF is built on top of Django, so you can use a lot of the same functionality (see the DRF caching docs).

How to implement it depends on your requirements. For example, how soon after the data is updated do you need the cached responses to be updated? If it's immediately, you may have to implement the caching mechanism yourself using the low-level cache API. If it can be stale by a few seconds, you could use an expiration of 5 or 10 seconds. That would guarantee that, in the worst case, users see stale responses for up to 5 or 10 seconds respectively.
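If a few seconds of staleness is acceptable, the per-view cache shown in the DRF caching docs is probably the least code. A minimal sketch, assuming a class-based DRF view; the 10-second timeout and the `Reading` query are just illustrative:

```python
from django.utils.decorators import method_decorator
from django.views.decorators.cache import cache_page
from rest_framework.response import Response
from rest_framework.views import APIView

from .models import Reading  # hypothetical model


class LatestReadingsView(APIView):
    # Cache the rendered response for 10 seconds. At ~1000 requests/second,
    # that collapses roughly 10,000 requests per window into one database hit,
    # at the cost of responses being up to 10 seconds stale.
    @method_decorator(cache_page(10))
    def get(self, request):
        payload = list(Reading.objects.values("id", "value", "updated_at"))
        return Response(payload)
```

The cached entry expires on its own, so there's nothing to bust; the trade-off is that updates only become visible once the timeout elapses.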

Thank you very much, will test it.