Not really, I’m running a Django server with malloc_trim(0) as I described in the StackOverflow topic. Since I keep only filename references to images in the DB, all data coming in and out is JSON, and all image processing is moved to AWS Lambda. This way memory usage climbs a little but settles at 500-700 MB. I push some kind of update at least once a week, so the server restarts fresh with lower memory consumption. I don’t know how things would behave with 10 or 100 times the traffic, though. For now I can live with it, and the async config works nicely for me.
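In case it helps anyone following along: calling malloc_trim(0) from Python can be done through ctypes. This is a minimal sketch, with the assumption that you’re on Linux with glibc (the `libc.so.6` name and `malloc_trim` symbol are glibc-specific):

```python
import ctypes


def trim_memory() -> int:
    """Ask glibc to return free heap pages to the OS (Linux/glibc only).

    malloc_trim(0) releases unused memory at the top of the heap (and,
    in modern glibc, free space inside the heap too). It returns 1 if
    any memory was released, 0 otherwise.
    """
    libc = ctypes.CDLL("libc.so.6")
    return libc.malloc_trim(0)
```

You could call something like this periodically (e.g. from a background task) after large request bursts, since CPython often doesn’t hand freed memory back to the OS on its own.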
I’m also experiencing memory leaks, though in my case it’s because I’m still learning and some of them are my own fault, so I can’t tell yet whether any of the leaking comes from async. After I fix my own memory leaks, I’ll let you know if I still have problems with async itself. In the meantime, here is the configuration I use to run my server:
I tried, I think, every config combination for the async project, but none of it changed anything about the problem.
I currently use Django 5.0, but I also tried 4.2 with the same results.
I was using memory profiling too, but unfortunately this problem goes deeper than that. It’s still worth checking locally whether you have other memory leaks of your own.
I invoked this function in suspicious places in the code to check memory usage:
def print_memory_allocation(place):
    # assumes tracemalloc.start() was called earlier (e.g. at process
    # startup); otherwise get_traced_memory() just reports zeros
    import tracemalloc
    current, peak = tracemalloc.get_traced_memory()
    current, peak = current / 10 ** 6, peak / 10 ** 6  # bytes -> MB
    print('Current and peak memory usage - {}: {} {}'.format(place, current, peak))
    snapshot = tracemalloc.take_snapshot()
    top_stats = snapshot.statistics('lineno')
    print("[ Top 5 ]")
    for stat in top_stats[:5]:
        print(stat)
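For anyone who wants to try the same approach: tracing has to be switched on before the allocations you care about happen. A minimal end-to-end sketch (the `bytearray` allocation is just a stand-in for real application work):

```python
import tracemalloc

# start tracing early, e.g. in wsgi.py / asgi.py or manage.py
tracemalloc.start()

# ... application code runs; allocate something as a stand-in ...
data = [bytearray(1024) for _ in range(1000)]

current, peak = tracemalloc.get_traced_memory()
print(f"current={current / 1e6:.2f} MB, peak={peak / 1e6:.2f} MB")

# top allocation sites, grouped by file and line number
top_stats = tracemalloc.take_snapshot().statistics("lineno")
for stat in top_stats[:5]:
    print(stat)
```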
I really don’t know what could be causing this. I assume you’re already using an orchestrator like ECS or K8s if you’re working in AWS; if not, I would recommend it. What volumes of data do the functions operate on? And are you on the latest patch release of your Django version?
I don’t have any insight into how your code looks, so I can’t help much. I haven’t used tracemalloc, only memory_profiler, and I don’t know how you’re using it; it seems you’re only using it to get peaks. You could use memory_profiler’s @profile decorator to get a more detailed line-by-line breakdown of memory growth, to make sure this isn’t a problem with the Python garbage collector.