Server timeout when creating a CSV file from a Django model

Hello Guys,

I am building an API endpoint that returns a CSV file of the Django model objects stored in the database.

The problem I am facing is that when I request a large time period (i.e. to_datetime - from_datetime), the API usually times out. It works fine for smaller time periods.

What should I do so that I can get a CSV file for large result sets (50k+ Django model objects)?

Please help me optimize the code, or point out the mistakes I am making, so that I can export CSV files for large data sizes.

items = EarthPodData.objects.filter(
    datetime__gte=from_datetime, datetime__lte=to_datetime
)

response = HttpResponse(content_type='text/csv')
response['Content-Disposition'] = 'attachment; filename="export.csv"'
writer = csv.writer(response, delimiter=",")

# Header row
writer.writerow(['earth_pod_id', 'datetime', 'datetime_pod', 'atmos_temperature',
                 'atmos_relative_humidity', 'atmos_pressure', 'soil_temperature',
                 'soil_relative_humidity', 'soil_moisture_2cm', 'soil_moisture_5cm',
                 'battery_voltage', 'light_analog'])

# One row per object in the selected time window
for o in items:
    writer.writerow([o.earth_pod.pod_id, o.datetime, o.datetime_pod, o.atmos_temperature,
                     o.atmos_relative_humidity, o.atmos_pressure, o.soil_temperature,
                     o.soil_relative_humidity, o.soil_moisture_2cm, o.soil_moisture_5cm,
                     o.battery_voltage, o.light_analog])
return response

If there are no limits to be applied to these requests, I would suggest running these extracts as a background task via something like Celery.
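A minimal sketch of what such a task might look like (assuming you already have a Celery app configured, and assuming MEDIA_ROOT is an acceptable place to drop the file; the task name, module, and filename argument are illustrative, not from your code):

# tasks.py -- illustrative sketch, not a drop-in implementation
import csv
import os
from datetime import datetime

from celery import shared_task
from django.conf import settings

from .models import EarthPodData


@shared_task
def export_earth_pod_csv(from_iso, to_iso, filename):
    # Datetimes arrive as ISO strings because Celery's default JSON
    # serializer cannot pass datetime objects directly.
    from_datetime = datetime.fromisoformat(from_iso)
    to_datetime = datetime.fromisoformat(to_iso)

    items = (EarthPodData.objects
             .filter(datetime__gte=from_datetime, datetime__lte=to_datetime)
             .select_related('earth_pod')   # avoids one extra query per row for o.earth_pod
             .iterator())                   # streams rows instead of loading them all into memory

    path = os.path.join(settings.MEDIA_ROOT, filename)
    with open(path, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['earth_pod_id', 'datetime', 'datetime_pod', 'atmos_temperature',
                         'atmos_relative_humidity', 'atmos_pressure', 'soil_temperature',
                         'soil_relative_humidity', 'soil_moisture_2cm', 'soil_moisture_5cm',
                         'battery_voltage', 'light_analog'])
        for o in items:
            writer.writerow([o.earth_pod.pod_id, o.datetime, o.datetime_pod,
                             o.atmos_temperature, o.atmos_relative_humidity,
                             o.atmos_pressure, o.soil_temperature, o.soil_relative_humidity,
                             o.soil_moisture_2cm, o.soil_moisture_5cm,
                             o.battery_voltage, o.light_analog])
    return path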

You could identify at what size this begins to become a problem, and check the size of items before doing the actual processing. If items.count() is larger than some pre-determined value, run the task in the background, creating a file to be retrieved later.
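On the view side, that check could look roughly like this (a sketch only: ROW_THRESHOLD, the generated filename, and the JSON "processing" response are all illustrative choices, and the task is the one sketched above):

from django.http import JsonResponse

ROW_THRESHOLD = 10_000  # illustrative cut-off; tune it to what your server can build in time

items = EarthPodData.objects.filter(datetime__gte=from_datetime, datetime__lte=to_datetime)

if items.count() > ROW_THRESHOLD:
    # Too big to build inside the request: hand it to Celery and tell the
    # client where the file will appear once the task finishes.
    filename = f"export_{from_datetime:%Y%m%d%H%M}_{to_datetime:%Y%m%d%H%M}.csv"
    export_earth_pod_csv.delay(from_datetime.isoformat(), to_datetime.isoformat(), filename)
    return JsonResponse({'status': 'processing', 'file': filename})

# Small result set: write the CSV into the response directly, as in the original view.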


Thank you. I will try Celery as you have suggested.