Hi all. I have a question about file uploads.
As this is a feature supported by more or less every web framework, I feel the question might be a bit stupid, but I hope you can point me in the right direction.
When I upload large files, it seems that the full request body is read into memory and parsed before the upload handlers are called.
This results in an OOM kill of my application if memory is insufficient, regardless of whether the upload handler would write the file directly to disk or not.
I tested this in a Docker container limited to 2 GB of RAM, uploading a 6 GB file with both the default upload handlers and a custom one, and via rest_framework, GraphQL, and the Django admin panel directly.
Also, even when memory is sufficient, the upload takes a long time: the file is first read into memory, and only once the upload has finished is the handler called and the file written to disk.
I had expected the file to be written to disk as soon as the upload starts.
Is this intended behavior, and how can I upload large files when I have only limited memory resources?
The upload needs to happen within a single request/transaction and without any additional client-side libraries (such as JS code), since the endpoints are exposed as an API to third-party tools that use them with e.g. curl.
I am very grateful for any help.