I’m working on a project that logs to a database. To ensure the logging does not slow down request handling, I would like to make use of QueueHandler. Thus, I need a place (startup) where I can set up logging to bind a QueueHandler to my own database handler. I also need a place (shutdown) where I can call “listener.stop()” to flush the remaining messages to the database.
How should I achieve such implementation? Thanks.
Alternative thought - The generally-recommended mechanism within Django for performing “long-running” tasks outside the request thread is Celery. It might be worth creating a separate Celery task to do the logging. The idea here is to move that database activity into one (or more) separate processes. If performance is that critical, this would also give you the option of splitting the logging out to a separate machine and/or a separate database. (There are a lot of factors involved here.)
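To illustrate the shape of that idea: a minimal sketch, where the task name `write_log_entry` and the `LogEntry` model are hypothetical, not anything from your project. The Celery-specific parts (the decorator and the `.delay()` call) are shown in comments, and the plain function below demonstrates the same split without the Celery dependency, writing to a list in place of a database table:

```python
# Hedged sketch: offloading log writes to a Celery task.
# With Celery installed, the task would look roughly like this:
#
#     from celery import shared_task
#
#     @shared_task(ignore_result=True)
#     def write_log_entry(level, message):
#         LogEntry.objects.create(level=level, message=message)
#
# and the request thread would only serialize the arguments and enqueue them:
#
#     write_log_entry.delay("INFO", "request handled")
#
# The plain function below shows the same division of labor without Celery:
# the caller hands off (level, message) and the "task" does the slow write.
LOG_TABLE = []  # stands in for the hypothetical LogEntry database table


def write_log_entry(level, message):
    """Pretend this runs in a worker process and writes a database row."""
    LOG_TABLE.append((level, message))


write_log_entry("INFO", "request handled")
```

The key point of the design is that the request thread only pays the cost of serializing the arguments onto the broker; the actual database write happens in a worker process that can live on another machine.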
But first, do you have any evidence that this is a necessary step? Have you done any profiling to see how much of the overall request time is actually spent in the logging component? It’s common for developers to worry about performance in areas that don’t significantly affect the overall response.
Anyway, if you’ve decided that this really is the best way for you to do this, see the AppConfig class; its ready() method is the standard startup hook for this kind of initialization.
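For the QueueHandler route itself, a minimal sketch of the startup/shutdown wiring, using only the standard library. In a Django project the setup part would live in AppConfig.ready(); the DatabaseHandler here is a hypothetical stand-in for your real database handler, writing to a list instead of a table:

```python
# Non-blocking logging via QueueHandler/QueueListener (stdlib only).
# In Django, the setup below would go in AppConfig.ready().
import atexit
import logging
import logging.handlers
import queue


class DatabaseHandler(logging.Handler):
    """Hypothetical stand-in: pretend each record becomes a database row."""

    def __init__(self):
        super().__init__()
        self.rows = []  # stands in for the database table

    def emit(self, record):
        self.rows.append(self.format(record))


log_queue = queue.Queue(-1)  # unbounded queue between request and writer
db_handler = DatabaseHandler()
db_handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))

# The request thread only does a fast, non-blocking queue put ...
logger = logging.getLogger("app")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.QueueHandler(log_queue))

# ... while a background thread drains the queue into the slow handler.
listener = logging.handlers.QueueListener(log_queue, db_handler)
listener.start()

# Shutdown: flush remaining records when the interpreter exits.
atexit.register(listener.stop)

logger.info("request handled")

# Normally left to atexit; called here so the demo flushes immediately.
listener.stop()
atexit.unregister(listener.stop)  # avoid a second stop() at exit
```

The atexit hook covers a clean interpreter exit; as noted below, it will not run on a hard kill, which is a separate consideration.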
The shutdown is a different issue. How critical is it that your logging is complete? Is it going to be able to handle a “hard stop” (e.g. kill -9 on your container application, or a server power outage)? There are many circumstances where an application can ‘die dirty’ (i.e. without going through a proper shutdown); is that a factor here? That should also be considered while you’re looking into this.