I have an ASGI Django (v3.1) application and MariaDB.
The problem is: the DB drops idle connections after wait_timeout, and when I open a page afterwards I get an exception:
MySQLdb._exceptions.OperationalError: (2006, 'MySQL server has gone away')
followed by:
django.db.utils.OperationalError: (2006, 'MySQL server has gone away')
wait_timeout is at its default of 28800.
I set CONN_MAX_AGE=600, but it didn’t help.
What could be the root cause, and how can I fix it?
Does anyone have an idea?
Ideas? Sure. Helpful ideas? Not-so-much
A brief review of the settings docs shows a number of other settings that might be affecting this, possibly related to whether Django is opening the connection as an “interactive” session vs. a “batch” session.
See things like:
- https://mariadb.com/kb/en/handlersocket-configuration-options/#handlersocket_timeout
- https://mariadb.com/kb/en/server-system-variables/#interactive_timeout
- https://mariadb.com/kb/en/thread-pool-system-status-variables/#thread_pool_idle_timeout
Depending upon how busy you’re expecting your site to be, and whether or not your database is on the same server as your applications, you could also try:
- setting CONN_MAX_AGE=0
- changing your connection from using a TCP socket to a unix socket
- setting CONN_MAX_AGE to something like 240. (Some of those settings above have defaults of 300 - if you set CONN_MAX_AGE to something less than that, you might be able to determine whether or not it’s one of those settings.)
Sorry - I know this isn’t much help, but maybe it’ll give you some ideas for further research. (I haven’t used MariaDB in more than 6 years.)
Ken
Thanks for the answer.
But the problem is still present. I even tried setting CONN_MAX_AGE=5, but it looks as if Django doesn’t use this parameter at all.
Unfortunately, I can’t try WSGI; I don’t think my application will work that way. But maybe I could add some stubs… Who knows) Or maybe it’s reasonable to write a simple WSGI app, just for DB tests…
And I can’t switch to a unix socket, since the DB is on another host.
I solved the problem!
The root cause:
At the moment (version 3.1) the Django ORM is synchronous, and the asgiref.sync.sync_to_async adapter is used to call it from async code. That is where the problem lies.
This always works (for some reason):
await sync_to_async(User.objects.all)()
And this works only for the first calls, until the DB closes the connection:
await sync_to_async(User.objects.get)(username=username)
Right after the connection is closed by the DB, such a call raises the exception above. Apparently, this adapter keeps the connection open forever.
The easiest solution (from my point of view):
Take the database_sync_to_async helper from Django Channels (channels.db) and use it instead of sync_to_async:
await database_sync_to_async(User.objects.get)(username=username)
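For anyone wondering what the Channels helper changes: it wraps the sync call so that Django’s close_old_connections() runs before and after every invocation, discarding connections the server has already dropped. A minimal stdlib-only sketch of that pattern (the cleanup callable is passed in as a parameter here so the example has no Django dependency; in real code it would be django.db.close_old_connections):

```python
import asyncio
import functools


def db_sync_to_async(func, close_old_connections):
    """Sketch of the Channels pattern: run a sync callable in a
    thread, closing stale DB connections before and after the call."""
    @functools.wraps(func)
    async def wrapper(*args, **kwargs):
        def call():
            close_old_connections()  # drop connections past their max age
            try:
                return func(*args, **kwargs)
            finally:
                close_old_connections()  # clean up again after the query
        return await asyncio.to_thread(call)
    return wrapper
```

With this wrapping in place, a stale connection is discarded before each ORM call instead of being reused after the server has dropped it.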
I hope this is useful for someone who comes across this issue as I did.
And of course all of this will become obsolete when Django becomes fully async. Someday…
Thanks, nice solution. I’m also working with an ASGI application accessing the ORM. I guess views clean up their own connections, but ASGI doesn’t. I had been trying close_old_connections in different places with limited success, but the solution you found seems great, and worked perfectly.
Late to the party, but I ran into this issue as well and we found the following relevant piece of docs:
If a connection is created in a long-running process, outside of Django’s request-response cycle, the connection will remain open until explicitly closed, or timeout occurs.
This is obvious when looking at the Django source code:
def close_old_connections(**kwargs):
    for conn in connections.all(initialized_only=True):
        conn.close_if_unusable_or_obsolete()

signals.request_started.connect(close_old_connections)
signals.request_finished.connect(close_old_connections)
The close_old_connections and close_if_unusable_or_obsolete functions aren’t used anywhere else in either django or django_mysql, so it’s obvious that CONN_MAX_AGE is fully reliant on the request-response cycle. So if you have microservices that run as long-running Django commands, or use ASGI, this never gets called.
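A common workaround in such long-running processes is to call close_old_connections() yourself at the start of each unit of work. A sketch of that pattern, with the Django call passed in as a parameter so the example stands alone (in real code it would be django.db.close_old_connections):

```python
def run_worker(jobs, close_old_connections):
    """Long-running loop outside the request-response cycle: no
    request signals fire here, so stale connections must be
    recycled by hand before each job touches the database."""
    results = []
    for job in jobs:
        close_old_connections()  # closes connections past CONN_MAX_AGE
        results.append(job())
    return results
```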
The way this can be fixed is by creating a custom database ENGINE: create a new Python module (with an empty __init__.py) containing the following base.py file:
import time

from django.db.backends.mysql.base import DatabaseWrapper as MySQLDatabaseWrapper


class DatabaseWrapper(MySQLDatabaseWrapper):
    def close_if_health_check_failed(self, *args, **kwargs):
        # Recycle the connection if it has outlived CONN_MAX_AGE,
        # mirroring the age check in close_if_unusable_or_obsolete.
        if self.close_at is not None and time.monotonic() >= self.close_at:
            self.close()
        MySQLDatabaseWrapper.close_if_health_check_failed(self, *args, **kwargs)
And then refer to whatever Python module path it’s saved under as the ENGINE in your DATABASES setting. This code steals the relevant part of the existing close_if_unusable_or_obsolete function and piggybacks it onto close_if_health_check_failed, which gets called before a database cursor is created or a transaction is started. Effectively, it recycles connections according to your CONN_MAX_AGE setting every single time the DB is accessed, instead of before and after every request, so cleaning up connections no longer depends on the request-response cycle.
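Concretely, if the package is saved as e.g. myproject.mysql_backend (that module path and the connection details below are placeholders, use whatever you named it), the settings entry would look something like:

```python
# settings.py sketch; "myproject.mysql_backend" is the package
# containing the base.py above, and NAME/HOST are placeholders
DATABASES = {
    "default": {
        "ENGINE": "myproject.mysql_backend",
        "NAME": "mydb",
        "HOST": "db.example.com",
        "CONN_MAX_AGE": 600,
    }
}
```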
Trying to run your suggested code on the most recent version of Python (3.12.4) and Django (5.0.7) seems to result in an infinite recursion error.
This one worked for me (Django 5.1.2):
from django.db.backends.mysql.base import DatabaseWrapper as BaseDatabaseWrapper


class DatabaseWrapper(BaseDatabaseWrapper):
    def ensure_connection(self):
        # Before reusing an existing connection, ping it; if the
        # server has dropped it, close it so a fresh one is opened.
        if self.connection:
            if not self.is_usable():
                self.close()
        super().ensure_connection()