When are db connections closed?

Because of the issue I described in this thread, I have been taking a look at how Django handles opening and closing of db connections.

In particular, when a db connection is opened, I’m trying to track down when and where it gets closed. I want to understand how connections are treated in relation to the CONN_MAX_AGE setting.

In django.db.backends.base, in the connect method, I can see that the CONN_MAX_AGE setting is used to set a close_at value on the wrapper object returned for the connection.
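As a sketch of that logic (the function name here is mine; it just mirrors the close_at computation done at connect time, using the monotonic clock):

```python
import time

def compute_close_at(conn_max_age):
    # Mirrors the close_at computation done when a connection is opened:
    # None means "never expire"; otherwise the connection is marked to
    # expire CONN_MAX_AGE seconds from now.
    return None if conn_max_age is None else time.monotonic() + conn_max_age
```

So with CONN_MAX_AGE=0 the deadline is "right now", and with None there is no deadline at all.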

Then, there’s a method called close_if_unusable_or_obsolete, in which the connection’s close_at is evaluated to see if the connection must be closed.
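A minimal sketch of just the age part of that check (the real method also handles connection-health and error cases; the function name is mine):

```python
import time

def is_obsolete(close_at, now=None):
    # The connection is considered obsolete once its close_at deadline
    # has passed; close_at=None means it never expires.
    if now is None:
        now = time.monotonic()
    return close_at is not None and close_at <= now
```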

Per my understanding, that method is registered as a signal handler in django/db/__init__.py, and runs every time a request starts or finishes.
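To double-check my reading, I mocked up the wiring pattern with a toy Signal class (nothing here is the real Django API; only the pattern of connecting close_old_connections to the request-started and request-finished signals matches):

```python
# Toy re-creation of the signal wiring pattern in django/db/__init__.py
# (hypothetical Signal class; only the shape matches Django's).
class Signal:
    def __init__(self):
        self._handlers = []

    def connect(self, handler):
        self._handlers.append(handler)

    def send(self):
        for handler in self._handlers:
            handler()

request_started = Signal()
request_finished = Signal()
checks = []

def close_old_connections():
    # In Django this iterates over the open connections and calls
    # close_if_unusable_or_obsolete() on each one.
    checks.append("checked")

request_started.connect(close_old_connections)
request_finished.connect(close_old_connections)
```

Sending both signals, as one request/response cycle would, runs the check twice; if no signal ever fires, the check never runs, which matches the "no new request, no close" behavior.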

It appears to me the method is always executed synchronously, and its execution depends on there being a new request. If no new request arrives, no old connections are closed, even if their age is past the max setting. Is this correct?

Anyway, here’s what I tried: using sqlite as my db backend and ATOMIC_REQUESTS=True, I added a debug print inside the connect method and one inside the close method, to try and see when the opened connections get closed.

With CONN_MAX_AGE set to 0, what I observed was that, for each request to my application, one or more connections got opened, and they all got immediately closed after the request was served.

Here’s the interesting thing though: with CONN_MAX_AGE set to anything greater than 0 (I tried many values, including 1), I didn’t see anything printed to the console. Given the way the signal handler is registered, I would expect it to get called on the next request and close the older connection, but the method doesn’t get called.

Somehow though, printing the number of connections from this line (by simply counting the iterations of the loop) always showed one active connection. So either the old one was being re-used, or it was being closed from somewhere else and a new one was opened. Either way, could somebody please help me clarify when and where the connections get closed? I feel like this could help me solve the issue I described in the thread I linked.

You are correct, you’ve found the right source.

You’re also correct that if no requests are received, this doesn’t get run.

I don’t think there’s anything else to clarify here - I think you’ve summarized it correctly.

I also think you’re right that if you don’t have regular connections being made to your Django app, these connections are going to hang around.


The only thing I don’t understand is: why am I not seeing the debug prints I put inside the close method when the setting has a value greater than zero? I put prints all the way from close_if_unusable_or_obsolete down to close, and apparently none of them get executed when there’s a new request. All that’s printed is what I put inside the method that gets a new connection.

I know you’re using Daphne and websockets here - keep in mind that moving data through a websocket is not a request and will not trigger the max-age check. If you have infrequent but long-running connections, this behavior could be expected. (Also, if this is a “production-quality” deployment where your web server or CDN handles static files, that reduces the number of requests your application will see as well.)

<opinion type="unfounded" evidence="none">
I believe that what you’re encountering may be related to djangochannelsrestframework library.

I’ve read just enough about it to form the opinion that it’s a less-than-robust pattern for constructing a Channels application.

Personally, I wouldn’t be comfortable mixing my api handlers with my websocket handlers. My websocket listeners handle the websocket connections only. All processing and functionality within the application is elsewhere. I certainly don’t want every websocket client opening up its own database connection that never gets closed because the connection is a long-running connection.
</opinion>

I’ll keep in mind the rest of your post and certainly investigate djangochannelsrestframework, but the behavior I described here, like I said in the initial post of this thread, happened on sqlite and the local development server, not with Daphne in production (Daphne is related to the issues we’re discussing over in the other thread).

Yep, my mistake - I’m conflating the two topics.

Are you still running your channels-based app here, using the Channels version of runserver? Or is this a regular Django app using the standard Django runserver? (Which implies a virtual environment without channels being installed, because channels overrides the standard Django runserver - and still uses Daphne.)

Regarding PostgreSQL vs SQLite, that’s potentially a different issue. SQLite is a “file-based” database accessed by opening the file directly. There are no server “connections” that need to be managed. The file would be opened and a file handle passed around as needed. (Having said that, it wouldn’t matter at this level; that should all be handled in the lower-level libraries. However, it is a significant difference.)
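For what it’s worth, this is easy to see with the standard sqlite3 module: a “connection” is just an open handle on the database file, and closing and reopening it is cheap (paths here are temporary placeholders):

```python
import os
import sqlite3
import tempfile

# A SQLite "connection" is an open handle on a file, not a network session.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()
conn.close()

# A second "connection" simply reopens the same file.
conn2 = sqlite3.connect(path)
value = conn2.execute("SELECT x FROM t").fetchone()[0]
conn2.close()
```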

Still using channels. That’s the same exact project and code base, just run locally.

Thanks for clarifying. I agree this probably doesn’t matter though, as with CONN_MAX_AGE=0 the close method is still being called even with sqlite.

Ok, so it still can be Daphne / djangochannelsrestframework related. All you’ve really changed here is the switch from PostgreSQL to Sqlite3.

I have verified that if I run the normal Django runserver on a standard Django (non-channels) app using Sqlite3, it does run “close_old_connections”, regardless of the CONN_MAX_AGE setting, which then does call close_if_unusable_or_obsolete. This happens on every request.

However, the connection itself does not call close unless CONN_MAX_AGE is 0 or the connection’s age has expired. (These are logically the same, since the code effectively treats an age of 0 as “now”, so the connection is always expired.) If the connection has not expired, or if CONN_MAX_AGE is set to None, the call to close is never made.
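A small sketch of why 0 behaves like “always expired” (my own function, mirroring the logic rather than quoting Django’s code):

```python
import time

def expired_by_end_of_request(conn_max_age):
    # With CONN_MAX_AGE=0, close_at is set to "now + 0" at connect time,
    # so by the time the end-of-request check runs, the deadline has
    # already passed. None means the connection never expires.
    if conn_max_age is None:
        return False
    close_at = time.monotonic() + conn_max_age  # set at connect time
    # ...request is served...
    return time.monotonic() >= close_at  # end-of-request check
```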

The function close_if_unusable_or_obsolete is called on my installation too, regardless of the CONN_MAX_AGE setting. The problem in my case is that, if CONN_MAX_AGE is greater than zero, that function never decides to close the connection.

In fact, printing the close_at attribute of the connection object outputs a timestamp roughly CONN_MAX_AGE seconds in the future, so the connection isn’t “old” and therefore doesn’t get closed.

It’s almost as if, by the time Django gets there, there’s already a new connection being tested for age, and not the one I had previously opened. The old one kinda just disappears, never getting closed. Which is even weirder considering if I count the open connections, it still says there’s only one.

Does this not happen to you? Do you think here is where channels might be interfering?

You could probably check to see which connections are being referenced by printing the object id. It would be interesting to see if the expiration time is being updated or if you’re always getting new connections.

No, I don’t see that behavior in my test Django project. Connections with CONN_MAX_AGE > 0 all call close on the first request after that time window. I’ve never seen that value change.


Daphne & Channels don’t close connections yet, by “design” (there is an open ticket for it). Basically, in Daphne you get a new “thread” (not exactly, but a new async context), and that counts as a “thread” as far as Django is concerned. Because of that, Django will never reuse such a “thread”, and so connections with a max age > 0 are never closed. To fix this, we’d need to implement a connection pool in Django.
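The effect is easy to reproduce with plain threading.local, which is essentially how per-“thread” connection storage behaves; the objects below are stand-ins, not real database connections:

```python
import threading

# Each thread (or, under Daphne, each new async context treated as a
# thread) gets its own private "connection"; no other context can see
# it, so no other context can ever close it.
local = threading.local()
opened = []

def get_connection():
    if not hasattr(local, "conn"):
        local.conn = object()  # stand-in for a real DB connection
        opened.append(local.conn)
    return local.conn

def worker():
    get_connection()

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Three separate "connections" now exist, none of them reachable from
# the contexts that are still running.
```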


Does this mean that an application which uses Channels will increase the number of connections to the db monotonically with time, until some limit (on the db side) is hit, never decreasing that connection count?

Assuming CONN_MAX_AGE is anything > 0 and the db does not close idle connections eagerly, then sadly yes.

If you are able to take a look at this thread: OperationalError: sorry, too many clients already, do you believe there’s a chance this could be the cause of the issue described there?

Yes, this certainly can be the reason for the issue you are seeing.

Well, that’s a step ahead already in finding the cause of the issue!

Until the issue is corrected on Channels’ side, is there anything I can do to manually close the connections from my consumers? I wouldn’t want to just blindly call close_old_connections() when I detect no connected clients (assuming there is even a reliable way to detect how many users are connected to my consumer); is there anything smarter that can be done?

Well, make sure CONN_MAX_AGE is set to 0; this will close connections at the end of each request. Then add pgbouncer/pgpool to reduce the connection-creation load on the db server.
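As a settings sketch (all names and values below are placeholders; the point is CONN_MAX_AGE=0 and pointing Django at the pooler’s host/port rather than at Postgres directly):

```python
# settings.py fragment (placeholder values)
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",          # placeholder database name
        "USER": "myuser",        # placeholder
        "HOST": "127.0.0.1",     # point at pgbouncer, not Postgres itself
        "PORT": "6432",          # pgbouncer's conventional port (placeholder)
        "CONN_MAX_AGE": 0,       # close the connection at the end of each request
    }
}
```

With this setup, Django opens and closes a cheap connection to the pooler per request, while the pooler keeps the expensive server-side connections alive and reuses them.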