Django Channels StaticFiles error

Hi, good morning. In my consumer, when I am calling a websocket endpoint, I am getting this error:

Application instance <Task pending name='Task-15' coro=<StaticFilesWrapper.__call__() running at D:\projects\django_5\venv\Lib\site-packages\channels\staticfiles.py:44> wait_for=> for connection <WebSocketProtocol client=['127.0.0.1', 52878] path=b'/ws/live-tracker/'> took too long to shut down and was killed.

The thing is, I don't think I am handling any staticfiles, however this error comes up in the console.

Can someone please point me to why this message is showing?

For clarification, this is my code:

import urllib.parse

import requests
import websockets
from channels.exceptions import StopConsumer
from channels.generic.websocket import AsyncWebsocketConsumer


class LiveTracker(AsyncWebsocketConsumer):
    async def connect_to_websocket(self, uri, headers=None):
        # Relay every message from the upstream websocket to this consumer's client.
        async with websockets.connect(uri, extra_headers=headers) as websocket:
            while True:
                message = await websocket.recv()
                if message == "close_connection":
                    await websocket.close()
                    break
                await self.send(text_data=message)

    async def connect(self):
        await self.accept()
        session_cookie = await self.fetch_session_cookie()
        headers = {"Cookie": session_cookie}
        uri = "ws://xx.xx.xx.xx:xxxx/api/socket"
        # Note: connect() does not finish until this relay loop ends.
        await self.connect_to_websocket(uri, headers)

    async def fetch_session_cookie(self):
        url = 'http://xx.xx.xx.xx:xxxx/api/cookie'
        payload = {
            'email': 'myemail@email',
            'password': 'mypassword'
        }
        headers = {
            'Content-Type': 'application/x-www-form-urlencoded'
        }
        encoded_payload = urllib.parse.urlencode(payload)
        # Note: requests is a blocking HTTP client, so this call blocks the event loop.
        response = requests.post(url, data=encoded_payload, headers=headers)
        session_cookie = response.headers.get('Set-Cookie')
        return session_cookie

    async def disconnect(self, close_code):
        raise StopConsumer()
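
For context, the consumer is exposed over ASGI roughly like this (a sketch, not my actual file; only the /ws/live-tracker/ path is taken from the error message above, the module names are placeholders):

# asgi.py -- a minimal sketch only; "myapp" is a hypothetical module path.
from channels.routing import ProtocolTypeRouter, URLRouter
from django.core.asgi import get_asgi_application
from django.urls import re_path

from myapp.consumers import LiveTracker  # placeholder import

application = ProtocolTypeRouter({
    "http": get_asgi_application(),
    "websocket": URLRouter([
        re_path(r"^ws/live-tracker/$", LiveTracker.as_asgi()),
    ]),
})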

This is with runserver, yes? Likely it's just an artefact: you get this message when the event loop shuts down before the task has the opportunity to clean up properly.

If you’re able to reduce it to a minimal example, it would be possible to say more.

Hi,
yes, this is with runserver.

I'm sorry, this is the most simplified version. The endpoint requires the session cookie (for authentication); without it I cannot test.

So, in a production environment, will this issue persist?

thanks

Hi. I am having the same error. Did you solve it?

Same here, any idea what causes this?

I seem to be able to trigger this when my JavaScript application issues multiple requests in parallel.

Django==5.1.1
python 3.12.6

It seems to only affect runserver (production uses daphne, and has never had any issues).

Removing (disabling) any websocket-related code does not seem to help.

However, removing 'channels' from my INSTALLED_APPS seems to 'fix' the problem.

So: 'channels' is unused (no code imports it).
Just adding it to INSTALLED_APPS → this hang shows up when we get multiple parallel requests to runserver.
Commenting out that single line in settings.py (removing 'channels' from INSTALLED_APPS) and the problem no longer appears.
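
In settings.py the toggle is literally just this one line (the other app names here are placeholders, not my real list):

# settings.py -- a sketch of the single-line toggle described above.
INSTALLED_APPS = [
    "django.contrib.staticfiles",
    # "channels",  # present: runserver hangs under parallel requests; removed: no hang
    # ...the rest of the project's apps...
]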

version info:
channels==3.0.5
channels-redis==3.4.1

NB: this hang occurs without triggering any URL/endpoint that uses channels. Plain synchronous views: a DB lookup, then render JSON.

This is an artefact of the way the development server serves static files during development.

It’s not a real issue: The auto-reloader is not giving the static files task time to exit cleanly when reloading the server. Nothing comes of this.
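
For background, and as a rough sketch from memory rather than the exact channels source: when channels is installed together with django.contrib.staticfiles, its runserver wraps the configured ASGI application so that static files can be served during development, which is why StaticFilesWrapper shows up in the task name even for projects that never touch static files themselves.

# Illustration only -- not the actual channels implementation.
from channels.staticfiles import StaticFilesWrapper
from django.core.asgi import get_asgi_application

# Whatever ASGI application the project configured gets wrapped, so during
# development every request -- including ones unrelated to static files --
# passes through StaticFilesWrapper.__call__.
application = StaticFilesWrapper(get_asgi_application())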

If someone wants to take the time to make it exit cleanly, that would be nice. It's not urgent, per se, but it does cause people to wonder why it's happening.

Well, it is a real issue. It's not just a confusing message.
It doesn't seem to happen on auto-reloading, but basically at any time when parallel requests are coming in.
The runserver actually stops working when this happens. The program blocks and no longer serves any requests.

We encountered this problem today when upgrading our application from Django 4.2 to Django 5.1 (and Python from 3.8 to 3.12.6). This basically makes development impossible; it hangs within a few seconds.

We’re going to investigate if upgrading channels to version 4 fixes this problem.

The traceback; after this, a Ctrl-C and a restart of runserver are required.

HTTP OPTIONS /api/1.0/orgs/1/schedules?published=true&active=true 200 [0.00, 127.0.0.1:59887]
HTTP GET /api/1.0/orgs/1/schedules?published=true&active=true 200 [0.08, 127.0.0.1:59870]
Application instance <Task pending name='Task-75' coro=<StaticFilesWrapper.__call__() running at .../lib/python3.12/site-packages/channels/staticfiles.py:44> wait_for=<Task cancelling name='Task-85' coro=<ASGIHandler.handle.<locals>.process_request() running at .../lib/python3.12/site-packages/django/core/handlers/asgi.py:185> wait_for=<Future pending cb=[_chain_future.<locals>._call_check_cancel() at /opt/homebrew/Cellar/python@3.12/3.12.6/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/futures.py:391, Task.task_wakeup()]> cb=[Task.task_wakeup()]>> for connection <WebRequest at 0x1135bdca0 method=GET uri=/api/1.0/orgs/1/zones/4 clientproto=HTTP/1.1> took too long to shut down and was killed.
Application instance <Task cancelling name='Task-75' coro=<StaticFilesWrapper.__call__() running at .../lib/python3.12/site-packages/channels/staticfiles.py:44> wait_for=<_GatheringFuture pending cb=[Task.task_wakeup()]>> for connection <WebRequest at 0x1135bdca0 method=GET uri=/api/1.0/orgs/1/zones/4 clientproto=HTTP/1.1> took too long to shut down and was killed.

Do you think I should file a ticket at https://code.djangoproject.com/ticket/ ?

I’m not convinced that’s the same issue.

The error raised by the static files handler when the event loop shuts down is not likely the cause of the blocking you're seeing.

I'd advise creating a minimal reproducer from the latest versions; then you can open an issue on the Django repo or the Daphne repo, as you prefer. (Without such a reproducer, folks won't be able to help.)

Hm… you’re probably right.

It might somehow be related to #35757 (memcached.PyMemcacheCache reentrancy problem with ASGI-based runserver) – Django

If I use cache.backends.locmem.LocMemCache instead of a memcached backend, then all works fine (in runserver), and none of the problems reproduce…
The memcached setup seems to be working; most calls just work, and looking at the memcached daemon's -vv output, all requests are handled just fine.
Just when parallel requests are served by runserver → things hang on the Django/runserver side.
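
For reference, the swap in settings.py is roughly this (the memcached location is a placeholder, not our real setup):

# settings.py -- a sketch of the cache-backend swap described above.
CACHES = {
    "default": {
        # Hangs runserver under parallel requests once channels/daphne is installed:
        # "BACKEND": "django.core.cache.backends.memcached.PyMemcacheCache",
        # "LOCATION": "127.0.0.1:11211",  # placeholder
        # Works fine everywhere:
        "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
    }
}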

In production (daphne), everything always seems to work just fine (with the memcached server).


@carltongibson
I did some further debugging; these Django runserver/memcached problems only appear when adding (without using) channels/daphne.

I created a minimal (polls) reproducer with the latest versions. See #35757 (memcached.PyMemcacheCache reentrancy problem with ASGI-based runserver) – Django

As this may be channels-related rather than a Django issue, I'd like to ping you. Could you have a glance at that ticket? Maybe it rings a bell?

NB: the problem is no longer as initially reported here (no StaticFilesWrapper.__call__()… stuff), but rather runserver stops serving requests when accessed concurrently, when mixing in channels/daphne.