Signal to execute call_command with Apache wsgi

Two suggestions that basically boil down to “Do exactly and only what the First Steps guide shows.”

Don’t specify any queue names - let Celery use its defaults.

Use the delay function instead of apply_async.

Don’t over-complicate things until you need to.
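
For illustration, the difference looks like this (the task name my_task is hypothetical):

my_task.delay(42)                               # default queue and routing
my_task.apply_async(args=[42], queue='custom')  # extra options such as queue= are
                                                # what over-complicates a first setup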

I used the queue name to see in the RabbitMQ monitor where the requests are coming from. If I use the delay() function I will not have the option to name the queue, am I correct?

Correct, and in 99+% of all routine cases, you don’t want - or need - to do that. You can still see the queue names, they’ll just be the Celery-generated names.

Again, don’t over-complicate this while you’re still in the process of trying to get it to work. Take the simple approach first.

You might be right. I appreciate your help.

Let me remove the name and use delay(). The main goal is to make it work, and later I can “complicate” it if needed.

I switched to delay() and removed the queue name.
Here is the current result:

rabbitmqctl list_queues name messages messages_ready messages_unacknowledged
Timeout: 60.0 seconds ...
Listing queues for vhost / ...
name    messages        messages_ready  messages_unacknowledged
celery  0       0       0
worker1@hostname.celery.pidbox      0       0       0
celeryev.4f3993f7-0952-4400-adf7-c1da94fe582d   0       0       0
deb9f174-92c7-3508-b637-8cabf5213496    1       1       0

I see that the task generated by Django came in as:

deb9f174-92c7-3508-b637-8cabf5213496    1       1       0

and is not processed :frowning: The Celery worker is running under the name:

worker1@hostname.celery.pidbox      0       0       0

Please post:

  • All the Celery-related settings in settings.py
  • Your current Celery task definition
  • The complete view that calls the Celery task
  • The full command line used to run the worker.

Celery settings in celery.py

from celery import Celery

app = Celery('app_celery',
             broker='amqp://admin:password@hostname:5672/',
             backend='rpc://',
             include=['app_celery.tasks'])

app.conf.update(
    result_expires=3600,
)

if __name__ == '__main__':
    app.start()

Celery task definition in file tasks.py

from app_celery.celery import app
from django.core.management import call_command


@app.task
def app_scrapy_library_number(number):
    call_command('app_scrapy_library_number', '--number', number)

I call the Celery task from a signal, because I need it only when a new model object is created. Here is the code:

from django.db.models.signals import post_save
from django.dispatch import receiver
# Maker and app_scrapy_library_number come from the app's models and tasks modules

@receiver(post_save, sender=Maker)
def app_scrapy_library_number_handler(sender, instance, created, **kwargs):
    if created:
        app_scrapy_library_number.delay(instance.number)

And finally, the full command line I used to run the worker in the background:

# celery multi restart worker1 -A app_celery -l INFO

In tasks.py I use call_command() because the management command runs a Scrapy script.
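
For context, here is a minimal sketch of what such a management command could look like; only the command name and its --number option appear in this thread, so the body is purely illustrative:

from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = 'Runs the Scrapy script for the given number (illustrative sketch)'

    def add_arguments(self, parser):
        parser.add_argument('--number', required=True)

    def handle(self, *args, **options):
        # Placeholder: the real command runs a Scrapy spider here,
        # e.g. via scrapy.crawler.CrawlerProcess or a subprocess call.
        self.stdout.write(f"Scraping library number {options['number']}")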

Ok, this is at the point where you now need to move on to the Django-specific configuration issues. Your next step is to read First steps with Django — Celery 5.3.6 documentation to see what adjustments need to be made to what you have here.

:slight_smile: So you see the error, and it is a “primary school” one? :slight_smile:
Let me read the documentation you sent me. I appreciate your help because I am learning a lot.

No, it’s just that this is a multi-step process due to the way the docs are organized. As fragmented as it may appear, this is the only way I know of to work through the complete setup.


I have reorganized my project structure according to the Celery documentation.

Here is my current structure:

- django/
  - manage.py
  - nweb/
    - __init__.py
    - settings.py
    - celery.py
    - urls.py

and my celery.py file looks like this:

import os

from celery import Celery

# Set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'nweb.settings')

app = Celery('nweb')

# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
#   should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')

# Load task modules from all registered Django apps.
app.autodiscover_tasks()


@app.task(bind=True, ignore_result=True)
def debug_task(self):
    print(f'Request: {self.request!r}')
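
As a quick sanity check of this wiring (assuming the nweb layout above), the bundled debug_task can be enqueued from a Django shell and should appear in the worker’s log:

$ python manage.py shell
>>> from nweb.celery import debug_task
>>> debug_task.delay()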

I also added the Celery configuration to settings.py as follows:

# Celery
CELERY_BROKER = 'amqp://admin:password@hostname:5672/'

and now when Django creates a task, it does not go to RabbitMQ at all (not visible in the RabbitMQ monitor) :frowning:

RabbitMQ is working as before:

systemctl status rabbitmq-server
● rabbitmq-server.service - RabbitMQ Messaging Server
     Loaded: loaded (/lib/systemd/system/rabbitmq-server.service; enabled; preset: enabled)
     Active: active (running) since Tue 2023-12-12 17:33:58 CST; 3h 29min ago

I will appreciate any hint or suggestion about what is wrong this time, such that the task does not even reach RabbitMQ :frowning:

Even running the Celery worker does not work now :frowning: Any suggestions?

# celery multi start worker -A app_celery -l INFO
celery multi v5.3.6 (emerald-rush)
> Starting nodes...
Usage: python -m celery [OPTIONS] COMMAND [ARGS]...
Try 'python -m celery --help' for help.

Error: Invalid value for '-A' / '--app':
Unable to load celery application.
The module app_celery was not found.
        > worker@n002: * Child terminated with exit code 2
FAILED

I’ve pulled up one of our projects that use Celery - and I notice a couple of differences.

We have defined our tasks using the @shared_task decorator as shown at Using the shared_task decorator.

The -A app parameter appears before worker in the command line.

For initial testing purposes, don’t use the multi or restart parameters. Again, start with the minimum requirements before you start adding things, and running the worker in a console session where you can see the output is helpful when getting started.
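
For example, a minimal foreground invocation (assuming the nweb project module from the layout above) would be:

celery -A nweb worker -l INFO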

Also, to ensure that you have the “infrastructure” side of this correct, you might want to try downloading and running the example program that is referenced on that First Steps with Django page. (The note for this appears directly above this link: Extensions)

(Side note: There’s another difference - we use Redis as the broker, not amqp. It’s been a while since I’ve used it, so I’m not going to be much help there if that is the issue.)

I did not mention that I added the @shared_task decorator to my task, so it now looks like:

tasks.py

from celery import shared_task
from django.core.management import call_command


@shared_task
def app_scrapy_library_number(number):
    call_command('app_scrapy_library_number', '--number', number)

and my __init__.py file is as follows:

# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as app_celery

__all__ = ('app_celery',)

Should it be in the __init__.py located under /django/nweb or /django/app_module?

It should be in the same directory as your celery.py file. That import statement is a “relative import”, which means it’s going to be looking for a celery.py file in that same directory - both of which should be in the nweb directory.

And following your recommendation, I successfully ran the worker with this command:

celery -A celery worker -l INFO

Side note: For testing/diagnostic purposes only, you might want to run it with DEBUG logging instead of INFO. While you’re trying to get things straightened out, there may be more valuable information presented that way.
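
For example, again assuming the nweb project module:

celery -A nweb worker -l DEBUG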


I am wondering why the tasks were reaching RabbitMQ before, but now they are not going there at all.
I did not change much, just the structure; the call of the task is the same (now with the @shared_task decorator).
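
One thing worth checking, given the namespace='CELERY' line in celery.py above: with that namespace, Celery reads its configuration from settings named after its own option names with a CELERY_ prefix, so the broker option broker_url is expected as CELERY_BROKER_URL rather than CELERY_BROKER. A sketch of the settings.py entry under that assumption:

# settings.py
# With app.config_from_object('django.conf:settings', namespace='CELERY'),
# the broker_url option must be spelled CELERY_BROKER_URL.
CELERY_BROKER_URL = 'amqp://admin:password@hostname:5672/'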