I used the queue name to see in the RabbitMQ monitor where the requests are coming from. If I use the delay() function, I will not have the option to name the queue, am I correct?
Correct, and in 99+% of routine cases you don’t want - or need - to do that. You can still see the queue names; they’ll just be the Celery-generated names.
Again, don’t over-complicate this while you’re still in the process of trying to get it to work. Take the simple approach first.
OK, you’re now at the point where you need to move on to the Django-specific configuration issues. Your next step is to read the “First steps with Django” page of the Celery 5.3.6 documentation to see what adjustments need to be made to what you have here.
No, it’s just that this is a multi-step process due to the way the docs are organized. As fragmented as it may appear, this is the only way I know of to work through the complete setup.
import os
from celery import Celery
# Set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'nweb.settings')
app = Celery('nweb')
# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
# should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')
# Load task modules from all registered Django apps.
app.autodiscover_tasks()
@app.task(bind=True, ignore_result=True)
def debug_task(self):
    print(f'Request: {self.request!r}')
Even running the Celery worker does not work now. Any suggestions?
# celery multi start worker -A app_celery -l INFO
celery multi v5.3.6 (emerald-rush)
> Starting nodes...
Usage: python -m celery [OPTIONS] COMMAND [ARGS]...
Try 'python -m celery --help' for help.
Error: Invalid value for '-A' / '--app':
Unable to load celery application.
The module app_celery was not found.
> worker@n002: * Child terminated with exit code 2
FAILED
In Celery 5, the -A app parameter must appear before the worker command on the command line, and the app name needs to match your package - nweb here, not app_celery (e.g. celery -A nweb worker -l INFO).
For initial testing purposes, don’t use the multi or restart parameters. Again, start with the minimum requirements before you start adding things, and running the worker in a console session where you can see the output is helpful when getting started.
Also, to ensure that you have the “infrastructure” side of this correct, you might want to try downloading and running the example program that is referenced on that First Steps with Django page. (The note for this appears directly above this link: Extensions)
(Side note: There’s another difference - we use Redis as the broker, not amqp. It’s been a while since I’ve used it, so I’m not going to be much help there if that is the issue.)
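For reference, with the namespace='CELERY' setup above, the broker choice comes down to a single settings key; hypothetical host/credential values shown, adjust them to your environment:

```python
# settings.py fragment -- with namespace='CELERY', this maps to Celery's broker_url.
CELERY_BROKER_URL = 'amqp://guest:guest@localhost:5672//'  # RabbitMQ (amqp)
# CELERY_BROKER_URL = 'redis://localhost:6379/0'           # Redis alternative
```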
I did not mention that I added the @shared_task decorator to my task, so it now looks like:
tasks.py
from celery import shared_task
from django.core.management import call_command
@shared_task
def app_scrapy_library_number(number):
    call_command('app_scrapy_library_number', '--number', number)
and my __init__.py file is as follows:
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as app_celery
__all__ = ('app_celery',)
Should this __init__.py be located under /django/nweb or /django/app_module?
It should be in the same directory as your celery.py file. That import statement is a “relative import”, which means it’s going to be looking for a celery.py file in that same directory - both of which should be in the nweb directory.
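In other words, the layout should look roughly like this (a sketch; directory names taken from your posts):

```python
# django/nweb/__init__.py -- next to celery.py and settings.py
#
#   django/
#     nweb/
#       __init__.py   <- this file
#       celery.py
#       settings.py
#     app_module/
#       tasks.py      <- @shared_task definitions live in the apps

from .celery import app as app_celery

__all__ = ('app_celery',)
```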
Side note: For testing/diagnostic purposes only, you might want to run it with DEBUG logging instead of INFO. While you’re trying to get things straightened out, there may be more valuable information presented that way.
I am wondering why the tasks were being queued before but now they are not reaching RabbitMQ at all.
I did not change much, just the structure; the call to the task is the same (now with the @shared_task decorator).