Signal to execute call_command with Apache wsgi

When you try to call a task, are you seeing any output being generated by the worker process?
(If so, what does the output from the worker task look like?)

When everything’s working correctly, it’s highly unlikely that you’re going to see the task in the queue itself because it’s going to be dispatched almost immediately.

Unfortunately I don’t know how to call the task manually :frowning: I call the task via Django as follows:

from django.db.models.signals import post_save
from django.dispatch import receiver

from app_library.models import Maker  # assuming Maker is defined in this app's models
from app_library.tasks import app_scrapy_library_number

@receiver(post_save, sender=Maker)
def app_scrapy_library_number_handler(sender, instance, created, **kwargs):
    # Queue the scraping task only when an existing Maker is updated, not on creation.
    if not created:
        app_scrapy_library_number.delay(instance.number)

How can I create the task manually?

Side note: You could either create a view that calls that function, or you could create a custom management command.

You might also want to create a trivial function as a shared task, like the example code shows - again to demonstrate the functionality of Celery itself.
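
For reference, that trivial task is just the add() example from the Celery docs - a minimal sketch, assuming it goes into the tasks.py of one of your apps:

# app_choice/tasks.py (or whichever app you prefer) - a trivial task used only to prove the worker is alive
from celery import shared_task

@shared_task
def add(x, y):
    return x + y

Calling add.delay(2, 3) from the shell or a management command should then show up in the worker’s log if everything is wired up correctly.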

But somehow I’m assuming you’re testing this by doing something that causes this function to be called. That’s what I’m referring to here.

Oh yes, when I call the command via shell it works perfectly, no issues:

# python3.11 manage.py app_scrapy_library_number --number=324699606

It does what I need without error.

That’s not what I’m referring to.

I’m talking about creating a management command that calls your Celery task using .delay to demonstrate Celery being functional.

Ok, I can try that right now. Let me check.

I have created the test command as follows:

from django.core.management.base import BaseCommand
from app_choice.tasks import app_scrapy_library_number


class Command(BaseCommand):
    def handle(self, *args, **options):
        app_scrapy_library_number.delay('8571699606')

but no success :frowning:

I call it with command:

# python3.11 manage.py app_celery_test_command

What seems weird to me is that RabbitMQ shows 3 different queues. I created only one.

# rabbitmqctl list_queues name messages messages_ready messages_unacknowledged
Timeout: 60.0 seconds ...
Listing queues for vhost / ...
name    messages        messages_ready  messages_unacknowledged
worker@n002.celery.pidbox       0       0       0
celeryev.54b244b1-7d59-4733-ad40-d51c3f4934d8   0       0       0
celery  0       0       0

I created only this one:

worker@n002.celery.pidbox       0       0       0

I’m still fighting with Celery :slight_smile: I will not give up :slight_smile:

This time I followed your recommendation to use the simple example from the documentation, and ran the simple add() function from the shell command line.
And here is the error:

>>> from app_choice.tasks import add
>>> res = add.delay(2,3)
>>> res.get()
Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File "/n/django/venv/lib/python3.11/site-packages/celery/result.py", line 251, in get
    return self.backend.wait_for_pending(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/n/django/venv/lib/python3.11/site-packages/celery/backends/base.py", line 755, in wait_for_pending
    meta = self.wait_for(
           ^^^^^^^^^^^^^^
  File "/n/django/venv/lib/python3.11/site-packages/celery/backends/base.py", line 1104, in _is_disabled
    raise NotImplementedError(E_NO_BACKEND.strip())
NotImplementedError: No result backend is configured.
Please see the documentation for more information.
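
That particular error is about the .get() call rather than the task itself: .delay() can still send the task and the worker can still execute it, but res.get() needs a result backend to read the return value from. A minimal sketch of one way to configure it, assuming the usual CELERY_-prefixed Django settings are being picked up ('rpc://' sends results back over the broker; django-celery-results is another common choice):

# settings.py
CELERY_RESULT_BACKEND = 'rpc://'

Without a backend you can still confirm the task ran by watching the worker’s log output.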

Finally, I think I got it working :slight_smile:

The issue was trivial.
In my settings I had:

# Celery
CELERY_BROKER = ...

instead of

# Celery
CELERY_BROKER_URL = ...
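
In hindsight, the name matters because of how the Celery app reads Django settings. The standard wiring (module names here are a guess - yours may differ) strips the CELERY_ prefix and only recognises real Celery setting names, so CELERY_BROKER_URL maps to broker_url while CELERY_BROKER maps to nothing Celery knows about:

# app_celery/celery.py - the standard layout from the Celery/Django docs
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'app_celery.settings')

app = Celery('app_celery')
# Only settings prefixed with CELERY_ are read here, e.g. CELERY_BROKER_URL.
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()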

Currently the tasks are added to the RabbitMQ queue correctly, and to process them I have to run the worker with the command:

celery -A app_celery worker -l INFO

How do I set up the worker to process all incoming tasks automatically?

I have never had to do anything “special” to get it to work. Once everything was set up correctly, it just works.

I think I’m not understanding what you’re asking here.

I am sorry, I was not clear and my note was misunderstood. I mean that when Django or I queue a task, it goes to RabbitMQ and I see it waiting for execution until I manually run the command

celery -A app_celery worker -l INFO

You mentioned that it should always happen automatically, so I assume something is unfortunately still wrong in the Celery configuration I made.

Nope, nothing wrong. Now it’s time for the next step - setting it up for production.

Quoting directly from the docs at Running the celery worker server

In production you’ll want to run the worker in the background as a daemon. To do this you need to use the tools provided by your platform, or something like supervisord (see Daemonization for more information).

This means that the Celery worker needs to be a persistent process that is always running. Celery doesn’t start the process - it expects the process to already be active. You need to set it up using whatever mechanism you prefer to run and manage this type of process.

Understood. I am using Debian 12 at the moment, so I will now figure out how to set up the process to run permanently. Thank you so much, Ken, I am learning a lot. Look, yesterday I didn’t even know what Celery is or how it works, and today I have made my own Celery configuration, cleaned up my Django code of “weird signals”, and grasped how to execute external commands from Django. It’s all because of you and the knowledge you are sharing with others on this forum. Is there a chance I can show my gratitude in the form of a coffee fund transfer?

I am following the documentation “How to run the worker as a daemon”

https://docs.celeryq.dev/en/3.1/tutorials/daemonizing.html#init-script-celeryd

I see I have 3 options:
a) celeryd
b) celerybeat
c) systemd

But in my Debian 12 I do not have any configuration files for:
a) /etc/default/celeryd
b) /etc/default/celerybeat or /etc/default/celeryd
c) /etc/conf.d/celery

Preferably, I would like to choose an option that does not need any additional installation. I assume option b) celerybeat needs to have celerybeat installed.

What in your opinion is the best option? Which one is the most robust?

You have multiple options, but these aren’t it.

The options that page is identifying are:

  1. Init script
    • What the docs are showing you on that page is that you can define init scripts for both celeryd and celerybeat. (Both Celery and Celery Beat are processes that need to be persistent and run independently of your Django process)
  2. Systemd
  3. Supervisord
    (and others - there are also options they don’t show, like runit.)

They’re all good. They’re all robust. They’re all trusted solutions.

There are pros & cons for each, and the one you choose should be determined (in part) by how you want to manage your environment.

For example, starting and stopping a process using systemd usually requires you to be root. On the other hand, you can configure supervisord such that a regular user can start and stop processes - a lot more easily than doing the same with systemd.
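
For example, a supervisord program section for this worker might look roughly like this (a sketch only - the paths, user, and project name are guessed from earlier in this thread):

; /etc/supervisor/conf.d/celery.conf - rough sketch, adjust paths and user
[program:celery]
command=/n/django/venv/bin/celery -A app_celery worker -l INFO
directory=/n/django
user=www-data
autostart=true
autorestart=true
stopasgroup=true

The “regular user can manage it” part comes from how supervisord itself is configured (who is allowed to talk to its control socket), not from the program section.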

Correct. You need to create the appropriate file for your configuration.

Again to be clear - Celery Beat is not a solution here. It is another tool in the Celery toolbelt.

<opinion>
This shouldn’t be part of the decision-making process. Doing an installation is a one-time event. Managing your system once it’s up and running is an on-going process.
</opinion>

Staying with what’s installed generally means systemd.
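
If you do go with systemd, the general shape is a unit file along these lines - a rough sketch only, with user, paths, and project name (app_celery, /n/django) guessed from this thread; the celery.service example in the Celery daemonization docs is the template to follow:

# /etc/systemd/system/celery.service - rough sketch, adjust paths and user
[Unit]
Description=Celery worker
After=network.target rabbitmq-server.service

[Service]
Type=simple
User=www-data
WorkingDirectory=/n/django
ExecStart=/n/django/venv/bin/celery -A app_celery worker -l INFO
Restart=always

[Install]
WantedBy=multi-user.target

After that, systemctl daemon-reload followed by systemctl enable --now celery starts the worker and brings it back on every boot.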

That is mighty kind of you to say and to offer, but I’m a “pay it forward” kind of person, not “pay it back”. It would mean a lot more to me if you would just keep your heart open toward helping others that may benefit from your assistance - regardless of the need.

Thank you. I think even my process of learning and my participation on the forum can be beneficial and helpful to others. Maybe someone in the future will go through a Celery installation from scratch, knowing nothing about the solution at the beginning. I appreciate your help and the knowledge you share on the forum.

I think I will stay with systemd, since it’s part of the Debian I currently use.
I do not like systemd because it goes against the philosophy of Debian, but I will use it anyway.

Let me now read a little about how to make Celery run permanently under systemd.
What solution did you use in your previous projects?

Does that mean it is available in the Celery I have installed and just needs a little tweaking to use? Maybe that’s a better option for me than systemd?

Pretty much all the above. I deploy projects to multiple environments and platforms.

I know I’ve got multiple projects using supervisord, multiple projects with the components running in docker containers (so it’s docker managing those processes), 3 (4?) running under systemd, and 1 that was running under runit (that was an experiment to compare it with supervisord).

All other things being equal, my preferences are docker and supervisord. I prefer an environment where I can have people manage deployments without being root. I think now the only time I would deploy directly in systemd would be a very limited environment (SBC - think Raspberry Pi) where the overhead of another manager could create an issue.