Hello, what is the difference between apply_async() and delay()?
what if I want one task to be automatically run after a different task finishes? Should I call the second task inside my first task?
mattmcc
usr123: delay is a bit more pythonic in that you can pass args/kwargs to the task naturally. The tradeoff is it doesn't support the additional options that affect how the task runs (countdown, eta, queue, link, etc.); for those you need apply_async.
Here, other_task will be called if the add task succeeds.
You couldn't do link=other_task() because that would call the task immediately. And just passing the function reference, link=other_task, wouldn't let you specify arguments for the task; a signature like other_task.s(args) does both.
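The snippet being discussed isn't in the log, so here is a toy stand-in (my own simplification, not the real Celery library) that mimics delay, apply_async and .s() just closely enough to show why link=other_task.s(...) is the right shape:

```python
# Toy stand-in for Celery's Task API (NOT the real library), to make the
# delay/apply_async difference concrete: delay(*args) is shorthand for
# apply_async(args), while apply_async also accepts execution options
# such as link=.
class Signature:
    """Like other_task.s(...): a task with pre-bound arguments."""
    def __init__(self, func, args):
        self.func, self.args = func, args

class Task:
    def __init__(self, func):
        self.func = func

    def s(self, *args):
        return Signature(self.func, args)

    def delay(self, *args, **kwargs):
        # Natural call style, but no room for execution options.
        return self.apply_async(args, kwargs)

    def apply_async(self, args=(), kwargs=None, link=None):
        result = self.func(*args, **(kwargs or {}))
        if link is not None:
            # On success, the callback receives the parent's result
            # prepended to its own bound arguments.
            link.func(result, *link.args)
        return result

results = []
add = Task(lambda x, y: x + y)
other_task = Task(lambda value, extra: results.append((value, extra)))

add.delay(2, 3)                                 # plain call, no options
add.apply_async((2, 3), link=other_task.s(10))  # callback gets add's result
```

In real Celery the callback is dispatched by the worker after the parent succeeds; the chaining shape (`link=other_task.s(...)`) is the same.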
crooksey
josuebc: my datetime issue turned out to be a celery issue (which has now been patched)
Thanks for your help last week
iBurger21
I've been playing around with celery for some time now. It's awesome.
My problem right now is that I don't know how to handle the retrying of tasks in a non-blocking way.
Would it be smart to put a list of "tasks to do" in a separate database?
.. I'm afraid I will end up with many workers doing the same task
fleetfox
you can pin workers to queues
and balance that way
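The queue-pinning idea can be sketched like this (module, task, and queue names are my assumptions, not from the log): route the feed tasks to a dedicated queue in the Celery config, then start workers that only consume that queue.

```python
# Celery config fragment (names assumed): send feed-scanning tasks to
# their own queue so dedicated workers can be pinned to it.
task_routes = {
    'feeds.scan_article': {'queue': 'feeds'},
}

# Then pin workers to queues with -Q and balance capacity with -c, e.g.:
#   celery -A proj worker -Q feeds -c 4    # only consumes the 'feeds' queue
#   celery -A proj worker -Q celery -c 2   # handles the default queue
```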
iBurger21
fleetfox: is using celery's inspect, e.g. "i = app.control.inspect(); active_tasks = i.active()", a good way to prevent duplicated tasks, according to you?
before starting with my tasks, i could see if the same task is already running.
.. alternatively: I make a special table in my DB, which holds: currently processing tasks.
fleetfox
why do you want to do that, that makes little sense to me
iBurger21
Thanks
I'm building a system to process RSS feeds
(every time a new article comes in, I want to scan it)
Perhaps my job is still running, and I would scan the article another time. (I wanna stop that)
So my plan is: use inspect() to check if a job is already running with that *specific* url, so I dont scan an article 2x.
good plan?
(apologies if this is all too stupid, i've never done anything like this before)
each page is different; I'm sure the jobs are unique in that sense
fleetfox
and why don't you want to run multiple jobs concurrently?
iBurger21
ahh.. damn, i have no experience with all this stuff. but i'm gonna fix it
thanks for pointing me to the cookbook
my worry is about "losing data" so to say
but it's probably a foobar worry, i'm just gonna implement it
fleetfox
you should try describing your problem better since it sounds like an XY problem
crooksey
I have run celery via a third party launcher for some years. I now want to get rid of this and just run "vanilla" celery. I have created a "celery_config.py" file which defines my application and beat schedule, what is the correct way to then launch/run the beat scheduler with the provided setting files? I have something like this: https://gist.github.com/crooksey/2faebf0570f4cd...
fleetfox
celery -A celery_config ?
crooksey
I tried: celery -A celery_config beat
And it ignored my settings; the output showed a different broker than the one specified in the config file
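For beat to pick the settings up, the module named by -A has to define the Celery app with its configuration attached; if the output shows a different broker, the app being loaded is probably a bare default one. A minimal celery_config.py along these lines (broker URL, task name, and schedule are my assumptions) works with `celery -A celery_config beat`:

```python
# celery_config.py sketch (names and URL assumed, not from the log).
# The -A argument points at this module; beat reads the schedule and
# broker from the `app` instance defined here.
from celery import Celery
from celery.schedules import crontab

app = Celery('proj', broker='redis://localhost:6379/0')

app.conf.beat_schedule = {
    'scan-feeds-every-5-minutes': {
        'task': 'feeds.scan_all',
        'schedule': crontab(minute='*/5'),
    },
}
```

Launched with `celery -A celery_config beat -l info`, beat should then print this module's broker and schedule on startup.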