code refactored with @app.on_after_configure.connect / def setup_periodic_tasks / @app.task as per the latest celery guide documentation
xtreak has quit
wawrek has quit
fission6 has quit
erani
How are people's experiences with Redis and SQS? I can see some tickets in Github related to duplication of messages during shutdown of workers, but I was still curious to ask if this has been a common occurrence.
fission6 joined the channel
azerty1664
taskpool: u'delivery_info': {u'priority': 0, u'redelivered': False, u'routing_key': u'lively', u'exchange': u''} then the task is accepted and succeeded. None of the rabbitmqctl list_queues counters increase in real time. Seems strange
subho joined the channel
josuebc
azerty1664: I believe celery beat uses its own queue. That's one of the reasons you can only have one celery beat worker running at a time. I'm double checking, but I think that queue is indeed not used for beat tasks.
You could use celery beat as a router: fire up a beat task on a schedule, and within that task schedule a task to a specific queue
erani: Redis as broker or Redis as result backend and SQS as broker?
I haven't used SQS as a broker, but I've talked to people that have, and the main downside is that the SQS protocol is missing a lot of things needed for monitoring tasks
erani
josuebc: either as broker yeah
lirum has quit
oeeo joined the channel
azerty1664 has quit
oeeo has quit
marself joined the channel
jarthur has quit
marself has quit
marself joined the channel
josuebc
erani: Redis as broker can lead to some messages being dropped. SQS is more stable but the protocol is missing some functionality.
Redis is good for testing locally
Easy to set up
badloop
great for production as well in k8s :-)
subho has quit
clokep has quit
clokep joined the channel
lavalamp has quit
lavalamp joined the channel
no_maam
Hi, I'm currently trying to understand retries in celery. I have some workers which might reboot from time to time, so it is possible that a task is interrupted.
So the worker goes down and the connection to rabbitmq is simply lost. I would like to retry such tasks but at most 10 times.
What exactly am I supposed to configure in celery? I think late acks are useful, but I'm not sure how well retries work with them, since an interrupted task will not throw an exception; it will simply disappear
erani
josuebc: okay, interesting. Thanks. What is the reason behind Redis possibly leading to some messages being dropped?
sunil_b joined the channel
sunil_bansal has quit
sunil_b has quit
jarthur joined the channel
clokep has quit
josuebc
erani: I'm not 100% sure, but I believe it has to do with the fact that Redis works in memory, and if that memory allocation is full then Redis will drop messages. That could happen because Celery might send a lot of messages.
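If memory pressure is the cause, the relevant knobs are Redis's maxmemory settings; a redis.conf sketch (the size limit is an example):

```
# With an eviction policy such as allkeys-lru, Redis may silently evict
# the list keys Celery uses as queues, which looks like dropped messages.
# noeviction makes Redis reject new writes instead of evicting data.
maxmemory 2gb
maxmemory-policy noeviction
```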
lavalamp has quit
lavalamp joined the channel
Catatronic has quit
Catatronic joined the channel
jdickers_ joined the channel
jdickers has quit
jarthur_ joined the channel
jarthur has quit
jarthur_ has quit
marself has quit
jdickers_ has quit
dylan joined the channel
clokep joined the channel
dylan
Does anyone have a good tutorial for running celery on kubernetes?