any good ways to deal with reliability during worker/instance termination when using a redis broker, or is the answer basically to use rabbitmq?
sometimes i get celery to return tasks to the queue when the instance knows termination is coming, but not always; and sometimes i have the opposite problem: the returned tasks get run more than once by other workers
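the usual knobs for both halves of that problem with the redis broker are acks_late (so a killed worker's tasks get redelivered) plus the transport visibility_timeout (so redelivery doesn't happen too early and duplicate running tasks). a minimal config sketch — setting names are the Celery 4+ lowercase style, and the values are assumptions to tune per workload:

```python
# celeryconfig.py -- a sketch; values are assumptions to tune per workload
task_acks_late = True                 # ack only after the task finishes, so a
                                      # killed worker's tasks get redelivered
task_reject_on_worker_lost = True     # requeue tasks whose worker process died

broker_transport_options = {
    # redis has no real unacked-message handling: a reserved task is only
    # redelivered after visibility_timeout seconds. too short -> duplicate
    # runs by other workers; set it above the longest expected task runtime
    'visibility_timeout': 3600,
}
```

even with this, redelivery around the timeout can still duplicate work, so with a redis broker the tasks really want to be idempotent either way.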
kumar001
need help: flower running under supervisor is not able to track my celeryd
I am new to celery
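the usual cause of flower showing nothing is that the worker isn't sending events, which flower relies on. a sketch of supervisor stanzas, assuming a project module named proj (hypothetical name) and that flower is installed as a celery subcommand:

```ini
[program:worker]
; -E makes the worker emit the task events that flower consumes
command=celery -A proj worker -E

[program:flower]
command=celery -A proj flower --port=5555
```

events can also be switched on for an already-running worker with `celery -A proj control enable_events`.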
ThiefMaster
with celery4 (message protocol v2), how can i modify the args/kwargs in a before_task_publish handler?
previously i used this code which worked just fine: body['args'] = _CelerySAWrapper.wrap_args(body['args']); body['kwargs'] = _CelerySAWrapper.wrap_kwargs(body['kwargs'])
but now the body is a tuple and thus immutable... and while i can modify the task kwargs in place since they're a dict, i can't do that for args since those are also a tuple..
or is there some other (better) way to preprocess task arguments?
hrm, i guess i could override apply_async..
also, is there some plugin for celery that adds the email functionality back or do i really need to implement this myself?!
https://github.com/celery/celery/issues/3588 - there's no solution in there regarding how to log tasks where the worker gets killed with SIGKILL, e.g. by the kernel's oom-killer. however, celery 3.x did exactly this. how do i get back this functionality? on_failure apparently does not run in this case.
Jon_R
Hi guys. Are you aware that the http://www.celeryproject.org/ page is happily announcing to the world that you have had zero downloads in the last month? ;-)
rond_
Hi! Let's say I want to run 10 tasks, but at most 4 in parallel at any one time. Is the following scenario possible to (easily) achieve with celery? 1) run a task (it can be one .sh file or a few commands, I could adjust); 2) if it fails, rerun it. So if tasks 1, 2, 3, 4 are running and, for example, task 1 fails, I'd want to rerun task 1 before running the next task (task 5).