I set the CELERY_DEFAULT_QUEUE option to a queue, and then unset it,
but now all tasks go to that queue
the setup is something like this: I have a machine running `celery multi start ...` that serves as the 'default' queue (I don't have a -Q option set)
and then I have another machine running `celery multi start ... -Q my_queue`, which I only want to run specific tasks on
previously, it was all working as expected: if I specified the queue using send_task(queue='my_queue'), the task would execute on that queue, and all other tasks would go to the default queue
now, even after unsetting the CELERY_DEFAULT_QUEUE option, tasks get routed to BOTH the default queue and my_queue
using redis as a backend... any reason this might be happening?
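One way to rule out leftover configuration is to pin the routing explicitly instead of relying on the default-queue fallback. A minimal celeryconfig.py sketch, using the old-style (Celery 3.x) setting names from the conversation; the task name `myapp.tasks.special_task` and the broker URL are illustrative assumptions:

```python
# celeryconfig.py -- minimal sketch; broker URL and task names are illustrative
BROKER_URL = 'redis://localhost:6379/0'

# Explicit default queue: any task without a routing rule lands here
CELERY_DEFAULT_QUEUE = 'celery'

# Route only the specific tasks to my_queue; everything else uses the default
CELERY_ROUTES = {
    'myapp.tasks.special_task': {'queue': 'my_queue'},
}
```

Worth remembering that workers keep consuming whatever queues they subscribed to at startup, so after changing queue settings both workers need a restart before the old routing stops applying.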
igors
hello. is there a doc page or something explaining how the 'pidbox' queue is used?
VooDooNOFX
igors: not that I know of. What do you want to know about it?
igors
VooDooNOFX: I'm having an issue using a beanstalkd backend, it only consumes jobs every 10 seconds. It times out waiting on that 'pidbox' queue
VooDooNOFX
bobbyr_: can you show me your celeryconfig.py for this server?
igors
turns out it's a limitation of the beanstalkd backend
it doesn't do multiplexed I/O
VooDooNOFX
igors: pidbox is used for message broadcasting in celery control. I've never used beanstalk though.
does beanstalkd support fanout exchanges?
igors
VooDooNOFX: i'm not sure, we just use the basics, I'm not really an expert...
It should disable the use of the pidbox queue entirely, but leave you in a state where you can't use celery control's sub-commands at all
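The setting being alluded to here is presumably Celery's remote-control switch, which is what creates and consumes the pidbox broadcast queue in the first place. A one-line config sketch (old-style Celery 3.x name; newer releases spell it `worker_enable_remote_control`):

```python
# celeryconfig.py -- sketch; turning this off stops the worker from creating
# and consuming the 'pidbox' broadcast queue, at the cost of losing the
# `celery control` / `celery inspect` sub-commands entirely
CELERY_ENABLE_REMOTE_CONTROL = False
```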
igors
VooDooNOFX: cool, good to know. thanks!
VooDooNOFX
igors: however, I'd just use a real message queue (like rabbit) if I were you.
aib_233
Hi. Working on a conversion pipeline and "kind of successfully" got in touch with celery. Currently I have a major issue in my workflow in that blocking behaviour is introduced. Any comments on my attempts to use apply_async & link http://pastebin.com/MSgdHXX3 ? I'm trying to replace the line create_square.s(res.get(), id, pipeline).apply_async(link=g) with something more suitable. Also the line of code is currently relying on CELERY
rivarun
i have a GroupResult on which i call .revoke(terminate=False), but that only kills some of the tasks. is there a condition that i've missed?
*terminate=False. in fact i've also tried many different signals too. TERM, INT, QUIT, KILL and nothing seems to affect the task, even though it's not stuck. i do get an error if i send an invalid signal however, which means that the message does get through
*terminate=True.
hm, never mind, i think a library in the code that's running in the worker handles some signals. although i don't see how it would interact with SIGKILL
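For what it's worth, a library can only swallow catchable signals; SIGKILL can never be caught, blocked, or ignored by the process, so if KILL isn't terminating the task the signal is likely not reaching the child process at all. A quick plain-Python demonstration of the difference (POSIX-only):

```python
import os
import signal

caught = []

def handler(signum, frame):
    # record which signal was delivered to us
    caught.append(signum)

# SIGTERM handlers can be installed and will intercept the signal...
signal.signal(signal.SIGTERM, handler)
os.kill(os.getpid(), signal.SIGTERM)  # delivered to ourselves, handled above

# ...but the kernel refuses to let a process handle SIGKILL at all.
try:
    signal.signal(signal.SIGKILL, handler)
    sigkill_blocked = False
except (OSError, ValueError, RuntimeError):
    sigkill_blocked = True
```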