so if i were to try from within billiard/pool.py: Worker.run(): Finalize(self, self.on_loop_stop, exitpriority=1), what else do I need to do to install that as a handler? just assign it to self._finalize = <here>?
i just tried again to be sure and it doesn't seem to be calling it twice
asksol
mdan: right, the object must still be alive at shutdown
mdan: (to be sure of that you could make it a global)
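[editor's note] What is being discussed can be sketched with the stdlib `multiprocessing.util.Finalize` (billiard ships a compatible version). The `Worker`/`on_loop_stop` names follow the chat but this toy class is hypothetical, not billiard's actual code:

```python
from multiprocessing.util import Finalize  # billiard.util has the same class

calls = []

class Worker:
    """Toy stand-in for billiard's Worker, with the cleanup handler
    named in the discussion above."""

    def run(self):
        # Install the handler. exitpriority=1 registers it to run at
        # interpreter shutdown -- but, as asksol notes, the registration
        # only helps if something (this attribute, or a global) keeps
        # the relevant objects alive until then.
        self._finalize = Finalize(self, self.on_loop_stop, exitpriority=1)

    def on_loop_stop(self):
        calls.append("stopped")

w = Worker()
w.run()
w._finalize()   # a Finalize object is callable: run the handler early
# once it has run, it is unregistered and will not fire again at exit,
# consistent with "it doesn't seem to be calling it twice" above
```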
maznaz
where you goin' with that gun in your hand
oops, was scrolled up
mdan
yeah i just tried that, neither worked
i created on_stop that mocked a call to on_loop_stop and disabled my on_loop_stop call from finally: block
whether it was self._finalize or global _finalized defined within that run loop, my cleanup functions defined in celery didn't get called
i wonder if there's something i'm missing or if it's related to why atexit isn't getting called
maybe the signal is somehow going to the parent or something
anyway, it works with the finally: block :)
k_sze[work] has quit
asksol
Jo_: I just fixed a Python3 bug in billiard, so you should reinstall the latest master
Jo_: small errors merging multiprocessing changes from python3.3 back in. I haven't tested this in 3.2 yet
Jo_: seems to work on 3.2 as well, using redis as broker and result backend
mdan
so my work day is starting to get absurdly long, and i have a conference call for my other job soon, so I'm going to have to go get dinner in a minute
asksol
the unit tests are passing on 3.2 and 3.3, but that doesn't always help uncover errors
mdan
but thanks for your help
and i'll surely have more questions later :)
Jo_
ok seems to work now on 3.2 with rabbitmq, will now test redis
maznaz
mdan, are you in the north of england?
mdan
nope, I'm in japan
pilgo joined the channel
i lived briefly in cambridge a number of years ago
maznaz
ah, just the timing of dinner threw me
being in a different timezone makes sense
Jo_
ok works with redis on 3.2 thanks a lot asksol, will look into your commit
mdan
right
maznaz
here in the north, they call the midday meal dinner
mdan
ah
pilt joined the channel
asksol, before I go, any idea why celery/worker/__init__.py might hang on blueprint.join() when using processes? the comment in bootsteps.py for Blueprint.join says it should only be called when running green, and it's wrapped in a try, but instead of falling through to the except it just hangs for me. i see there's a timeout, maybe we should pass in a timeout if not self.pool_cls.is_green?
not a big deal, i'm simply not calling it now, not sure if that will cause a problem
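[editor's note] The workaround mdan proposes (bound the join when the pool is not green) can be sketched with plain threads. `join_steps` and its parameters are hypothetical illustration, not Celery's actual bootsteps code:

```python
import threading

def join_steps(threads, is_green=False, timeout=1.0):
    """Sketch of the idea above: block indefinitely only for green
    pools; otherwise bound each join so shutdown cannot hang forever
    on a thread that never finishes."""
    for t in threads:
        # Thread.join(None) waits forever; join(timeout) returns after
        # at most `timeout` seconds whether or not the thread finished.
        t.join(None if is_green else timeout)
```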
i'll send you my email, i don't have IRC on at home
asksol
mdan: is this gevent?
because I'm aware of this bug with gevent where it doesn't shut down
I have been debugging it, going crazy over it
for step in steps: step.stop(); return
every step.stop() in the list returns ok
but it does not exit the for loop for some reason
it's whacky
with eventlet it works fine
maznaz
can't seem to get a scheduled task working with a local django setup. I've got the worker running and the beat process running with the database backend configured. The task shows correctly in djcelery_periodictask but the last run field remains null
oh, ho hum
every five minutes, not five minutes every hour
oops
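[editor's note] maznaz's mix-up ("every five minutes, not five minutes every hour") is a classic crontab confusion: a minute field of `*/5` fires every five minutes, while `5` fires once per hour at minute five. A tiny model of just the minute field, not Celery's actual crontab parser:

```python
def minutes_matching(minute_spec):
    """Return the minutes of the hour a cron minute field fires on.
    Handles only the two forms relevant to the mix-up above:
    '*/N' (every N minutes) and a bare number (that minute, hourly)."""
    if minute_spec.startswith("*/"):
        step = int(minute_spec[2:])
        return [m for m in range(60) if m % step == 0]
    return [int(minute_spec)]

minutes_matching("*/5")  # twelve firings per hour: 0, 5, 10, ... 55
minutes_matching("5")    # one firing per hour, at minute 5
```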
asksol
I'm unfamiliar with the term 'oh hum', is that another thing from the north? :)
maznaz
"ho hum" is very much like a sigh.
I think Treebeard used it in the lord of the rings books :)
Maior has left the channel
celhead joined the channel
italorossi joined the channel
italorossi has quit
italorossi joined the channel
vbabiy joined the channel
danfairs joined the channel
danfairs has quit
the_rat has quit
Jo_
can someone tell me what a node is for celery?
is it a worker?
vbabiy has quit
vince_prignano has quit
hkj joined the channel
sgviking has quit
Ergo^ has quit
Ergo^ joined the channel
scitesy has quit
the_rat joined the channel
asksol
Jo_: yes a node is a 'celery worker' instance
so there's worker instances and there's pool processes
you can say either "node" or "worker instance", node is used when speaking of clusters
when using the multiprocessing pool the worker instance manages pool processes that execute the tasks
but you cannot communicate with pool processes, you communicate with the parent which is the worker instance
Jo_
I have NameError: global name 'bytes_to_string' is not defined
asksol
weird
oh
or no, I cannot find any use of 'bytes_to_string'
Jo_
ok just checked master out of kombu
ok working now
thx
backjlack joined the channel
mikeybisbo joined the channel
pilgo has quit
Wertax has quit
danielk22 joined the channel
Vadtec joined the channel
Vadtec has quit
Vadtec joined the channel
VadtecWk joined the channel
mikeybisbo has quit
dlitvak_ has quit
dlitvak joined the channel
moodh
Hi, I still have problems with .delay not publishing tasks to rabbitmq after a couple of minutes in both low and heavy traffic in a rather complicated setup (2 apps > AWS ELB > 2 rabbitmq in cluster, and 2 workers > AWS ELB > 2 rabbitmq in cluster). I thought .delay would throw an exception when the connection fails, but it seems it doesn't fail in a normal connection loss situation. How should I go about debugging this? It's next to impossible to replicate the
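[editor's note] One likely reason `.delay` appears not to fail: Celery retries publishing according to a retry policy before raising, so a lost broker connection can be masked for a while (`apply_async` accepts `retry` and `retry_policy` options; passing `retry=False` makes publish failures surface immediately, which can help when debugging). A simplified model of such a retry loop, not Celery's actual implementation:

```python
import time

def publish_with_retry(send, max_retries=3, interval_start=0.0,
                       interval_step=0.2, interval_max=0.2):
    """Simplified model of a publish retry policy: retry the broker
    send a few times with backoff before raising. While retries are
    in progress the caller sees no exception, which can look like
    .delay() "not failing" after a connection loss."""
    interval = interval_start
    for attempt in range(max_retries + 1):
        try:
            return send()
        except ConnectionError:
            if attempt == max_retries:
                raise          # retries exhausted: error finally surfaces
            time.sleep(interval)
            interval = min(interval + interval_step, interval_max)
```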