Thanks! I'm setting up another Ubuntu server, using supervisor to run celery beat and a celery worker.
This is for a django app.
Running the celery commands manually works fine; running them through supervisor doesn't, even though my config is taken directly from another one of my servers where it works fine.
ionelmc
well ok, and what do you have in the logs?
karanlyons sighs.
karanlyons
And now it's working fine.
ionelmc: I have no idea why it's working now, but thanks!
jon__
ionelmc: maybe I wasn't specific enough, but that link only covers updating a task while it's running. Once the task finishes it will be given a state from states.py such as SUCCESS or FAILURE
ionelmc
you can manually call the result backend yourself if you want
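As a rough illustration of what "calling the result backend yourself" means, here is a toy in-memory backend that overwrites a task's recorded state with a custom one. Celery's real backends expose a similar `store_result(task_id, result, state)` call, but everything in this sketch (the class, its storage, the task id) is purely illustrative:

```python
# Toy sketch of calling a result backend directly: a minimal in-memory
# store keyed by task id. Celery's real backends are pluggable (Redis,
# database, etc.); this class only mimics the idea.

class InMemoryResultBackend:
    def __init__(self):
        self._store = {}

    def store_result(self, task_id, result, state):
        # Overwrite whatever state the worker recorded (e.g. SUCCESS)
        # with a custom terminal state of your own.
        self._store[task_id] = {"result": result, "state": state}

    def get_state(self, task_id):
        entry = self._store.get(task_id)
        return entry["state"] if entry else "PENDING"


backend = InMemoryResultBackend()
backend.store_result("abc-123", {"rows": 42}, "SUCCESS")
backend.store_result("abc-123", {"rows": 42}, "ARCHIVED")  # custom state
print(backend.get_state("abc-123"))  # ARCHIVED
```

The point is simply that nothing stops you from writing a state the worker would never set, as long as anything reading the backend knows how to interpret it.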
I do some performance modeling based on queueing theory
asksol
nice, like formally?
bmbouter
yeah for my PhD research @ NCSU
one problem we run into a lot is that closed form queueing models are based on Poisson traffic assumptions
and in practice that is not always true
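For a concrete sense of the closed-form results being discussed: the classic M/M/1 model assumes Poisson arrivals at rate λ and exponential service at rate μ, and yields simple formulas for utilization, mean queue length, and mean time in system. The rates below are made up for illustration:

```python
# M/M/1 queue: Poisson arrivals at rate lam, exponential service at
# rate mu. These closed-form results are exactly the kind that break
# down when real traffic is not Poisson.

def mm1_metrics(lam, mu):
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be < service rate")
    rho = lam / mu               # utilization
    l_q = rho ** 2 / (1 - rho)   # mean number waiting in queue
    w = 1 / (mu - lam)           # mean time in system (wait + service)
    return rho, l_q, w


rho, l_q, w = mm1_metrics(lam=8.0, mu=10.0)
print(f"utilization={rho:.2f} queue_len={l_q:.2f} time_in_system={w:.2f}")
# utilization=0.80 queue_len=3.20 time_in_system=0.50
```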
We talked a week or two ago about me implementing a new Kombu transport that will use Proton to speak AMQP 1.0, and QMF for broker management
I notice that the virtual transports that support blocking (i.e., non-polling) reads from their brokers (zmq and redis) both use a MultiChannelPoller
but the amqp-based transports that also use blocking reads don't use MultiChannelPoller
I suspect my AMQP 1.0 transport should also use MultiChannelPoller
any thoughts on that?
asksol
right, it would need to, I guess, but it's probably better to just connect it to the kombu.async event loop
the multichannelpoller in redis is used in the blocking consume api
but is also reused by the event loop
it should be possible to reuse this implementation
ideally I don't want to have one for each transport
bmbouter
yeah I was considering both paths
I was thinking about registering my channels directly to the kombu.async event loop
asksol
the redis transport is blocking, but it uses select to know when to read
I think that is a better approach
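The pattern being described, blocking in `select()` until a connection's fd is readable rather than polling in a loop, can be sketched with the stdlib alone. A `socketpair` stands in for the broker connection here; nothing below is the actual redis transport code:

```python
# Sketch of select-based blocking consume: block until the fd is
# readable (or a timeout expires), then read once. A socketpair stands
# in for a broker connection.

import select
import socket


def consume_one(sock, timeout=1.0):
    # select() blocks until sock is readable or the timeout expires,
    # so we never spin in a busy polling loop.
    readable, _, _ = select.select([sock], [], [], timeout)
    if sock in readable:
        return sock.recv(4096)
    return None  # timed out, nothing to read


a, b = socket.socketpair()
b.sendall(b"message-1")
print(consume_one(a))  # b'message-1'
a.close()
b.close()
```

The same readiness information is what an event loop uses, which is why the blocking consume path and the event loop can share one poller implementation.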
bmbouter
I do too
asksol
brb
bmbouter
it's not clear to me when channels are created and destroyed, and which objects are responsible for that. If I'm going to register them, I need to manage their file descriptors too
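One way to pin down that lifecycle concern: tie fd registration to channel creation and deregistration to channel close, so the event loop never polls a dead descriptor. This sketch uses the stdlib `selectors` module, and the `Channel` class is hypothetical, not Kombu's:

```python
# Sketch of fd lifecycle management: whoever creates a channel
# registers its fd with the loop's selector, and whoever closes the
# channel unregisters it. The Channel class here is hypothetical.

import selectors
import socket


class Channel:
    def __init__(self, sel):
        self.sel = sel
        self.sock, self.peer = socket.socketpair()
        # Register the fd when the channel is created...
        self.sel.register(self.sock, selectors.EVENT_READ, data=self)

    def close(self):
        # ...and unregister it when the channel is destroyed, before
        # closing the socket, so the selector never sees a stale fd.
        self.sel.unregister(self.sock)
        self.sock.close()
        self.peer.close()


sel = selectors.DefaultSelector()
ch = Channel(sel)
ch.peer.sendall(b"ping")
events = sel.select(timeout=1.0)
print(len(events))          # 1: the channel's fd is ready to read
ch.close()
print(len(sel.get_map()))   # 0: nothing left registered
sel.close()
```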