I'm trying to figure out how to queue tasks in Celery, launched from Flask. I have the task happening via taskname.apply_async(args=args, queue=celery), but despite this it seems that when multiple tasks come in one soon after another, they both start instead of the second task waiting for the first.
how do I force tasks to wait for the current task to complete first?
oh i see: app.config['CELERYD_CONCURRENCY'] = 1
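For reference, a minimal sketch of that setting (app name and broker URL are placeholders, and on newer Celery the key is spelled `worker_concurrency`). Note this only serializes tasks within a single worker; with several workers running, tasks can still execute in parallel.

```python
from celery import Celery

# Placeholder app name and broker URL; adjust to your setup.
celery = Celery("app", broker="redis://localhost:6379/0")

# One worker process: this worker pulls and runs one task at a time,
# so a second task waits until the first finishes.
celery.conf.update(CELERYD_CONCURRENCY=1)
```

The same effect can be had per-invocation with `celery worker --concurrency=1`.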
dodobrain
hi all..
i've got celery tasks that never get out of the PENDING state. they complete with all the results properly
the tasks complete successfully, yet stay in a PENDING state. how do i debug/fix this? t = AsyncResult(task_id); t.status == 'PENDING'
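A common cause (not confirmed in this log, but the usual first thing to check): Celery reports PENDING for any task id it has no record of, so a missing result backend, or `ignore_result=True`, leaves every task "PENDING" forever even though the work completed. A sketch with placeholder URLs:

```python
from celery import Celery

# Without a result backend configured, AsyncResult(task_id).status is
# always 'PENDING', because PENDING doubles as the "unknown task id" state.
app = Celery("tasks",
             broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/1")  # result backend is required

@app.task(ignore_result=False)  # ignore_result=True would also hide the state
def add(x, y):
    return x + y
```

With a backend in place, `add.delay(2, 2).status` should move to 'SUCCESS' once a worker has run the task.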
VaticanCameos
Is it normal for Celery to run the old version of a task's code even after service restart?
If the task has already been loaded onto the queue by then, that is
With the old version of the code
So a task is registered on the queue with the old code, and then I change the task's code and restart the Celery service. Should it run the older version or the newer one?
Whenever the task is executed that is.
malinoff
VaticanCameos: are you sure this task isn't executed _before_ you restart the service?
VaticanCameos
malinoff: The task was registered on the queue last Wednesday with the ETA of this Monday morning. The task's code was changed in the meanwhile and the service restarted twice.
malinoff
VaticanCameos: stale .pyc files maybe?
VaticanCameos
malinoff: If that really is the case then there's no way of preventing this issue from happening, short of knowing that this is the cause.
And if we delete these supposedly stale .pyc files and restart Celery, it should use the newer code yes? So it isn't Celery's behaviour that is suspect, it is Python's bytecode.
malinoff
yes, if the issue is with stale .pyc, it should help
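For what it's worth, ruling out stale bytecode looks roughly like this (a sketch; the project path and the restart command are assumptions that depend on the deployment):

```shell
# Delete stale compiled bytecode so Python recompiles from the current
# .py sources, then restart the workers.
find /path/to/project -name '*.pyc' -delete
find /path/to/project -name '__pycache__' -type d -prune -exec rm -rf {} +
# then restart celery however your deployment manages it, e.g.
# sudo service celeryd restart
```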
VaticanCameos
Turns out clearing the pyc didn't help, Celery is still sending emails according to the older code.
VaticanCameos
malinoff: Clearing the pyc files didn't help, it still seems to be using the older version of the code.
Okay no, that's something else, gimme a few minutes.
joh
Hi! I'm using celery to distribute rather big workloads (on the order of half a million jobs) that take several days to complete. However, simply submitting ~500k jobs to the queue at once seems like a bad idea, as the broker (rabbitmq in my case) would need to store an excessive amount of info in the queue. Some sort of rate limiting on submissions seems to be
needed, so I wonder what the recommended practice is? Is this something that needs to be done on the application side, or are there mechanisms in celery to block submissions until the queue has drained to an acceptable size?
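Celery itself has no built-in producer-side backpressure, so this is typically done on the application side. A sketch of one approach: submit in batches and pause while the broker queue is too deep. Here `queue_depth` is a hypothetical callable you would implement yourself, e.g. against the RabbitMQ management HTTP API; the batch size and high-water mark are arbitrary.

```python
import itertools
import time

def chunked(iterable, size):
    """Yield successive lists of at most `size` items from `iterable`."""
    it = iter(iterable)
    while True:
        batch = list(itertools.islice(it, size))
        if not batch:
            return
        yield batch

def submit_throttled(task, args_iter, queue_depth, batch_size=1000,
                     high_water=10_000, poll_seconds=5.0):
    """Submit tasks in batches, pausing while the broker queue is too deep.

    `queue_depth` is a caller-supplied callable returning the number of
    ready messages in the queue (e.g. via the RabbitMQ management API);
    it is an assumption here, not something Celery provides.
    """
    for batch in chunked(args_iter, batch_size):
        while queue_depth() > high_water:
            time.sleep(poll_seconds)
        for args in batch:
            task.apply_async(args=args)
```

This keeps at most roughly `high_water + batch_size` messages in the broker at any time, instead of all 500k at once.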