Hi! How can I troubleshoot rate_limit configuration? Currently tasks are executed 2.5-3 times more often than I configured.
ponpanderer
Hello! For a long-running async job that isn't executed by a worker but only polled by one, what is the best practice for running a polling job as a worker task? Obviously keeping the worker busy with a for loop + sleep isn't the way to do it. Would it be to submit a new check task from within the worker with an ETA/countdown?
e.g. - if not job_finished: some_task.apply_async(..., countdown=90) in the worker, calling itself recursively with a delay (note: countdown takes seconds, eta takes an absolute datetime)
(and obviously something to expire eventually as well)
amcm
ponpanderer the "long running async job" is not a Celery task, correct?
ponpanderer
@amcm: correct. basically it's an async call to an external REST API, and the call could take about 5-10 minutes to complete. i don't want to tie up a worker with a while loop. right now i create the job with eta=300, then poll every 30 seconds in a loop, which is messy imo. i'd rather do a quick REST API call to the target service and, if it's not done, exit the task but submit a new one with countdown=60 or something
this will free up a worker to do other stuff and i have no requirement for exact timing, best effort/eta is fine
(to better explain: the external REST API returns its own job id, and i am polling that job id in the Celery task)
amcm
I'd have a single task and raise a Retry exception when the remote API isn't ready, then continue to the processing when it is (or launch a new task if that makes more sense)
ah, this is exactly the type of builtin functionality i assumed i'd missed in the docs :)
the custom retry delay is perfect. would it reset the expiry time for the job on each retry, or if i started with eta=300/expires=600 and a retry delay of 30, would the job still expire at 600?
or is max_retries preferred to handle that?
amcm
Hmm... I don't know how expiry plays with retry.. I /think/ the expiry time would /not/ get reset when you retry. But I might opt for the max_retries limit instead
jordanm
I have an instance of Flower, and making the API call /api/tasks throws a 500. Trying to determine what's causing it, I have tried various logging options such as --logging=debug, --log_to_stderr, and --log_file_prefix, but I am unable to get Flower to produce any logs at all. Is there something I might be missing?
stracing the process only shows calls to gettimeofday() between reading the HTTP request and writing the 500 response
travis-ci
celery/celery#2803 (master - 5efd77f : Ask Solem): The build was fixed.