hi guys, any idea how I can send a task to another queue with send_task?
ahadi
Does `autoretry_for` only work for function based tasks? It looks like it doesn't apply to class based tasks.
Phrk_ has quit
la_croix joined the channel
la_croix has quit
la_croix joined the channel
hphuoc25 joined the channel
hphuoc25 has quit
meisth0th joined the channel
hphuoc25 joined the channel
inovas_ has quit
inovas_ joined the channel
sunil_bansal has quit
sunil_b joined the channel
sunil_b has quit
sunil_bansal joined the channel
DLSteve joined the channel
josuebc
xcm: Have you tried getting the task traceback? Initialize an AsyncResult with the task id and try to access AsyncResult.traceback
ahadi: Usually concurrency is set at the worker level not so much at the task level. What exactly are you trying to do?
ahadi: autoretry_for should also work for class based tasks. You have to give it the correct tuple of exceptions
ahadi
josuebc: I have some tasks talking to a specific API, and the number of concurrent connections to this API is limited. That's why I wanted to regulate how many tasks can run concurrently. My solution for now is to have a separate queue with one dedicated worker that has the right concurrency set.
josuebc: I know that it works for function based tasks, but it doesn't for class based tasks
ahadi: Oh, so it's not working for class based tasks. Is it possible to take a look at the code?
ahadi
josuebc: I already found the problem. It looks like the PR that brings autoretry_for to class based tasks was merged after the 4.1.0 release
It should be fixed in 4.2.0
But thank you anyways!
marself has quit
marself joined the channel
marself has quit
josuebc
ahadi: Ah, good you found the problem.
hphuoc25 has quit
sunil_bansal has quit
atomekk joined the channel
subho joined the channel
la_croix has quit
la_croix joined the channel
xcm
josuebc: First of all, thanks so much for reminding me about this issue, because I didn't even put it in my bug tracker. I didn't know about traceback, so I'll try it next time this issue crops up
la_croix has quit
atomekk has quit
la_croix joined the channel
marself joined the channel
kanzure
ams__ joined the channel
josuebc
xcm: Sure thing, hopefully you can get it resolved
inovas_ has quit
inovas_ joined the channel
la_croix has quit
la_croix joined the channel
atomekk joined the channel
andi- joined the channel
la_croix has quit
atomekk has quit
courrier joined the channel
courrier has quit
courrier joined the channel
courrier
How can I ensure a revoke(id, terminate=True) call actually terminated the task?
If a task fails to stop within a reasonable time (whatever "reasonable" is), I expect celery to kill it
It looks like it doesn't do the job. From time to time the revoke() call blocks, probably because the task never returns, and that permanently paralyses my task manager
With Py3's multiprocessing library, I sent SIGINT, then join()ed the process with a timeout, and finally sent SIGKILL if needed; the same thing can be done with Py2's multiprocessing via a thread that monitors the process and sends SIGKILL if necessary. However, I was expecting a library like celery to do that work for me. How can it help handle these cases?
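The SIGINT-then-SIGKILL escalation courrier describes can be sketched with the stdlib `multiprocessing` on POSIX; the function names, the timeout, and the deliberately stubborn child are all hypothetical illustrations, not Celery API. (Within Celery itself, `revoke` accepts a `signal` argument, e.g. `app.control.revoke(task_id, terminate=True, signal="SIGKILL")`.)

```python
import multiprocessing
import os
import signal
import time


def stubborn(ready):
    # simulate a worst-case task that ignores SIGINT (hypothetical)
    signal.signal(signal.SIGINT, signal.SIG_IGN)
    ready.set()  # tell the parent the handler is installed
    time.sleep(60)


def terminate_with_escalation(proc, timeout=0.5):
    """Send SIGINT, wait up to `timeout` seconds, then SIGKILL if still alive."""
    os.kill(proc.pid, signal.SIGINT)
    proc.join(timeout)
    if proc.is_alive():
        os.kill(proc.pid, signal.SIGKILL)  # cannot be caught or ignored
        proc.join()


if __name__ == "__main__":
    ready = multiprocessing.Event()
    p = multiprocessing.Process(target=stubborn, args=(ready,))
    p.start()
    ready.wait()  # avoid racing the child's signal-handler setup
    terminate_with_escalation(p)
    print(p.exitcode)  # negative exit code means killed by that signal
```

The `ready` event matters: sending SIGINT before the child installs its handler would make the test nondeterministic. SIGKILL works because the kernel, not the process, enforces it.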