the second task in my chain always goes to a wrong worker
can someone explain this? :<
the second task in the chain has a specific task route and a queue, but the task is sent to the general celery queue for some reason
celery 4.2.0
I tried changing the order of the tasks; the first task always works and is sent to the correct worker, but as soon as you move it to the second position, it gets sent to the wrong worker
MADNESS
travis-ci
celery/celery#5636 (master - 8dcc621 : Simon Charette): The build was broken.
Dejan: I haven't tried AWS MQ, but it should work without too many issues. Are you having some problems using it?
adw0a: Can you post your code?
Dejan
I am not, but I am thinking about proposing the switch to AWS MQ
we use Redis at the moment
josuebc
We changed from Redis to RabbitMQ back in 3.x and it was an improvement. With Redis we saw some messages getting dropped, and workers needed to be restarted once in a while. We don't see that anymore
We only use Redis as a result backend.
So, I would say it's a good idea to move to AMQP and amazon's shouldn't be that different from Rabbit. Except, maybe, in the setup
badloop
josuebc: ever run into a running celery app's connection to rabbitMQ just ceasing to work... is there some kind of keepalive directive or something?
josuebc
badloop: I've seen connections kept alive while the worker is no longer there. there is a heartbeat config, I believe. Let me see
Ah wait, but you mean the connection is lost from the producer. As in, from your main app?
badloop
roger that... looking into that as well as apparently there is a confirmation of publish
yep
I'm also seeing an issue where a message is delivered to the worker, and the worker executes the workflow, but appears to stop functioning before the first task in the workflow
restarting the worker sometimes helps
josuebc
So the broker never finishes the first task?
badloop
the debug log never shows that it even started the first task
it receives the request for the workflow, then says it succeeded... but nothing else
is there a higher level of debugging? ;-)
or lower level, as it were
josuebc
debug, which one are you using?
badloop
yep i'm already using debug logging
ah, hmmm
it seems that it may be after an exception has been raised
josuebc
Ah, you might be seeing celery saying that the workflow has been executed, not necessarily that the state of the workflow is success
badloop
this message: [2018-07-05 12:45:42,147: INFO/ForkPoolWorker-2] Task workflows.service_intent.service_intent_workflow[2a863fb9-62b3-48b4-8a7c-ab0c7a944ce6] succeeded in 0.007418700000926037s: <AsyncResult: 8012fcd6-6ac0-46bb-8d82-e19da8ea9109>
josuebc
Yeap, that means it executed the task and took that much time.
badloop
I'm removing the exception i was raising to see if that fixes things.... gotta get this working soon
there needs to be a celery conference or webinar or meetup or something...
:-)
conference is too much obviously.... but there oughta be like a users group meetup online
josuebc
That's an interesting idea.
badloop
hmmmm
so i did debug in intellij
it appears that if i call "workflow()" a few times, it finally does execute, whereas calling it just once doesn't always work
travis-ci
npilon/celery#9 (chord-error-results - 25062a0 : Nicholas Pilon): The build was fixed.