I'm assuming it is possible to define a shared task in a separate python file and import that into multiple workflows?
and use the @shared_task decorator
ah yeah it looks like it
sweet action
josuebc
badloop: Awesome :)
badloop
josuebc: yeah celery is checking all the boxes....loving it so far
lirum
What is the purpose of GroupResult.completed_count()? The Canvas API guide says, "Return the number of completed subtasks.", but in a contrived simple example, that does not bear out: https://gist.github.com/Quidge/69055b4276bd3180...
josuebc
lirum: Do you see any errors on the worker? Do you have a result backend setup?
lirum
I'm using RPC and no errors on the worker. Both add tasks in that group complete successfully.
I appended some lines onto that Gist showing that as soon as I call r.get(), everything changes and goes to what I should expect. But ...I shouldn't have to call .get().
josuebc
I believe this is something specific to RPC backend. I just tested the same code with redis as backend and rpc and I can see the difference.
lirum
Thank you! Relieving to know I wasn't misunderstanding the docs. I'll look into getting redis.
josuebc
lirum: I believe the issue is that with RPC the status is not updated unless you do a `.get()`. There's an issue for it but we haven't been able to get to it: https://github.com/celery/celery/issues/4084
lirum
@josuebc New to message brokers in general. As I understand it, RPC is a more limited backend capable of passing messages and meta information about those messages ('SUCCESS', 'FAILURE', etc), but is not designed to be a store for task results themselves. Do I have that correctly?
josuebc
lirum: Yes, it is focused on passing results as messages, so you can implement things with a more event-driven approach. One of the limitations right now is that you have to call `.get()` in order to introspect a result
lirum
I'm using RabbitMQ as my broker. In my celery instance, if I switch `backend='rpc://'` to `backend='amqp://'`, what happens? The stuff I'm doing seems to work the same. Does this mean that celery runs detection on my system for the running RabbitMQ broker and connects to that?
josuebc
The only thing is that the amqp result backend is being deprecated. So, there might be other bugs in it
lirum
josuebc: Got it. Thanks!
josuebc
lirum: Np. Btw, solving that issue is on the 4.3 milestone. So, keep an eye on it. We'll hopefully fix it soon
azerty1664
hi all, celery 4.1.1 here :-) Trying to run a periodic task through rabbitmq. The task executes fine, however task routing is completely ignored :-/ queues are set with the celery -Q option, task_routes is defined through app.conf.update, and the function to execute is decorated with periodic_task() and called with .s()... any hint?
josuebc
azerty1664: How are you routing your tasks? Are you setting a queue on the signature or something like that?
clokep
josuebc, lirum: I don't think that's 100% correct. I think you can call `.state` to get the current status without getting the result.
But the RPC backend definitely does some weird things since AMQP isn't really a place to just store information.
josuebc
clokep: The thing is RPC. It works using redis or db backend but with RPC the value of state is 'PENDING' until you do `result.get()`. Hopefully it will get fixed for v4.3
clokep
josuebc: I disagree.
I'm using that in some code that never calls get().
josuebc
clokep: Ah, that's interesting. And you are getting the correct value in state?
clokep
Yes.
josuebc
Which celery version?
clokep
It eventually proceeds from PENDING to SUCCESS or FAILURE.
4.1.0, but this code was also used with 4.0.x.
josuebc
clokep: Are you checking for the result's state on the same producer loop or are you getting the result like `res = AsyncResult(uuid)` ?
clokep
So the producer is listening for all results on that queue, yes.
Although I'm surprised you can even try to have multiple things getting results from (the same queue) while using the RPC backend, that seems like a recipe for disaster. :)
josuebc
clokep: Yes, well the idea is to only allow the producer to retrieve the result. But I believe you can specify a `reply_to` string and allow some other listener to retrieve the result.
clokep
Yep, you can do that sort of thing.
josuebc
clokep: Are you using only single tasks or also chains and/or groups?
clokep
Single tasks.
Never really got the hang of chains and/or groups.
The design is overly complicated IMO.
And I don't need them for my application.
josuebc
clokep: Ah, I think that's where the issue lies. It works for single tasks but not for groups.
clokep
Ah, I see.
josuebc
clokep: Yeap, I try to avoid them as much as possible. Simple is better than complex
clokep
Yeah. I could replace some custom code I have with them, but never got around to it. Oh well. :)
Maybe once celery 5 is out and the code-base is readable.
josuebc
clokep: Hahaha, yeah it can get tricky following the codebase