#celery

      • bubiche
        hi, I have just finished setting up celery with django and rabbitmq following the docs. How can I monitor my tasks? I want to see things like how long a task is stuck in the queue before the worker consumes it
      • it seems Flower is the recommended monitor. Is there any simpler solution if I just want that specific information?
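One lightweight way to get that one number without running Flower is to stamp each message with a publish timestamp and log the difference when the worker picks it up. This is a sketch, not a built-in Celery feature: `log_queue_wait` and the `enqueued_at` kwarg are made-up names, and the publisher has to remember to pass the timestamp (e.g. `my_task.delay(x, enqueued_at=time.time())`).

```python
import functools
import logging
import time

log = logging.getLogger(__name__)


def log_queue_wait(func):
    """Wrap a task body so it logs how long the message sat in the queue.

    Relies on the publisher stamping the call, e.g.:
        my_task.delay(arg, enqueued_at=time.time())
    (enqueued_at is an illustrative kwarg, not a Celery built-in).
    """
    @functools.wraps(func)
    def wrapper(*args, enqueued_at=None, **kwargs):
        if enqueued_at is not None:
            log.info("%s waited %.3fs in queue",
                     func.__name__, time.time() - enqueued_at)
        return func(*args, **kwargs)
    return wrapper
```

You would stack this under the task decorator; for anything richer (rates, history, per-queue breakdowns) Flower or `celery events` is still the simpler path.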
      • my email is npnguyen@apcs.vn hope someone will help! Thanks!
      • gpmidi
        Is there a way to replace arguments passed to a task? For instance, could I write a base task that automatically replaces all keyword arguments with name xyz with an object (xyz => someclass(xyz))
      • clokep
        You could probably do that with a function decorator.
      • And not worry about the aspect of it being a task.
      • Unless I'm misunderstanding what you're asking. :)
      • gpmidi
        clokep: I could use a decorator but I've already got a base task for other reasons. It'd be easier if it was included there or in the Request.
      • I suppose I could make one that includes the shared_task decorator in it
      • that'd be easier
      • hmm
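The decorator route clokep suggests could look like the sketch below; `coerce_kwarg`, `SomeClass`, and the kwarg name `xyz` are placeholders, and you would stack it under the `shared_task` decorator (or fold the same logic into a base task's `__call__`).

```python
import functools


def coerce_kwarg(name, factory):
    """Replace keyword argument `name` with factory(value) before the
    wrapped task body runs (the xyz => someclass(xyz) idea)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if name in kwargs:
                kwargs[name] = factory(kwargs[name])
            return func(*args, **kwargs)
        return wrapper
    return decorator


# usage sketch, assuming Celery's shared_task and a hypothetical SomeClass:
# @shared_task
# @coerce_kwarg("xyz", SomeClass)
# def my_task(xyz=None):
#     ...
```

Ordering matters: `coerce_kwarg` has to sit closest to the function so the coercion runs inside the task body that Celery registers.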
      • dashdanw
        hey all
      • I was wondering if anyone could tell me why it seems you can chain chords but not groups?
      • josuebc
        dashdanw: A chord is just a chain with a group and another task. chord([header_task.s()])(callback_task.s()) == (group(header_task.s()) | callback_task.s()).delay()
      • dashdanw
        oh gotcha, so can I specify a group as a callback?
      • josuebc:
      • could i do like chord([task.s() for task in .. ])(group())
      • josuebc
        You could. I haven't tried it using the chord primitive but if it doesn't work you can chain them (group(task.s() for task in ...) | group()).delay() and it's the same thing
      • dashdanw
        does that block the context in which it's run though?
      • because i need the caller of the group to exit pretty fast
      • josuebc
        No, it should work the same way. But try it first with the chord primitive if you like. Shouldn't matter
      • dashdanw
        do retries block other tasks from queueing?
      • so if a task fails does the retry get to sit at the front of the queue like a sleep()?
      • josuebc
        dashdanw: Not by default, but you can modify how a retry works. By default a task will retry with a countdown of 3 minutes (default_retry_delay is 180 seconds)
      • dashdanw
        i have some polling to an external API and it can take up to 30 mins to respond, so I wanted to retry in a way that didn't hog threads
      • because i had a blocking task for it before and at least with prefork it was completely occupying the queue
      • josuebc
        You can use some exponential backoff: http://docs.celeryproject.org/en/latest/usergui...
      • dashdanw
        i don't want to back off because I know it will take a while but I don't mind pinging it every 30 seconds or so for the half hour, I just want to make sure that when it runs retry() it gives another task a chance to run
      • i was thinking of just breaking out a polling queue with an absurd number of concurrent threads but it's a bit elaborate of a solution for how much we're using the task at the moment
      • josuebc
        Yes it will. Workers listen on queues which are First-In-First-Out so a retry is a message that gets re-queued.
      • dashdanw
        oh sorry i forgot to mention
      • i have default retry delay set
      • that doesn't matter i guess
      • also cool thanks, I should have just asked if it gets requeued or if it stays in the front lol
      • josuebc
        That's ok. The worker will still execute other tasks while the retry waits
      • dashdanw
        awesome
      • josuebc
        Awesome, indeed :)
      • dashdanw
        yes celery is great
      • and again thanks for the help, it's a bit hard for me to thumb through the wiki sometimes
      • josuebc
        dashdanw: That's ok. That's what we're here for :)
      • dashdanw
        does that make sense, josuebc?
      • josuebc
        dashdanw: It might confuse users. The documentation states that when a task is retried a new message is sent. Maybe it'd make more sense to add something there like "which gets to the end of the queue"
      • "When you call retry it’ll send a new message, using the same task-id, and it’ll take care to make sure the message is delivered to the same queue as the originating task." < from the docs