#celery


      • zaquest
        though when i look at workers' logs with pyamqp, tasks still contain <memory at ...> objects, so that might be the reason connections are dropped too
      • fission6 joined the channel
      • Asgaroth joined the channel
      • niluje
        josuebc: how did you do it?
      • fission6 has quit
      • josuebc
        niluje: Here's my handler task: https://pastebin.com/waEB3RYT
      • niluje
        !!!!!!!!
      • josuebc: you are my hero
      • thank you so much :-)
      • josuebc
        niluje: No problem. I'm going to dive a bit more into this. The handler should be called with the correct parameters
      • niluje
        josuebc: I have a chord, is there a way to know all the errors and not only the first one?
      • josuebc
        Sorry, that's not possible. Celery will call the `on_error` handler whenever an error is raised, and it will only send the first error that happened in a chord.
      • Now, you could instead handle errors per-task. You can use `on_error` on any signature, so for every task you can do `task.s().on_error(handler.s())`
      • niluje
        ok, that's what I thought
      • :(
      • fission6 joined the channel
      • rubenwardy has left the channel
      • zaquest
        so apparently asynpool fails somehow. For some reason it doesn't lead to a shutdown with pyamqp (maybe the exception is caught somewhere in Python), but with librabbitmq a callback is called from C and it causes a worker shutdown... the <memory at ...> part is what a worker process in a pool receives from the worker's main process. I'm guessing it's not something valid, because the worker and main are in different address spaces... I'm not sure how it works for anyone at all... I've tried with py3.6 and py2.7 - doesn't work. librabbitmq 2.0.0.
      • hmm, further investigation shows that the <memory > object is passed from librabbitmq to a callback and nobody ever attempts to decode it. idk, maybe at some point it returned bytes or something...
      • josuebc
        zaquest: It looks very odd. What is it that you're doing again? I can't seem to reproduce it using rpc
      • zaquest
        josuebc, literally anything. even example tasks like add/sub do not work for me.
      • with librabbitmq
      • they somehow work with pyamqp, but i have no idea how :)
      • like logs still show <memory ...> so it should probably fail
      • josuebc, https://pastebin.com/XBD7Wt2v this is my tasks.py
        then I run a worker: `celery -A tasks worker -l DEBUG -C --concurrency=1`
      • go to a repl: `from tasks import *; add.delay(1, 2)`
      • and get a stack trace printed on the worker terminal
      • i install celery in a virtualenv as celery[librabbitmq]
      • (and i tried master too)
      • i found out why it happens, not sure how to fix it better in the code... and i wonder how it works for everyone but me...
      • josuebc
        zaquest: I would suggest reinstalling kombu and sticking with pyamqp
      • zaquest
        josuebc, https://github.com/celery/celery/blob/master/ce... here's the source of the problem: for me body is somehow a memoryview, but buffer_t is either buffer on py2 or some fake class on py3 https://github.com/celery/vine/blob/master/vine...
      • josuebc, i have other issues with pyamqp - it fails to keep connection :)
      • i found a workaround for now: -P solo
      • luckily I don't need any concurrency within one worker, not for now at least
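The failure mode zaquest describes can be sketched in isolation (a simplified, assumed stand-in: `buffer_t` here mimics vine's placeholder class on Python 3, not the actual celery/vine code path, and `decode_body` is a hypothetical helper):

```python
# Sketch of the suspected failure mode: librabbitmq hands the frame body to
# Python as a memoryview, but an isinstance check against a py2-style buffer
# type never matches on py3, so the body is never decoded to bytes.

class buffer_t:  # stand-in for the "fake" buffer class vine provides on py3
    pass

def decode_body(body):
    # the kind of check that silently fails when body is a memoryview on py3
    if isinstance(body, buffer_t):
        return bytes(body)
    return body

raw = memoryview(b'{"args": [1, 2]}')
result = decode_body(raw)  # still a memoryview; downstream decoding chokes

# one possible fix: accept memoryview explicitly as well
def decode_body_fixed(body):
    if isinstance(body, (buffer_t, memoryview)):
        return bytes(body)
    return body
```

This matches the observation that pyamqp workers merely log `<memory at ...>` objects while librabbitmq's C-side callback turns the same condition into a worker shutdown.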
      • josuebc
        zaquest: Can you create a GitHub issue and mention me? @xirdneh. I would like to keep tabs on this issue
      • zaquest
        ok
      • josuebc
        Thanks :)
      • clokep joined the channel
      • jdickers has quit
      • fission6 has quit