#celery

      • jamessharpe
        Has anyone tried using celery against Amazon's MQ service (https://aws.amazon.com/amazon-mq/)?
      • Bin4ry
        Does anyone use Flask with celery here? I'm trying to run a task from the shell and I keep getting `Received unregistered task of type 'app.tasks.send_async_email'`, even though I'm pretty sure I used an absolute import
      • josuebc
        jamessharpe: Looks like it's using AMQP. If it uses that protocol then Kombu should be able to communicate with it and Celery should work.
      • Bin4ry: What happens if you try `celery -A proj_name inspect registered`? Do you see that task there?
      • I think that message is more about Celery saying "I don't know about this task"
      • dlam
        on reboot, celery fails to start because there isn't a /var/run/celery/ folder to put its pid file in, anyone ever run into this? i can fix it by manually creating the folder but seems like i shouldn't have to do that!
      • Dejan
      • tmpfiles.d is your friend
      • dlam
        ooo ok thanks!
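Dejan's suggestion refers to systemd's tmpfiles.d mechanism, which recreates directories under /var/run (a tmpfs on most distributions) at every boot. A sketch of such a config file; the mode and the celery user/group are assumptions to match your worker's user:

```
# /etc/tmpfiles.d/celery.conf -- recreate runtime dirs on every boot
# (adjust owner/group to whatever user your celery workers run as)
d /var/run/celery 0755 celery celery -
d /var/log/celery 0755 celery celery -
```

Run `systemd-tmpfiles --create` once to apply it without rebooting.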
      • Bin4ry
        josuebc: thanks! I decided to remove flask_celery_helper and initialise it with a default flask_app argument; CELERY_IMPORTS helps.
      • josuebc
        Bin4ry: Awesome
      • Phrk_
        hello, is it possible to do something if a task succeeds?
      • josuebc
        Phrk_: You can check a task's status. There's also a task-succeeded event. Or you could create a chord and fire something after one task is successful
      • Phrk_: Might be easier with a chord. Here's an example: http://docs.celeryproject.org/en/latest/usergui...
      • Well, it's actually a chain, not a chord.
      • Phrk_
        task-succeeded event seems cool
      • wow, chain is better
      • josuebc
        In my experience it's easier to use a chain, since it's a bit convoluted to set up an event handler
      • Phrk_
        yeah seems cool
      • jjasinski
        Hi all, I'm having an issue where I've upgraded from celery 3.1.25 to 4.1. I have 4 workers, each with a number of processes associated. When I start a worker, it runs a number of jobs and then just hangs after a while. If I do a ps, I can see the processes out there (and a number of subprocesses), but there is no CPU and no activity in the celery output. I can't restart the celery workers - I need to kill -9 them or they won't go away.
      • if I do kill them (or restart the box), the workers process a number of jobs again, but then stall as before
      • I've tried setting -Ofair argument upon startup
      • but that didn't seem to do anything
      • Do you have any advice as to what I might check next or why the workers might be in a stalled state?
      • I've tried looking at the child-most celery process with strace
      • and I get.. one sec, let me grab the output
      • I get a loop that looks like this:
      • `clock_gettime(CLOCK_MONOTONIC, {3581, 603571716}) = 0`
      • `select(0, NULL, NULL, NULL, {0, 500000}) = 0 (Timeout)`
      • my job queue is in redis
      • it keeps growing and growing
      • so I know the jobs are being added to it
      • and I know I'm able to pull jobs off of it, because when I restart the machine, it starts processing some jobs
      • the same jobs were working fine in celery 3.1
      • Any thoughts as to what conditions lead to a job getting stuck like this?
      • *a worker getting stuck
      • Do I need to somehow close a connection after doing some work?
      • Do I need to somehow alter what is returned from the task?
      • maybe a different return value?
      • josuebc
        jjasinski: It seems like it's getting stuck while waiting for redis. I might be wrong, but have you tried doing a simple query to Redis when the workers hang?
      • jjasinski
        josuebc, thanks for that thought. I have tried connecting to redis from that worker machine
      • and was able to connect.
      • but there might be something else along those lines
      • josuebc
        jjasinski: I would try to play around with the redis settings: http://docs.celeryproject.org/en/latest/usergui...
      • Mainly the ones that limit connections and such
      • And timeouts
      • jjasinski
        interesting, thank you for sending that link. Looking
      • josuebc, do you think setting a CELERYD_TASK_TIME_LIMIT could help? At least it would kill the job rather than hang?
      • do you know if the task_time_limit and the task_soft_time_limit start counting when the job is scheduled, or when the job starts running?
      • josuebc
        jjasinski: When the task starts executing.
      • jjasinski
        Thank you for that clarification
      • josuebc
        But that's only going to try to kill the running tasks
      • jjasinski
        I'm wondering if the tasks aren't properly "hanging up" after they finish their work
      • josuebc
        jjasinski: If that's happening I believe you could fix it by setting the connections limit
      • jjasinski
        I was wondering about that setting. The default is set to no limit, which I thought would be ideal if I'm understanding it correctly.
      • I wouldn't want to crash jobs if all of the connections were utilized.
      • but I might be misunderstanding that setting
      • the docs mention a connection pool, so does it let the worker open as many connections to redis as it needs, but pool them so only x actual connections are made
      • kinda like pgbouncer does for postgres
      • or does it mean that the worker resets after a given number of connections, like uwsgi harakiri mode?