Anyone using Flask with Celery here? I'm trying to run a task from the shell and I keep getting `Received unregistered task of type 'app.tasks.send_async_email'`, even though I'm pretty sure I used an absolute import
jdickers joined the channel
clokep joined the channel
jarthur joined the channel
josuebc
jamessharpe: Looks like it's using AMQP. If it uses that protocol then Kombu should be able to communicate with it and Celery should work.
Bin4ry: What happens if you try `celery -A proj_name inspect registered`? Do you see that task there?
I think that message is more about Celery saying "I don't know about this task"
dlam joined the channel
dlam
on reboot, celery fails to start because there isn't a /var/run/celery/ folder to put its pid file in. Anyone ever run into this? I can fix it by manually creating the folder, but it seems like I shouldn't have to do that!
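For anyone hitting the same thing: on most modern distros /var/run is a tmpfs, so directories created there vanish on reboot. A sketch of one fix, assuming the worker runs under systemd (the `celery` user/group names are assumptions about your setup):

```
# /etc/tmpfiles.d/celery.conf
# Recreate the pid directory at every boot, owned by the celery user.
d /run/celery 0755 celery celery -
```

Alternatively, pointing `--pidfile` at a directory that survives reboots avoids the problem entirely.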
josuebc: thanks! I decided to remove flask_celery_helper and initialise it with a default flask_app argument; CELERY_IMPORTS helps.
atomekk joined the channel
atomekk joined the channel
josuebc
Bin4ry: Awesome
Phrk_ joined the channel
Phrk_
hello, is it possible to say "if the task succeeds, then do this"?
josuebc
Phrk_: You can check a task's status. There's also a task-succeeded event. Or you could create a chord and fire something after one task is successful
In my experience it's easier to use a chain, since it's a bit convoluted to set up an event handler
Phrk_
yeah seems cool
Phrk_ has quit
clokep has quit
hphuoc25 has quit
jarthur has quit
clokep joined the channel
jarthur joined the channel
jarthur has quit
jarthur joined the channel
jjasinski joined the channel
jjasinski
Hi all, I'm having an issue where I've upgraded from celery 3.1.25 to 4.1. I have 4 workers, each with a number of processes associated. When I start a worker, it runs a number of jobs and then just hangs after a while. If I do a ps, I can see the processes out there (and a number of subprocesses), but there is no CPU usage and no activity in the celery output. I can't restart the celery workers - I need to kill -9 them or they
won't go away.
if I kill them (or restart the box), the workers process a number of jobs again, but then stall as before
I've tried setting the -Ofair argument on startup
but that didn't seem to do anything
Do you have any advice as to what I might check next or why the workers might be in a stalled state?
I've tried looking at the child-most celery process with strace
and I know I'm able to pull jobs off of it, because when I restart the machine, it starts processing some jobs
the same jobs were working fine in celery 3.1
Any thoughts as to what conditions lead to a job getting stuck like this?
*a worker getting stuck
Do I need to somehow close a connection after doing some work?
Do I need to somehow alter what is returned from the task?
maybe a different return value?
clokep has quit
clokep joined the channel
josuebc
jjasinski: It seems like it's getting stuck while waiting for redis. I might be wrong, but have you tried doing a simple query to Redis when the workers hang?
jjasinski
josuebc, thanks for that thought. I have tried connecting to redis from that worker machine
and was able to connect.
but there might be something else along those lines
interesting, thank you for sending that link. Looking
josuebc, do you think setting CELERYD_TASK_TIME_LIMIT could help? At least it would kill the job rather than letting it hang?
do you know if the task_time_limit and the task_soft_time_limit start counting when the job is scheduled, or when the job starts running?
F2_ has quit
josuebc
jjasinski: When the task starts executing.
jjasinski
Thank you for that clarification
josuebc
But that's only going to try to kill the running tasks
jjasinski
I'm wondering if the tasks aren't properly "hanging up" after they finish their work
josuebc
jjasinski: If that's happening, I believe you could fix it by setting the connection limit
jjasinski
I was wondering about that setting. The default is set to no limit, which I thought would be ideal if I'm understanding it correctly.
I wouldn't want to crash jobs if all of the connections were utilized.
but I might be misunderstanding that setting
the docs mention a connection pool, so does it allow the worker to create as many connections to redis as it needs, but pool them so that only x actual connections are made?
kinda like pgbouncer does for postgres
or does it mean that the worker resets after a given number of connections, like uwsgi harakiri mode?