Zgrokl: Last I heard it's not fully tested in production. Are you using Systemd, init, or something else?
jdickers joined the channel
crooksey: I'm wondering if every part of your application is using the same timezone. When you're on BST, tasks at least do get run, right?
crooksey
I only define a timezone in my celery config
And if I set a timezone to Europe/London everything runs, albeit an hour behind
josuebc: ^
czart__ has quit
josuebc
crooksey: What about your workers? Where are those running, in the same box?
domino14_ joined the channel
crooksey
Yea my workers run from a freebsd jail
Which has the correct time
So that reports 15:00 and my crontab is set to run at 15:00, but it executes at 16:00
josuebc
Ok, so the workers' system is also running on BST
crooksey
Yea, workers system is BST
Which is Europe/London
subho has quit
If I set Europe/London it runs +1 hour, if I set nothing, it runs +1 hour
I could just set a timezone that is the same as London, but 1hr behind, but don't see that as a correct solution
Like norway or something
Zgrokl
josuebc, systemd of course
what do I need to do to daemonize celery in production?
there are no docs
(with systemd)
fission6 has quit
josuebc
crooksey: Can you print the naive time and timezoned time on the task when it executes? `datetime.now()`, `datetime.now(timezone.utc).astimezone()` and `datetime.now(pytz.timezone(app.conf.CELERY_TIMEZONE))`
That should print the naive datetime object, the datetime object with the current system time zone then with the celery timezone.
Oh, add `isoformat()` to the end of each one so it prints that string.
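The three prints suggested above, collected into one snippet. This is a sketch: it assumes `pytz` is available (celery pulls it in), and the celery timezone is hard-coded to Europe/London as a stand-in for `app.conf.CELERY_TIMEZONE`.

```python
from datetime import datetime, timezone

import pytz  # assumption: pytz is installed

CELERY_TIMEZONE = "Europe/London"  # stand-in for app.conf.CELERY_TIMEZONE

print(datetime.now().isoformat())                                # naive, system clock
print(datetime.now(timezone.utc).astimezone().isoformat())       # aware, system timezone
print(datetime.now(pytz.timezone(CELERY_TIMEZONE)).isoformat())  # aware, celery timezone
```

If the three offsets disagree with each other (or with the wall clock), that points at where the hour is being lost.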
Zgrokl: Yes, there aren't many docs on systemd. I've never been able to get `celery multi` to work with systemd, although some folks here have managed to make it work.
I think it's because `celery multi` forks processes and systemd doesn't like that.
Zgrokl: You can always use multiple units to run `celery -A proj worker` and then manage those with another unit
In theory `celery multi` is the same as running `celery -A proj worker` multiple times.
Zgrokl
the docs just say to use multi and don't say anything about the normal mode
oh ok thx
well I can add a separate .service for each worker
josuebc
Or, if grouping things in systemd complicates things too much, you can also use a bash script to run `celery multi` or `celery -A proj worker` multiple times and call that in the ExecStart of your unit
Zgrokl: That's how I've done it. And then group them with a single unit and use that one to manage them.
To me, personally, it's a more explicit syntax.
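A sketch of that grouped-units approach. Unit names and paths here are hypothetical; `%i` is the systemd instance name, and celery's own `%h` hostname substitution has to be escaped as `%%h` inside a unit file.

```ini
# /etc/systemd/system/celery-worker@.service  (hypothetical template unit)
[Unit]
Description=Celery worker %i
PartOf=celery.target

[Service]
# One foreground worker per instance; systemd tracks the process directly,
# avoiding the forking that `celery multi` does.
ExecStart=/usr/local/bin/celery -A proj worker -n worker%i@%%h -l info
Restart=on-failure

[Install]
WantedBy=celery.target

# /etc/systemd/system/celery.target  (hypothetical grouping unit)
# [Unit]
# Description=All celery workers
```

Then something like `systemctl enable celery-worker@{1..3}` once, and `systemctl start celery.target` / `systemctl stop celery.target` to manage them as a group; `PartOf=` propagates the stop to each worker.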
fission6 joined the channel
domino14_ has quit
Zgrokl
josuebc, nice idea, but how do you kill them on systemctl stop?
with an ExecStop=stopscript?
josuebc
You mean using a custom script to run them? Yea, you pretty much have to implement how to stop and restart them with some more custom scripting
Hence why I prefer grouping units in one service (with each sub-unit running one worker)
For stopping them with a custom script you can do something like `ps aux | grep "celery -A proj worker" | grep -v grep | awk '{print $2}' | xargs kill -TERM`
domino14_ joined the channel
Zgrokl
and just to understand how celery works: I need to start a worker, then a beat (if I want to use beat, of course)
they're 2 separate processes
crooksey
@josuebc print(dt.datetime.now(timezone.utc).astimezone().isoformat()) <--- function timezone has no object utc
domino14_ has quit
domino14_ joined the channel
josuebc
crooksey: Python 2.7?
crooksey
3.5
josuebc
hhhmm... wait, let me look for the correct syntax
Zgrokl: Yep, a beat process and, if you need one, a regular worker
crooksey: Try `datetime.utcnow().isoformat()`
crooksey
josuebc: thanks, how can I get print messages to log on my beat output?
josuebc
crooksey: You're gonna have to use `logging.getLogger(__name__).debug('some debug')` and run the worker with `-l debug`
Or use `.info('something')` and run it with `-l info`
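A minimal sketch of that pattern; the module-level logger and the task body are illustrative, not the plugin's actual code. (Celery also ships `celery.utils.log.get_task_logger`, which adds the task name to the records.)

```python
import logging

# Module-level logger named after the module; the worker's/beat's -l flag
# decides which of these levels make it into the output.
logger = logging.getLogger(__name__)

def report_times():  # hypothetical task body; decorate with @app.task in a real project
    logger.debug("only shown with -l debug")
    logger.info("shown with -l info (and with -l debug)")
```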
subho joined the channel
jarthur joined the channel
mikeliss joined the channel
crooksey: Btw, try to also print `self.request.eta`. You might need to bind your task if it's not bound already. Binding a task means celery will pass the Task instance as the first argument
crooksey
ok thanks will investigate
Diemuzi joined the channel
at the moment I am launching tasks through a third party (pyramid) celery plugin
Will probably just migrate to vanilla celery, no big deal
But I can't see what that has to do with this error
josuebc
crooksey: AFAIK here's what's up. Workers use the datetime in the message to fire up tasks. If UTC=True the worker assumes this datetime is in UTC; if not, it depends on whether the datetime passed is naive (no tzinfo) or timezone-aware
The datetime passed into the message is a string using `isoformat()`
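The naive-vs-aware distinction in one snippet: a naive datetime carries no offset in its `isoformat()` string, an aware one does, which is how the worker can tell them apart. The timestamps here are arbitrary examples.

```python
from datetime import datetime, timedelta, timezone

naive = datetime(2016, 5, 12, 15, 0)
aware = datetime(2016, 5, 12, 15, 0, tzinfo=timezone(timedelta(hours=1)))  # BST offset

print(naive.isoformat())  # 2016-05-12T15:00:00          -- no offset: naive
print(aware.isoformat())  # 2016-05-12T15:00:00+01:00    -- offset: aware
```

A worker treating that naive `15:00` as UTC would fire at 16:00 BST, which matches the one-hour drift described above.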
crooksey
ok cool, I had come to the same conclusion
Thanks for help/confirmation
josuebc
By printing those values you can check if all the timezones are correct from the worker point of view.
You'll print the current time using whatever the system timezone is, then using whatever celery has in the configuration as timezone, then the actual eta that the message had (Task.request.eta)
The utc one is just in case there's some discrepancy, maybe the worker is using the utc timezone and not anything else.
I mean, that's just part of the problem. But at least one of those values should match the actual time the task gets executed. Then you can see if it's an issue with the celery config, worker config or box config.
Timezones are a pain
domino14_ has quit
domino14_ joined the channel
crooksey
Thanks and yes they are!
josuebc
crooksey: They are? They're all correct?
ah, timezones. Hahaha, yes
crooksey has quit
Zgrokl
if I launch my beat, I get an infinite loop that launches my task every 0.1 seconds
how is this possible?
josuebc
Zgrokl: Depends on your config. How are you configuring your celery beat?
Zgrokl
like this josuebc : celery -A proj beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler
in my django settings the task is set to a 30-minute interval
also if I only use celery beat with the normal settings.py django cron style
there is a loop
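For comparison, a dict-style 30-minute schedule would look something like this (entry name and task path are made up). With `django_celery_beat.schedulers:DatabaseScheduler` the same interval lives in an `IntervalSchedule` row instead, so it's worth double-checking that the row really says minutes and not seconds or microseconds.

```python
from datetime import timedelta

# Hypothetical entry name and task path.
CELERYBEAT_SCHEDULE = {
    "mytask-every-30-min": {
        "task": "proj.tasks.mytask",
        "schedule": timedelta(minutes=30),
    },
}
```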
domino14_ has quit
josuebc
Zgrokl: what do you mean by a loop? How did you arrive at that conclusion?
I mean, because celery beat does execute things every so often
Can you show your beat config?
Druid05 joined the channel
Zgrokl
josuebc, it says `[INFO/MainProcess] Scheduler: Sending due task mytask`
every 0.1 seconds
like it's the end of the world
domino14 joined the channel
and I have more than 3000 failed tasks in the database