mikeliss: Yeah. I don't really see that as a problem, if www-data is the user running your django (is it django?) app.
asksol
mikeliss: you are describing problems in an older version though... api changes were in 3.0, you're not upgrading to 3.0 afaik?
jcdyer
I think the problem would be if www-data had access to other web resources, in which case the problem is the same with your django app running as www-data.
mikeliss
that's good. I've been pondering it and not been able to come to any good reasons why a celery user was needed. The reason I set one up (back in the day) was because it was part of the install, or suggested in the init script or something.
Um, we're on 3.0.13, but we do intend to upgrade...just haven't had the bandwidth yet.
asksol
there is no standard unprivileged user, almost every service will tell you to use a separate user, in the end that's up to how you want to deploy it
or up to the package maintainer for a distro
mikeliss
Yeah, that's a good point.
asksol
using a separate user is anyway good unless you really need to share data
Nizumzen joined the channel
mikeliss
True enough. I guess it is especially true when celery is available from other servers on the network. We run Celery locally, so that doesn't make as much sense for us.
asksol
any daemon must close the stdout and stderr file descriptors to detach. It will attempt to check the logfile and pidfiles (at least in 3.1), but other than that, if there is an error after closing the file descriptors and before the logfiles are opened, there's no way to output what has gone wrong
mikeliss
Ah, I see. So in 3.1 these kinds of errors *do* go to the logs?
I think that's largely the fix I'd want to see.
ryanhiebert has quit
At least, it'd greatly improve the situation.
asksol
there is no celery.sh, so I have no idea where you got that from
acangiani has quit
mikeliss
Oye.
ryanhiebert joined the channel
asksol
in 3.1 we don't use management commands anymore
mikeliss
That's funny. I don't imagine we set that up ourselves. Must have come from some guide somewhere.
Nice! That's great too.
JohnPreston72
Hi guys. I am struggling quite a bit with a get() inside a task. Can't get subtask working :(
Does anyone have a good example (a bit longer than an add function)?
asksol
JohnPreston72: result.get() inside a task is not allowed
altipard has quit
JohnPreston72
asksol: so what is the good way to go ?
use-case is: I have a worker which is going to have a local list of actions to do. This list must be stored -> the tasks must not be lost
so basically I have a scheduled task which goes across this local list and runs the actions in the list, sending the tasks to another worker
(the local worker's role is to keep track of the different tasks executed and run them again and again to update fields)
asksol: I went through this all day yesterday but I don't understand how it works in examples longer than two lines
asksol
I guess we can remove most of the worker command-line help text,
altipard joined the channel
monokrome joined the channel
JohnPreston72
asksol: so there is no way to have a task running within a task ?
I mean
have a task, calling another one and get() the result
cah190 has quit
asksol
you can do it if you use eventlet/gevent, but for prefork you would have to start a new process for every such task received to avoid deadlocks
JohnPreston72 googles eventlet
cah190 joined the channel
brockhaywood has quit
JohnPreston72
asksol: otherwise, I don't get how you declare that a task is a subtask of a parent one
al1o has quit
asksol
subtask means started by a task. if you call a task from a task, then it's a subtask
JohnPreston72 puzzled: thought a task within a task was denied
brockhaywood joined the channel
no, blocking waiting for a task to return is not allowed
you can start as many subtasks as you want, as long as you don't wait for them
think what happens if you have a single worker with -c1, and call task.delay().get()
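The `-c1` deadlock asksol describes can be mimicked with a stdlib thread pool of one worker, as an analogy: the only worker slot is held by the parent, which then blocks waiting for a child that can never be scheduled. The timeout is only there so the demo fails fast instead of hanging forever, as a real worker would.

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

# A pool with a single worker, analogous to a Celery worker
# started with -c 1 (one prefork child).
pool = ThreadPoolExecutor(max_workers=1)

def child():
    return 42

def parent():
    # The parent occupies the only worker slot, then blocks
    # waiting for the child -- but the child can never start,
    # because the one slot is taken by the parent itself.
    inner = pool.submit(child)
    return inner.result(timeout=1)  # would hang forever without the timeout

outer = pool.submit(parent)
try:
    outer.result()
    deadlocked = False
except TimeoutError:
    deadlocked = True

pool.shutdown(wait=True)
```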
rci-pewpew joined the channel
JohnPreston72
asksol: sure, I get that, but what if it doesn't matter?
This worker is a standalone one. Having it "dead" won't change anything
thing is that I have to achieve 2 things at the same time: a while() loop (always true, as I need to continuously scan data) and keep receiving data via the queue
problem is that in a single thread the while() loop will block the reception process
ibeex_ has quit
ibeex joined the channel
Oh, I may have found a way to achieve what I want differently, but without a queue it is... dirty
is there really no way to have a task wait for another?
asksol
no, you have to do it differently, chain, chord or call the function directly (not as a task)
JohnPreston72
asksol: so do you know another way in Python to run a task periodically within a celery worker?
other than using CELERYBEAT_SCHEDULE
as the task you give as an argument to it must be a celery task
can't be a plain function :(
asksol
mikeliss: btw, I wouldn't call performing a log statement at module level a little change, you never know what trouble that can cause, seems you were lucky to me :)
mikeliss
asksol: I think it was actually just the log statement import that did it.
asksol
you can make a task out of any function using the task decorator, so not sure I see the difference
mikeliss: calling logger.log will acquire a lock, I think it can deadlock the interpreter in some cases
mikeliss
asksol: I've got that in other places in the code -- no problems! -- so I didn't think twice about putting it in a task.py file.
Debnet has quit
asksol
JohnPreston72: almost any problem can be parallized without using synchronization, you just have to be creative. If it really requires synchronization you have chords
parallelized*, guess that has not made it into the dictionary yet
JohnPreston72
Well I don't need any sync here, a result can come before another, I don't care. But I have to periodically run a function within a worker
can't find a decent way to do it :(
asksol
if you wait for a result you are synchronizing
Debnet joined the channel
JohnPreston72
Oh ok, I see then what you mean
asksol
and when you say you have a while 1 loop, do you mean the task is always running?
JohnPreston72
yes, kind of
ibeex has quit
so, when the worker is initialized
ibeex joined the channel
it has an internal list of actions to do which is empty
harlowja has quit
then over time it will receive tasks to execute; those tasks are stacked "locally"
jbrhbr has quit
I need a function which will, every X, take the list and send the tasks to another worker