Hello people! I'm looking for a way to close result queues in RabbitMQ after I've fetched the result. I can't find a way to do that through Celery, any ideas? thanks
Is it a limitation from the way celery interfaces with rabbitmq...?
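(A minimal sketch for future readers, assuming a Celery 3.x-era app with the amqp result backend; the broker URL and `some_task` are hypothetical. Calling `forget()` after `get()` releases the stored result, and `CELERY_TASK_RESULT_EXPIRES` tells RabbitMQ to expire leftover result queues on its own even if `forget()` is never called.)

```python
# Sketch, not a verified fix: fetch a result, then release its queue.
from celery import Celery

app = Celery('app', broker='amqp://localhost',
             backend='amqp')  # amqp result backend: one queue per result

# Let RabbitMQ expire result queues automatically after this many seconds,
# as a safety net for results that are never fetched.
app.conf.CELERY_TASK_RESULT_EXPIRES = 3600

result = some_task.delay()      # hypothetical task
value = result.get(timeout=10)  # consume the result
result.forget()                 # drop the stored result; frees the queue
```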
gchristensen
Hi, I have a moderately sized celery deployment: 7 different repositories talking to each other over celery + rabbit. One problem I've found is that if we don't deploy in just the right order, the queues might not be configured correctly, and then messages sent from one server to another might get lost. Has anyone found a solution to this?
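(One common mitigation, sketched with made-up names: declare every cross-service queue explicitly and durably in a config module shared by all the apps, so whichever service deploys first creates them identically; disabling auto-creation of missing queues makes misrouted messages fail loudly instead of vanishing.)

```python
# Sketch: shared queue declarations so deploy order doesn't matter.
from kombu import Exchange, Queue
from celery import Celery

app = Celery('app', broker='amqp://localhost')

tasks_exchange = Exchange('tasks', type='direct', durable=True)
app.conf.CELERY_QUEUES = (
    Queue('service_a', tasks_exchange, routing_key='service_a', durable=True),
    Queue('service_b', tasks_exchange, routing_key='service_b', durable=True),
)
# Fail on publish to an undeclared queue instead of silently creating it.
app.conf.CELERY_CREATE_MISSING_QUEUES = False
```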
erve
is there a "standard" way to pool postgres connections between workers?
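(No single standard, but a common pattern is sketched below, with a hypothetical DSN: give each worker *process* its own small pool, created after fork via the `worker_process_init` signal, since a pool opened before forking must not be shared across child processes.)

```python
# Sketch: per-worker-process postgres pool, assuming psycopg2 is installed.
from celery import Celery
from celery.signals import worker_process_init
from psycopg2 import pool

app = Celery('app', broker='amqp://localhost')
pg_pool = None  # populated after each worker process forks


@worker_process_init.connect
def init_pool(**kwargs):
    global pg_pool
    pg_pool = pool.ThreadedConnectionPool(
        1, 5, dsn='dbname=mydb user=me')  # hypothetical connection string


@app.task
def query_something():
    conn = pg_pool.getconn()
    try:
        with conn.cursor() as cur:
            cur.execute('SELECT 1')
            return cur.fetchone()[0]
    finally:
        pg_pool.putconn(conn)  # return the connection, don't close it
```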
evg1
Hey guys, what are some good resources on pool implementations (prefork, processes, eventlet, etc) and use cases? My email: me@evgg.cc
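(The Celery workers guide covers this; for quick experimentation, the pool is selected per worker with `-P` and sized with `-c`, e.g.:)

```shell
# Sketch: trying out the different execution pools on a project "proj".
celery worker -A proj -P prefork  -c 8    # default: one OS process per slot
celery worker -A proj -P eventlet -c 500  # green threads, good for I/O-bound tasks
celery worker -A proj -P solo             # single task at a time, in the main process
```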
Is there a way to delete the finished tasks after a max number has been reached?
malinoff
Nivix, max number of what?
Nivix
For example if a worker executes a task every minute, after <MAX_TASK_RESULTS> number has been reached, then every minute thereafter the worker will delete the oldest task result and save the newest result?
I tried setting the app.conf.CELERY_MAX_CACHED_RESULTS to 100, but in the Flower GUI one worker still saves 100+ task results
malinoff
Nivix, which backend do you use?
Nivix
malinoff, I use RabbitMQ
Each worker has a direct queue
malinoff
Nivix, amqp or rpc?
Nivix
malinoff, amqp
malinoff
Nivix, so they're stored in rabbitmq and CELERY_MAX_CACHED_RESULTS is not related
do you actually retrieve the results?
e.g. do you call res.get() somewhere?
Nivix
Oh... I thought the CELERY_MAX_CACHED_RESULTS was somehow related to the result storage.
malinoff
It just caches results on the caller side
Nivix
I do not call res.get() in my code, though Flower probably does (since it shows the result)
malinoff
no, it definitely shouldn't, because the result won't be discoverable
so it just pops it from the queue without acking
note that i don't really know about flower, i just *think* that it should work as i described
Nivix
Well, one of my tasks is a GET request to a webpage. The task parses the page and returns the parsed information.
I can see the results in Flower, so I am assuming it retrieves the results.
malinoff
yes, after that celery puts the message in a result queue
Nivix
Ah.
malinoff
flower just inspects the queue
Nivix
Gotcha. So I guess I should look into RabbitMQ queue length limits?
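(A broker-side cap can indeed be applied with a RabbitMQ policy; the sketch below is untested here, and the name pattern is a placeholder — check `rabbitmqctl list_queues` for what your result queues are actually called and adapt it.)

```shell
# Sketch: cap matching queues at 100 messages and expire idle ones after
# an hour (the "expires" policy key is in milliseconds).
rabbitmqctl set_policy result-cap "^[0-9a-f-]+$" \
    '{"max-length": 100, "expires": 3600000}' --apply-to queues
```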
OK, new question. RabbitMQ shows 340 queues (and increasing) but I only have 5 workers running each listening to a single queue. Why are queues being generated?
ubbersith
hi guys
malinoff
Nivix, a new queue is generated for each task request
ubbersith
does anyone here know how to use celery and beat in different instances?
I'm using heroku... and want to use one instance per process (celery and beat)
malinoff
Nivix, this is how amqp backend works, it simply tries to work as result storage. You can switch to rpc backend if you don't need to query results from different processes
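(The switch malinoff suggests is a one-line config change; a sketch with a hypothetical broker URL. The rpc backend sends results back over one reply queue per client connection instead of creating a queue per task.)

```python
# Sketch: rpc result backend -- results go to the caller, not to storage.
from celery import Celery

app = Celery('app', broker='amqp://localhost', backend='rpc://')
app.conf.CELERY_RESULT_PERSISTENT = False  # transient reply queue
```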
Nivix
I might need to change... generating a new queue for every task will become resource intensive quickly
gchristensen
Nivix: I believe the queues go away. do you need results from your celery tasks, or are you doing fire-and-forget messages?
Nivix
gchristensen, They are not necessary, though I would prefer to keep them (for debug purposes). How would I set up a "fire-and-forget" system?
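(Fire-and-forget in Celery just means not storing results; a sketch with a hypothetical `notify` task. With `ignore_result` set, no result queue is ever created for the task.)

```python
# Sketch: disable result storage per task or globally.
from celery import Celery

app = Celery('app', broker='amqp://localhost')


@app.task(ignore_result=True)  # per-task: result is discarded
def notify(user_id):
    print('notifying', user_id)


# Or globally, for every task in the app:
app.conf.CELERY_IGNORE_RESULT = True
```

A middle ground for the debugging use case is to keep results but set `CELERY_TASK_RESULT_EXPIRES` low, so queues are created but RabbitMQ cleans them up quickly.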