hi, I've got a live system that has been using pickle.
I want it to start using JSON, in such a way that new tasks are serialized with JSON and old ones are still processed, but it basically stops using pickle
amcm
Off the top of my head, I'd add a new queue for the JSON format, start new workers on that queue, and leave the old workers running until they finish draining the pickle queue
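That split can be written down as two settings files; a sketch only, with uppercase setting names from the Celery 3.x era and an illustrative queue name:

```python
# celeryconfig.py for the NEW workers -- a sketch, not the only way.
# The queue name 'tasks_json' is made up; pick whatever fits your routing.
CELERY_TASK_SERIALIZER = 'json'      # new tasks are published as JSON
CELERY_RESULT_SERIALIZER = 'json'
CELERY_ACCEPT_CONTENT = ['json']     # these workers refuse pickle outright
CELERY_DEFAULT_QUEUE = 'tasks_json'  # publish to a fresh queue

# The OLD workers keep consuming the original queue until it is empty:
#   CELERY_ACCEPT_CONTENT = ['pickle', 'json']
# Once the pickle queue drains, shut them down and drop 'pickle' everywhere.
```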
jvazquez
Hello. I was asked a particular question about celery that I can't find an answer to: how many tasks can you put inside celery queues? I'm using redis as the backend. Is this determined by the max amount of memory available on redis?
It sounds like the question from the Celery optimizing guide: ❝How much water flows out of the Mississippi River in a day?❞
amcm
jvazquez yeah, you're limited by the storage available in your broker and how fast you're consuming the tasks.
Are you trying to figure out how many tasks you can hold and not process?
jvazquez
no, they just asked how many tasks you can place inside the queues
So that would be a relation between how much I can process and the amount of memory that I have on redis
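That relation can be put into rough numbers: capacity is approximately the memory available to Redis divided by the average serialized message size. Every figure below is a hypothetical guess; measure your own messages:

```python
# Back-of-envelope estimate of how many queued messages fit in Redis.
# All numbers are illustrative, not measurements.
redis_memory_bytes = 4 * 1024**3   # say Redis has 4 GiB to work with
avg_message_bytes = 2 * 1024       # ~2 KiB per serialized task message
overhead_factor = 1.5              # rough allowance for Redis list overhead

max_queued_tasks = redis_memory_bytes // int(avg_message_bytes * overhead_factor)
print(max_queued_tasks)  # roughly 1.4 million messages
```

The useful takeaway is the shape of the formula, not the number: bigger payloads or slower consumers shrink the headroom linearly.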
amcm
Do they mean, different tasks? Like sum, add, mul ?
jvazquez
Yes, different tasks
amcm
So that question is "How many tasks can you register?" I don't think there's a real limit to that,
beyond typical limits like your program needing to fit in RAM
karanlyons
Is there a way to get more performant behavior with multiple workers and eta'd tasks? I think right now I'm not properly saturating the workers with tasks because eta'd tasks ignore prefetch/-Ofair settings.
jvazquez
thanks amcm
nickabbey
anyone familiar with celery.contrib.rdb know if / how you can use rdb.Pdb.runcall() to step through a specific task that doesn't have a breakpoint set?
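For what it's worth, the debugger class in celery.contrib.rdb extends the stdlib Pdb, so runcall() has the same shape as in plain pdb: it enters the debugger at the first line of the called function, no breakpoint needed. A minimal illustration of that shape using stdlib pdb, driven non-interactively here so it runs unattended (the target function is made up):

```python
import io
import pdb

def target(x):
    # Imagine this is the body of the task you want to step through.
    y = x + 1
    return y * 2

# runcall() enters the debugger at target's first line -- no breakpoint set.
# We feed a single "continue" command so this demo needs no terminal.
dbg = pdb.Pdb(stdin=io.StringIO("continue\n"), stdout=io.StringIO())
result = dbg.runcall(target, 20)
print(result)  # 42
```

With Celery's remote debugger, the documented entry point is rdb.set_trace() inside the task body, which listens on a socket (tunable via CELERY_RDB_HOST / CELERY_RDB_PORT) that you telnet into; runcall() only helps if you can invoke the task function directly in a process you control.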
travis-ci
celery/celery#2939 (master - 4d998d1 : Ask Solem): The build was fixed.
hello all. Is it possible to install celery in the local directory where my project will run, or does it need to be installed either globally or inside a virtualenv?
my project will run on both ubuntu 14 and 12, and I noticed that the standard deb versions vary from 2.4 to 3
and from what I saw in the doc examples, the celery API is different between the two versions. Not sure if celery 3 is backward compatible though
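One way to sidestep the distro-version skew entirely is a per-project virtualenv rather than the Ubuntu packages, so both machines run the same release. A sketch, with an illustrative version pin:

```shell
# Per-project install; avoids the 2.4-vs-3.x skew between Ubuntu releases.
virtualenv .venv                     # or: python -m venv .venv on newer Pythons
. .venv/bin/activate
pip install "celery>=3.1,<3.2"       # pin one series for both 12.04 and 14.04
```

A virtualenv is itself just a local directory, so this is effectively "installing into the project directory" with the import machinery set up correctly.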
jamescarr
this might be silly... but can I have an app with a task defined as a noop that is used to publish messages...
and then have the consuming app hold the actual implementation?
amcm
jamescarr: you can publish an unregistered task by name; you don't need to fake it like that
jamescarr
ah thank you
amcm
Check the docs, it's in there somewhere
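The mechanism amcm is pointing at is send_task(), which publishes by name without importing any implementation. A producer-side sketch; the task name, broker URL, and queue are all made up:

```python
from celery import Celery

# Producer app: defines no tasks at all, it only publishes by name.
app = Celery('producer', broker='redis://localhost:6379/0')

# 'proj.tasks.process' must be registered in the CONSUMING app; here it is
# just a string, so the producer needs no shared task code.
result = app.send_task('proj.tasks.process', args=(42,), queue='work')
```

Because only the string name crosses the wire, producer and consumer can live in entirely separate codebases, which is exactly the noop-stub situation jamescarr was trying to emulate.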
fxfitz
So, I'm trying to configure a backend for the first time. Specifically, ironcache. Well, Celery seems to create the cache, but I'm not seeing anything being thrown into it.
Now I don't have a good environment set up for my worker to succeed in, but I'm still expecting to see a failure, since I'm seeing this log message: Task worker.install[15ca23a3-xxxxxxx52f2c3] succeeded in 0.92937989402s: False
Any idea why I wouldn't be seeing any key/values thrown into the cache?
felixn
anyone have a "job reporter" they like for Celery? I may just roll my own, but I really just want to see the rate of a given job and its progress to completion; not sure if that's easy to see with Flower (we have lots of complex workflows). I've used logentries in the past and could do basic aggregation, but it had a lot of limitations
fxfitz
felixn: Isn't that a Celery backend? I'm not sure, I'm just starting to look into these now.
felixn
fxfitz: not sure! maybe see if it works with a local cache, then go from there
fxfitz
felixn: Yeah, so it seems like the point of a backend is to keep track of the current status of your tasks.
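Roughly right, with one nuance: the result backend stores per-task state (PENDING, STARTED, SUCCESS, FAILURE) and return values, not worker status. A configuration sketch using the era's uppercase setting names and a built-in cache backend; the memcached URL is a placeholder, and whether IronCache behaves identically is worth checking in its own docs:

```python
# celeryconfig.py sketch: where task states and return values get written.
CELERY_RESULT_BACKEND = 'cache+memcached://127.0.0.1:11211/'  # placeholder URL
CELERY_TASK_RESULT_EXPIRES = 3600   # prune stored results after an hour
CELERY_IGNORE_RESULT = False        # if True, nothing is ever written
```

A common reason for an empty backend is results being ignored, either via CELERY_IGNORE_RESULT or a per-task ignore_result=True, so that is a reasonable first thing to check in fxfitz's situation.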