#celery

      • ep0
        Lock wait timeout exceeded; try restarting transaction
      • and the task has only a print statement
      • does anyone have any idea why that occurs?
      • Magicking
        Hi, I was wondering what the best way is to use Celery tasks across multiple projects to do IPC. Right now I think I'll use a shared library to ease communication between the projects, but I'm not sure about that design; it seems like a lot of boilerplate for a few placeholders
      • ep0
        @magicking what are you trying to achieve exactly?
      • you have multiple projects
      • and each project runs multiple processes?
      • Magicking
        There is a Django app, and the two other projects do CPU work
      • ep0
        and the other projects need to exchange data, right?
      • Magicking
        Yup
      • ep0
        you're using celery to run tasks defined in those projects?
      • Magicking
        Yup
      • I plan to
      • ep0
        if you're doing that, there's a big chance that the projects won't run at the same time
      • Magicking
        Using the library with a naming convention works, but it doesn't seem right
      • ep0
        more specifically, they won't be in the same phase of execution where process A sends some data and process B picks it up right away
      • so you're looking for a pipeline design
      • Magicking
        Yeah, that's not a problem
      • There are some sync and async tasks
      • ep0
        sync and async related to celery?
      • for celery, sync means that the caller will wait until the task is finished
      • and async means that a worker will run the task and put the result in a backend service
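[Editor's note] A minimal sketch of the sync-vs-async distinction ep0 describes, using the stdlib's `concurrent.futures` as a stand-in for a Celery worker and result backend (the executor and `add` task here are illustrative, not Celery's API):

```python
from concurrent.futures import ThreadPoolExecutor

# A stand-in for a Celery task: some work that runs in a worker.
def add(x, y):
    return x + y

executor = ThreadPoolExecutor(max_workers=1)  # the "worker"

# Async: submit the task and keep going; the result lands in a future
# (Celery would place it in the result backend instead).
future = executor.submit(add, 2, 3)
# ... the caller is free to do other work here ...

# Sync: block until the task is finished and fetch the result,
# like calling .get() on a Celery AsyncResult.
print(future.result())  # 5
```

In Celery terms, `executor.submit(...)` plays the role of `task.delay(...)` and `future.result()` the role of `AsyncResult.get()`.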
      • Magicking
        Yup, exactly that
      • But your keyword "pipeline" seems to be what I needed
      • ep0
        well, the purpose of celery is to allow you to run background tasks
      • so as to not hold the main thread
      • Magicking
        Yeah, but it's an easy way to do IPC, that's why I'm using the sync tasks
      • ep0
        then you're not using celery to its full potential
      • I was using this at my last job
      • where some servers would gather data from various sources across the web
      • they were running endlessly and that was their only job: get data from the web
      • but each URL would have a different response time (network hiccups, lag, etc.)
      • and each process would place the data in a queue on Kestrel
      • Magicking
        I'm gonna check it out thank you
      • ep0
        other processes on other servers would pick that data and process it and put results on another queue
      • and finally, other processes would take that data and save it in database
      • you're welcome
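[Editor's note] The pipeline ep0 describes — fetchers feeding a queue, processors feeding a second queue, and a final stage persisting results — can be sketched with stdlib queues and threads. In his setup, Kestrel queues and Celery workers replace the in-process queues and threads; every name below is illustrative:

```python
import queue
import threading

raw = queue.Queue()        # stage 1 -> stage 2 (first Kestrel queue)
processed = queue.Queue()  # stage 2 -> stage 3 (second Kestrel queue)
SENTINEL = object()        # signals "no more work" to the next stage

def fetcher(urls):
    # Stage 1: gather data from the web; here we just fake one response per URL.
    for url in urls:
        raw.put((url, f"payload from {url}"))
    raw.put(SENTINEL)

def processor():
    # Stage 2: pick up raw data, process it, push results onward.
    while (item := raw.get()) is not SENTINEL:
        url, payload = item
        processed.put((url, payload.upper()))
    processed.put(SENTINEL)

def writer(results):
    # Stage 3: persist results (a real system would write to a database).
    while (item := processed.get()) is not SENTINEL:
        results.append(item)

results = []
threads = [
    threading.Thread(target=fetcher, args=(["http://a", "http://b"],)),
    threading.Thread(target=processor),
    threading.Thread(target=writer, args=(results,)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # [('http://a', 'PAYLOAD FROM HTTP://A'), ...]
```

Because each stage only talks to a queue, stages can run on different machines and at different speeds, which is exactly why the phases of execution never need to line up.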
      • anystrom
        I’m having issues running Celery tasks that use quite a bit of RAM (about 2 GB) on an Amazon EC2 machine. I have plenty of RAM, so it seems it ought to work. I’m using preforking concurrency with c=1, and I’m getting a memory allocation error when os.fork() is called. Is there some memory restriction on Celery processes or something?
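[Editor's note] One possible cause, a guess rather than a diagnosis: `os.fork()` copies the parent's address space copy-on-write, but under strict kernel accounting (`vm.overcommit_memory = 2` on Linux) the kernel must still be able to commit the child's full 2 GB, so fork can fail with ENOMEM even when free RAM looks plentiful. A quick, Linux-specific check of the overcommit mode:

```python
from pathlib import Path

def overcommit_mode():
    """Return the Linux overcommit policy, or a note if unavailable."""
    path = Path("/proc/sys/vm/overcommit_memory")
    if not path.exists():
        return "not Linux; /proc sysctl unavailable"
    mode = path.read_text().strip()
    return {"0": "heuristic overcommit",
            "1": "always overcommit",
            "2": "strict accounting"}.get(mode, mode)

print(overcommit_mode())
```

If strict accounting is the culprit, relaxing the policy or adding swap usually lets the fork succeed; whether that applies to anystrom's EC2 instance is an assumption.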
      • Whisket
        I have a Redis backend that is storing my results. How do I pop results from the backend one by one so I can process the result values individually?
      • Elsmorian
        Hi all, we are having some trouble getting celery to put tasks on anything but the default queue, called 'celery'. We can create a queue in Rabbit by setting a worker to consume from it, and we have tried setting the routing and the default queue, but nothing is working.
      • (via django)
      • keisetsu
        Elsmorian: You have to set the queues in your app config.
      • Elsmorian
        We have tried this in our Django settings, but celery doesn't seem to change its behaviour
      • keisetsu
      • I'm not certain how it's done using django, but should be similar. I know I had a lot of trouble getting this all straight at first.
      • Elsmorian
        keisetsu: yeah we have set these settings, but to no avail
      • Can you see what settings celery is using from a python shell?
      • keisetsu
        Elsmorian: if you pastebin your celery-related settings, I might be able to help. Like I said, I'm not familiar with django. You should be able to see the settings in a django shell, can't remember the name of the property (yourapp.conf or something like that)
      • Elsmorian: You've probably seen this, but just in case: http://www.caktusgroup.com/blog/2014/06/23/sche...
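[Editor's note] A sketch of the kind of settings Elsmorian's Django config would need, assuming the uppercase pre-Celery-4 setting names in use at the time; the task path and queue names are placeholders:

```python
# settings.py (Django) — pre-4.0 uppercase names; on Celery 4+
# the equivalents are task_default_queue and task_routes.
CELERY_DEFAULT_QUEUE = "default"

CELERY_ROUTES = {
    # The key must match the task's registered name exactly, or the
    # route silently never fires and tasks land on the default queue.
    "myapp.tasks.heavy_task": {"queue": "heavy"},
}
```

A worker then has to consume the extra queue explicitly, e.g. `celery worker -Q default,heavy`; a mismatch between the route key and the task's actual registered name is a common reason "nothing is working".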
      • Ok, I've got a question. Anyone know if/how I can provide kwargs to the link_error callback task when serializing without pickle? Can't seem to find much about it in the documentation.
      • Whisket
        Does nobody here know how to iterate over results in the backend?
      • or how to get the results of a periodic task
      • I'm convinced there is no way to iterate over periodic task results. It seems that Celery was designed to make that impossible
      • Which is very unfortunate
      • phlax
        hi, I'm using django and celery - when I run django using runserver, celery tasks get queued fine - but when I run it as a wsgi app using gunicorn, tasks don't seem to get scheduled - any ideas why that might happen?
      • murk
        phlax: There's a variety of things that can be misconfigured. Compare your setup to http://michal.karzynski.pl/blog/2013/06/09/djan... and http://michal.karzynski.pl/blog/2014/05/18/sett... and you should be on your way in no time
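[Editor's note] A frequent cause of phlax's symptom is that the gunicorn process never loads the Celery app configured from Django's settings, so tasks are sent with default broker settings. The layout below follows Celery's Django integration guide of that era; `proj` is a placeholder project name:

```python
# proj/celery.py
import os

from celery import Celery
from django.conf import settings

# Ensure Django settings (including the broker URL) are loaded even
# when the process isn't started via manage.py, e.g. under gunicorn.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "proj.settings")

app = Celery("proj")
app.config_from_object("django.conf:settings")
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

# proj/__init__.py should then contain:
#     from .celery import app as celery_app
# so the app is configured whenever Django (and therefore gunicorn)
# imports the project package.
```

If this import is missing, runserver can still work by accident while the WSGI path quietly sends tasks to a broker no worker is watching.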
      • jbrhbr
        for anyone who has been following my ongoing ramblings about Bad Frame Read, I can reproduce the problem now with the latest build of librabbitmq in a very small reproduction case
      • however… if I don't use my custom librabbitmq version and just use whatever gets pulled by pip, there is no error in this test case
      • so maybe my local build is messed up, or my rabbit config is weird, or this latest librabbitmq code just has a bug in it
      • ionelmc: I get that assertion failure that you ran into periodically as well
      • ionelmc
        jbrhbr: which issue?
      • jbrhbr
        let me see if I can find the message again, but you posted about it in some thread a while back
      • this one
      • but it's inconsistent
      • ionelmc
        jbrhbr: I wasn't able to reproduce it anymore, sorry
      • jbrhbr
        don't be sorry :) I'm just mentioning it because I saw your name there
      • the setup in the gist i posted a moment ago encounters it periodically
      • ionelmc
        jbrhbr: and I'm not using the asynpool anymore, so my app is not affected; kinda hard to muster the will to chase race conditions now :)