#celery


      • micah_chatt
        or part of the same site?
      • dashx
        same site
      • micah_chatt
        do you have celery beat running?
      • or rather, where do you have celery beat running?
      • dashx
        right now each server has celery beat running
      • micah_chatt
        ah ha
      • dashx
        yeah
      • micah_chatt
        that is your problem
      • dashx
        yes i know
      • what is your solution
      • im thinking a separate server
      • micah_chatt
        yea thats what we did
      • dashx
        that hosts redis, and a celery beat
      • micah_chatt
        [] = server
      • dashx
        and have every webapp server use that redis server
      • so that they can help each other out?
      • micah_chatt
        {} = autoscale group
      • dashx
        so if one server is being lazy it can use more resources on its celery workers?
      • kk
      • micah_chatt
        {[web],[web]} {[worker], [worker]} {[beat]}
      • basically set your celery beat to its own autoscale group
      • it can be tiny
      • since its just kicking off jobs
      • set the desired capacity to 1
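The `{[web],[web]} {[worker],[worker]} {[beat]}` split above comes down to which celery command each group runs. A sketch, assuming an app module named `proj` (the name is invented):

```shell
# On each box in the {[worker]} autoscale group: run only workers.
celery -A proj worker --loglevel=info

# On the single box in the {[beat]} group (desired capacity 1): run only
# the scheduler, so every periodic task is kicked off exactly once.
celery -A proj beat --loglevel=info
```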
      • dashx
        why even bother putting it in an autoscale group
      • im thinking redis server with the beat attached to it
      • micah_chatt
        in case you need to change the server configuration
      • or in case the beat server dies
      • its AWS, stuff can happen
      • we also wrote our own wrapper process that we run celery beat under, in order to catch the edge case of 2 celery beat servers being online at the same time when we update the server configuration
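The chat doesn't show micah_chatt's actual wrapper, but its core idea can be sketched as a guard that grants the beat role to at most one node. This is a minimal stand-in with the lock held in memory so the logic is self-contained; in production the lock would live in shared storage such as Redis, and every name here is invented:

```python
# Hypothetical sketch of an "only one beat may run" guard; not the
# wrapper from the chat. In production the holder record would live in
# a shared store (e.g. a Redis lock), not a Python attribute.
class BeatGuard:
    """Grants the beat role to at most one node at a time."""

    def __init__(self):
        self._holder = None

    def acquire(self, node):
        # Refuse if another node already runs beat -- the edge case that
        # appears during a rolling configuration update.
        if self._holder is not None and self._holder != node:
            return False
        self._holder = node
        return True

    def release(self, node):
        if self._holder == node:
            self._holder = None
```

A real wrapper would acquire the lock with a timeout before exec'ing `celery beat`, keep refreshing it while beat runs, and exit if the lock is ever lost.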
      • dashx
        and do you host your redis/rabbit on the beat server?
      • micah_chatt
        noooooo
      • Use Elasticache
      • Djarnis
        crohling88: Thx for pinging me ;) Will be testing sooooooooon ;)
      • micah_chatt
        you want a central broker shared by your beat and all your workers
      • dashx
        holy fkk i can host a redis cache in AWS?!?!
      • micah_chatt
        that your web talks to as well
      • damn straight
      • While you're at it, run Postgres on RDS
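In Celery 3.x settings, the shared-broker setup micah_chatt describes might look like the fragment below. This is a sketch: the ElastiCache endpoint hostname is invented.

```python
# Hypothetical Celery 3.x settings module loaded by web, workers, and
# beat alike, so they all share one central broker. The endpoint
# hostname below is made up.
BROKER_URL = 'redis://myapp-redis.abc123.0001.use1.cache.amazonaws.com:6379/0'
```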
      • dashx
        im running mysql on RDS
      • micah_chatt
        yea ok
      • Also, AWS protip: use CloudFormation
      • Djarnis: try setting BROKER_POOL_LIMIT = None
      • Its a performance overhead, but may solve some (dis)connection problems
      • dashx
        well ty for elasticache, that is going to be awesome to use
      • micah_chatt
        no problem
      • dashx: its important to use only one broker for your site
      • otherwise its like running mysql on each web server
      • dashx
        micah_chatt, should i use the elasticache as the broker, and the celery workers can use the local redis cache for the result backend?
      • does that make any sense?
      • micah_chatt
        do your tasks return anything meaningful?
      • dashx
        right now no
      • micah_chatt
        set it to None
      • dashx
        and if they did?
      • return something meaningful, that is?
      • micah_chatt
        then set the backend to somewhere you would want to collect/observe it
      • If you use RabbitMQ, that is a bad backend
      • Redis isn’t a bad idea
      • dashx
        what about the same broker?
      • micah_chatt
        yea that is fine
      • dashx
        k
      • micah_chatt
        but the DB backend also works and may be easier to query
      • It’ll make your DB usage go up a bit (depending on the task artifacts/number of tasks)
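The backend choices discussed above might look like this in a Celery 3.x settings module. A sketch only; both endpoints are invented:

```python
# 1. Tasks return nothing meaningful: store no results at all.
CELERY_RESULT_BACKEND = None
# (or per task: @app.task(ignore_result=True))

# 2. Results worth observing: the same Redis broker works as a backend.
# CELERY_RESULT_BACKEND = 'redis://myapp-redis.example.com:6379/1'

# 3. Or the database backend (an SQLAlchemy URL), which is easier to
# query ad hoc but adds DB load that scales with result size and count.
# CELERY_RESULT_BACKEND = 'db+mysql://user:pass@rds-endpoint/celery'
```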
      • Djarnis
        micah_chatt: I think BROKER_POOL_LIMIT = None was the catch … Had it uncommented in the code … Can’t wait til the test results comes in ;)
      • micah_chatt
        ok
      • dashx
        Thanks for the help btw micah.
      • micah_chatt
        no problem
      • dashx: do you know about celery flower?
      • that was the best thing we had to do monitoring with Redis as the broker
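Flower runs as its own process pointed at the shared broker, which fits dashx's plan of co-locating it with beat. A sketch, with an invented endpoint and the default port:

```shell
pip install flower
# Point Flower at the same Redis broker the workers use (hostname is
# made up); the web UI then serves on port 5555.
celery -A proj flower --broker=redis://myapp-redis.example.com:6379/0 --port=5555
```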
      • dashx
        micah_chatt: I saw it today. I'm thinking it will go on the same server that's running my beat
      • asksol
        micah_chatt: For rabbitmq it's any argument supported by amqp.Connection, or librabbitmq.Connection
      • micah_chatt
        brilliant, thank you!
      • bmbouter
        asksol: it's time for my daily ping ;-)
      • asksol
        bmbouter: what did I miss? :)
      • bmbouter
        I randomly drop links to that comment every few days
      • there's just a lot going on so I could see missing it
      • I'm looking for feedback on that plan
      • asksol
        I don't normally read the backlog, if there is a backlog that I need to read it's my mail :(
      • The plan is good, the downside is adding yet another setting we need to manage
      • brb, interview
      • and 4.0 is so complete we just need to sprint a bit to complete the documentation
      • bmbouter
        asksol: I can't imagine you could read all of these things. Can you +1 the plan and re-open the issue
      • perhaps someone on my team can work on it
      • if so, we would do it for both 3.1 and 4.0
      • asksol
        bmbouter: a huge issue is that worker_direct() won't actually have access to the configuration
      • but I guess we could make it lazy and use a WorkerDirect type so that send_task fills it out later
      • I also think the setting name is too specialized; it would be better if there was a CELERY_WORKER_DIRECT_QUEUE that enables you to set the queue/exchange settings
      • bmbouter
        yes I agree
      • asksol
        CELERY_WORKER_DIRECT_QUEUE = {'auto_delete': True}
      • bmbouter
        that will work for 4.0
      • for backwards compatibility on 3.1 I'll suggest the separate setting for auto delete
      • We need to continue on 3.1 for a while longer and we want that in 3.1 so it's our carrot so to speak
      • asksol
        but that means we have to maintain both of them
      • bmbouter
        that's the downside of backwards compatibility
      • we could make a new setting that does it the 4.0 way in 3.1 and just name it differently in both
      • that would be additive
      • asksol
        yeah, but it means 4.0 needs to support WORKER_DIRECT_QUEUE and WORKER_DIRECT_QUEUE_AUTO_DELETE
      • bmbouter
        no 4.0 could do it totally different than 3.0
      • 3.1 rather
      • according to semantic versioning that would be appropriate
      • asksol
        it could, but that would be breaking the deprecation policy :)
      • if we add this setting to 3.1 we need to support it until at least 5.0
      • bmbouter
        oh interesting I'm not familiar with that
      • where can I read more about that in the docs?
      • asksol
        we use the same policy as Django: https://docs.djangoproject.com/en/1.9/internals...
      • note that there is nothing stopping you from doing what you want in 3.1 already
      • bmbouter
        asksol: I'll think more about this and try to suggest a more cohesive plan
      • I have to go now, unfortunately
      • can you describe this more though "a huge issue is that worker_direct() won't actually have access to the configuration"
      • and this "we could make it lazy and use a WorkerDirect type so that send_task fills it out later"
      • I'll see what you write even if I can't respond right now