and have every webapp server use that redis server
so that they can help each other out?
micah_chatt
{} = autoscale group
dashx
so if one server is being lazy it can use more resources on its celery workers?
kk
micah_chatt
{[web],[web]} {[worker], [worker]} {[beat]}
basically set your celery beat to its own autoscale group
it can be tiny
since its just kicking off jobs
set the desired capacity to 1
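[editor's note] The advice above (beat pinned to its own tiny autoscale group with exactly one instance) could be sketched like this; the group and launch-configuration names are hypothetical, and the boto3 call is shown commented out since it needs AWS credentials:

```python
# Sketch: a dedicated, single-instance autoscale group for celery beat.
# All names here are made up for illustration.
beat_asg = {
    "AutoScalingGroupName": "celery-beat",       # hypothetical name
    "LaunchConfigurationName": "celery-beat-lc", # hypothetical name
    "MinSize": 1,
    "MaxSize": 1,
    "DesiredCapacity": 1,  # exactly one beat instance at a time
}

# Applied with boto3 it would look roughly like:
# import boto3
# boto3.client("autoscaling").create_auto_scaling_group(**beat_asg)
```

Min, max, and desired capacity are all 1 so AWS replaces a dead beat instance but never runs two on purpose.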
dashx
why even bother putting it in an autoscale group
im thinking redis server with the beat attached to it
micah_chatt
in case you need to change the server configuration
or in case the beat server dies
its AWS, stuff can happen
we also wrote our own wrapper process that we run celery beat under, in order to catch the edge cases of 2 celery beat servers being online at the same time when we update the server configuration
dashx
and do you host your redis/rabbit on the beat server?
micah_chatt
noooooo
Use Elasticache
Djarnis
crohling88: Thx for pinging me ;) Will be testing sooooooooon ;)
micah_chatt
you want a central broker shared by your beat and all your workers
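[editor's note] In Celery configuration terms, "a central broker shared by beat and all your workers" just means every node loads the same broker URL; a minimal sketch using Celery 4-style lowercase setting names, with a made-up ElastiCache hostname:

```python
# celeryconfig.py -- shared by beat and every worker node.
# The hostname below is hypothetical; substitute your ElastiCache endpoint.
broker_url = "redis://my-cache.abc123.use1.cache.amazonaws.com:6379/0"
result_backend = broker_url  # optional: reuse the same Redis for results
```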
I randomly drop links to that comment every few days
there's just a lot going on so I could see missing it
I'm looking for feedback on that plan
asksol
I don't normally read the backlog, if there is a backlog that I need to read it's my mail :(
The plan is good, the downside is adding yet another setting we need to manage
brb, interview
and 4.0 is so complete we just need to sprint a bit to complete the documentation
bmbouter
asksol: I can't imagine you could read all of these things. Can you +1 the plan and re-open the issue
perhaps someone on my team can work on it
if so, we would do it for both 3.1 and 4.0
gnoze5 has quit
homeshlice_ joined the channel
cjhubert joined the channel
citizen-stig joined the channel
asksol
bmbouter: a huge issue is that worker_direct() won't actually have access to the configuration
but I guess we could make it lazy and use a WorkerDirect type so that send_task fills it out later
I also think the setting name is too specialized; it would be better if there was a CELERY_WORKER_DIRECT_QUEUE that enables you to set the queue/exchange settings
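[editor's note] The "make it lazy" idea could be sketched as a proxy that records the hostname immediately but defers resolving the queue/exchange until the configuration is available (e.g. when send_task runs). The class, setting name, and the "dq"/"C.dq" defaults below are illustrative, not Celery's actual types:

```python
class LazyWorkerDirect:
    """Hypothetical lazy stand-in for a worker-direct queue.

    Construction needs no configuration; resolve() is called later,
    once a configured app is available to fill in the details.
    """
    def __init__(self, hostname):
        self.hostname = hostname
        self._queue = None

    def resolve(self, conf):
        if self._queue is None:
            # "dq" / "C.dq" are illustrative defaults here.
            exchange = conf.get("worker_direct_exchange", "C.dq")
            self._queue = ("%s.dq" % self.hostname, exchange)
        return self._queue

q = LazyWorkerDirect("worker1@host")   # no config needed yet
name, exchange = q.resolve({})         # config supplied at send time
```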