has anyone managed to run flower behind / under / with nginx?
I have set up a proxy_pass with nginx, but all the links in the templates point to 127.0.0.1, and the CSS and JS are missing (for the same reason, I assume)
’lo all, so I’m using amqp as the result backend but unfortunately had a lot of errant tasks which should’ve had ignore_result=True but didn’t, and the config CELERY_RESULT_PERSISTENT was True… so now my RabbitMQ is full of result queues and I’d like a way to get rid of their messages
keeping in mind some of the results /are/ actually important :/
FunkyBob
eppa: it's more that the expiration timeout is disabled
AIUI CELERY_RESULT_PERSISTENT just tells the queue to make them restart-proof
eppa
FunkyBob: ah sorry yep you’re right
(FunkyBob and I are working together)
ionelmc
__rodrigo__: could it be a missing slash at the end of the prefix?!
__rodrigo__
ionelmc, do you see the navbar and the error?
ionelmc, I don't know, let me try
FunkyBob
eppa: makes me wonder how many old results are polluting the DB
ionelmc
it's likely some flower configuration issue
eppa
so I guess my question is: will setting CELERY_RESULT_PERSISTENT to some timedelta clean up the old results as well as any new ones?
ionelmc
sadly i don't use flower
FunkyBob
eppa: I thought it was CELERY_TASK_RESULT_EXPIRES that was the issue
ionelmc
eppa: keeping the results?! the amqp backend deletes the results after they are consumed ( .get() or whatever )
FunkyBob
ionelmc: yes, but we have nothing trying to consume the results, AFAIK
__rodrigo__
ionelmc, nope, --url_prefix=/flower/ does the same
eppa
FunkyBob: shit, paste buffer screw up sorry
FunkyBob
ionelmc: the tasks were erroneously not marked ignore_result
just add the rewrite and try to forget the horror of urlprefix-thats-not-really-a-prefix
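(The rewrite-plus-prefix setup being described usually looks something like the fragment below; the port, prefix, and upstream address are assumptions, not taken from the conversation.)

```nginx
# Serve flower under /flower/, stripping the prefix before proxying.
location /flower/ {
    rewrite ^/flower/?(.*)$ /$1 break;
    proxy_pass http://127.0.0.1:5555;   # flower's default port (assumed)
    proxy_set_header Host $host;
}
```

flower would then be started with `--url_prefix=flower` so the links and static asset URLs it generates include the prefix.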
__rodrigo__
ionelmc, thanks, thanks a loooot!
eppa
so, does anyone happen to know if setting CELERY_TASK_RESULT_EXPIRES to some $seconds works retroactively on task results, or only for new task results?
ionelmc
i don't think so
there's no way to do it retroactively
*reliable way*
eppa
ionelmc: I see, OK
snewpy
eppa: I think it depends on the backend... if it's a backend that requires celerybeat to reap old results, then I think it does, but for rabbitmq or redis no
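(For reference, the expiry knobs discussed above would sit in the Celery config roughly as follows; this is a sketch using the old-style CELERY_* names from this conversation, with made-up values, and per the answers above the expiry only applies to results stored after the setting takes effect.)

```python
# celeryconfig.py -- illustrative values only
CELERY_RESULT_BACKEND = "amqp"
CELERY_TASK_RESULT_EXPIRES = 3600   # seconds until a stored result expires
CELERY_RESULT_PERSISTENT = False    # don't persist result queues across broker restarts
```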
ionelmc
so did you guys find the cause of the segfault ?
eppa
snewpy: OK, so I’ll have to think of something else then thanks :)
FunkyBob
ionelmc: not directly... just the old dj-celery setup still being called while using the new API...
as I keep telling you
ionelmc
FunkyBob: well yeah, i hoped you have a more detailed explanation now :)
normal python code can't segfault - you don't have direct memory access
djcelery doesn't use ctypes or c extensions (beyond what django uses) as far as i know
FunkyBob
yeah, I know... as soon as I saw segfault, I assumed it was a clib somewhere
eciv
FunkyBob, eppa, one way would be to just purge all of rabbitmq
FunkyBob
eciv: as eppa said, we don't want to lose the results for the tasks we _do_ care about
eciv
FunkyBob, ah, well yup you're pretty much stuck with making a script that reads the logs for the tasks you want to purge, gets their task ids, and get()s those tasks
__rodrigo__, nice!
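(The log-scraping half of that suggestion could be sketched as below; the log format is an assumption based on celery's default "Task <name>[<uuid>] ..." worker lines, and the function name is hypothetical.)

```python
import re

# Assumed log line shape: "... Task myapp.tasks.noisy[<36-char uuid>] succeeded ..."
TASK_LINE = re.compile(r"Task (?P<name>[\w.]+)\[(?P<id>[0-9a-f-]{36})\]")

def task_ids_to_purge(log_lines, unwanted):
    """Collect ids of tasks whose results we want consumed (i.e. deleted)."""
    return [m.group("id")
            for m in (TASK_LINE.search(line) for line in log_lines)
            if m and m.group("name") in unwanted]

# Consuming a result removes it from the amqp backend, e.g.:
#   from celery.result import AsyncResult
#   for tid in task_ids_to_purge(open("worker.log"), {"myapp.tasks.noisy"}):
#       AsyncResult(tid).get(timeout=5)
```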
__rodrigo__
eciv, thanks! :)
FunkyBob
ionelmc: I think our resulting fix also included completely uninstalling django-celery, celery, kombu and billiard... and then installing celery[librabbitmq] ... just to be sure
but that was before I discovered djcelery was still in the mix
I don't believe we've managed to replicate the issue in development... and we're certainly not going to set it up again in production... so, I'm afraid it's something left to the "there have been reports" pile
ionelmc: I don't suppose there's a blessed way to set ignore_result=True to be the default?
eppa
FunkyBob: I know there’s CELERY_IGNORE_RESULT config option, but can tasks override that on an individual basis I wonder?
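(They can: a task's own ignore_result attribute takes precedence over the global default. A minimal, unverified sketch, where the app name and broker URL are placeholders:)

```python
from celery import Celery

app = Celery("myapp", broker="amqp://")   # placeholder broker URL
app.conf.CELERY_IGNORE_RESULT = True      # global default: store no results

@app.task(ignore_result=False)            # per-task override: this result IS stored
def important_sum(x, y):
    return x + y

@app.task                                 # inherits the global default: no result stored
def fire_and_forget():
    pass
```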