#celery


      • frege
        should my_app in "celery -A my_app worker" be the path to my app?
      • I'm getting this error: Tue Jul 29 12:40:46 2014 - [uwsgi-daemons] "/home/sharedbay/venv/bin/celery -A sharedbay worker" (pid: 2516) did not daemonize !!!
      • xkillac4
        asksol: so there seems to be some delay between the TaskPool and task_accepted lines. What triggers those outputs?
      • nicksloan
        xkillac4: IIRC, that is left to the user to implement
      • oh
      • whoops, was scrolled waaaaaaaay back
      • xkillac4
        nicksloan: hehe, no problem
      • any help interpreting my last gist is much appreciated :) I'm sending 10 tasks in a for loop, and they each just return an empty list. Not sure why it takes 10sec for them all to get executed.
      • nicksloan
        asksol: so going back to my question from yesterday... the -Q option should restrict a worker to working on a single queue regardless of whether CELERY_QUEUES are defined or inferred, right?
      • any idea why that might not be happening?
      • asksol
        xkillac4: could be that there is some timer latency
      • frege: the -A argument is really the full path to the app attribute, so e.g. -A proj.celery:app means the worker will do "from proj.celery import app"
      • nicksloan: yeah, the -Q option calls app.queues.select_subset which the consumer uses as a list of queues to consume from
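[Editor's note] As an aside on the -Q/CELERY_QUEUES interaction described above, a hedged Celery 3.1-style config sketch (the project name `proj` and the task name are hypothetical):

```python
# celeryconfig.py (sketch; setting names from the Celery 3.1 docs)
from kombu import Queue

CELERY_QUEUES = (
    Queue('default', routing_key='default'),
    Queue('feeds', routing_key='feeds'),
)
CELERY_DEFAULT_QUEUE = 'default'

# route one (hypothetical) task to the 'feeds' queue
CELERY_ROUTES = {
    'proj.tasks.refresh_feed': {'queue': 'feeds'},
}

# A worker started with:
#   celery -A proj worker -Q feeds
# should then consume only 'feeds', since -Q selects that subset of queues.
```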
      • nicksloan
        asksol: hmm
      • so either there is a weird bug there, or, more likely, my routing is broken
      • frege
        gotcha
      • asksol
        frege: -A proj is for convenience and will look for the app
      • frege: first -A proj:app then -A proj.celery:app from the conventions in the guides, then any app instances from those modules
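[Editor's note] The lookup order asksol describes can be sketched in plain Python. This mirrors the described behaviour only, not Celery's actual loader code; `find_app` is a name invented here:

```python
from importlib import import_module

def find_app(spec):
    """Resolve a -A value such as 'proj', 'proj:app' or 'proj.celery:app'."""
    if ':' in spec:
        # explicit form: the worker effectively does "from proj.celery import app"
        modname, attr = spec.split(':', 1)
        return getattr(import_module(modname), attr)
    # convenience form '-A proj': try proj:app first, then proj.celery:app
    for modname in (spec, spec + '.celery'):
        try:
            module = import_module(modname)
        except ImportError:
            continue
        if hasattr(module, 'app'):
            return module.app
    raise AttributeError('no app found for %r' % spec)
```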
      • xkillac4
        asksol: 8sec worth? Behavior changes if I change "@shared_task(rate_limit='5/s')" to "@shared_task", all 10 tasks finish in 130ms. No latency at all.
      • asksol
        xkillac4: it takes 8 seconds for the 3rd task to start?
      • xkillac4
        asksol: takes 8 seconds for all 10 tasks to finish
      • asksol: for 3rd and 4th tasks, "TaskPool: Apply" line occurs almost exactly 1sec after first two tasks finish; "Task Accepted" line occurs almost exactly 1sec later; then tasks finish in about 50ms
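[Editor's note] For context on rate_limit='5/s': Celery's rate limiting is token-bucket based (kombu's TokenBucket). A simplified pure-Python sketch of those semantics (not the real implementation) shows where the expected sub-second waits come from:

```python
import time

class TokenBucket:
    """Simplified token bucket: 1-token capacity, refilled at fill_rate/sec."""
    def __init__(self, fill_rate, capacity=1.0):
        self.fill_rate = float(fill_rate)
        self.capacity = float(capacity)
        self._tokens = self.capacity
        self._last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self._tokens = min(self.capacity,
                           self._tokens + (now - self._last) * self.fill_rate)
        self._last = now

    def can_consume(self, tokens=1):
        self._refill()
        if self._tokens >= tokens:
            self._tokens -= tokens
            return True
        return False

    def expected_time(self, tokens=1):
        # seconds until `tokens` become available again
        self._refill()
        return max(0.0, (tokens - self._tokens) / self.fill_rate)

bucket = TokenBucket(5)            # rate_limit='5/s'
bucket.can_consume(1)              # first task goes through immediately
wait = bucket.expected_time(1)     # next task should only wait ~0.2s
```

With a 5/s limit each held task should only wait about 0.2s, so the roughly 1s gaps reported above point at timer latency rather than the bucket itself.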
      • asksol
        right, and my output prints that the 3rd task should sit in the timer for 0.1957 seconds, but it seems to take 0.5s for that to happen
      • the timer is extremely accurate for eta tasks though, so not sure where the latency is coming from
      • and it should have applied more than one task after those 0.5s
      • xkillac4
        asksol: time for a bug report?
      • or, where to go from here... willing to get my hands dirty
      • frodopwns
        xkillac4: are you on the newest release of celery?
      • I would consider spinning up a VM and seeing if you can get rate_limit to work without all the other stuff in your app potentially causing conflicts
      • xkillac4
        frodopwns: on 3.1.13
      • frodopwns: my app is basically idling and I'm launching tasks from the shell. however, maybe I'll just try blowing away everything and starting again with a fresh celery install
      • frodopwns: I had 3.1.11 installed, then upgraded with pip install celery --upgrade
      • asksol
        xkillac4: yeah, it definitely seems to trigger the timer too late; calling fire_timers more often didn't help though
      • it used to use a separate thread for this
      • probably something stupid
      • I did test it when I rewrote the code, could be a recent regression
      • mozilla joined the channel
      • mozilla is now known as paw
      • paw
        Hello all, I am using celery in my application for queueing and running tasks asynchronously. I need to be able to add scheduling to my application, and the schedules should be editable. How do I proceed? Is django-celery the only option?
      • nicksloan
        asksol: the way my project is currently set up, I call several tasks with the same input, and upon completion, those tasks all call a single task with their output. Finally, that task calls another task with its output. I'm not using groups or chains or chords or callbacks or anything; I just call nexttask.delay() in each task. Is there a good reason to change it all now?
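[Editor's note] The dataflow described here maps onto a chord whose callback is a chain, roughly chord(group)(chain(collect, final)) in canvas terms. A broker-free plain-Python sketch of the same shape (all stage names hypothetical):

```python
def fan_out(x, workers):
    # the "group": several tasks called with the same input
    return [w(x) for w in workers]

def collect(outputs):
    # the single task every group member feeds into (the chord callback)
    return sum(outputs)

def final(value):
    # the task that receives collect()'s output (the chain's last link)
    return value * 10

result = final(collect(fan_out(3, [lambda x: x + 1, lambda x: x * 2])))
# (3 + 1) + (3 * 2) = 10, then * 10 = 100
```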
      • asksol
        xkillac4: try this patch:

        diff --git a/celery/worker/consumer.py b/celery/worker/consumer.py
        index 8afa85b..7bf4576 100644
        --- a/celery/worker/consumer.py
        +++ b/celery/worker/consumer.py
        @@ -182,6 +182,7 @@ class Consumer(object):
                 self._restart_state = restart_state(maxR=5, maxT=1)
                 self._does_info = logger.isEnabledFor(logging.INFO)
        +        self._limit_order = 0
                 self.on_task_request = on_task_request
                 self.on_task_message = set()
                 self.amqheartbeat_rate = self.app.conf.BROKER_HEARTBEAT_CHECKRATE
        @@ -252,11 +253,17 @@ class Consumer(object):
                     else self.qos.increment_eventually)(
                         abs(index) * self.prefetch_multiplier)

        +    def _limit_move_to_pool(self, request):
        +        task_reserved(request)
        +        self.on_task_request(request)
        +
             def _limit_task(self, request, bucket, tokens):
                 if not bucket.can_consume(tokens):
                     hold = bucket.expected_time(tokens)
        +            pri = self._limit_order = (self._limit_order + 1) % 10
                     self.timer.call_after(
        -                hold, self._limit_task, (request, bucket, tokens),
        +                hold, self._limit_move_to_pool, (request, ),
        +                priority=pri,
                     )
                 else:
                     task_reserved(request)
      • ahg
      • f**
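[Editor's note] The priority=pri round-robin in the patch above matters because timer entries due at the same tick are ordered by priority, so rotating through 0..9 keeps same-tick requests in submission order. A toy heap-based timer (a sketch, not Celery's actual timer) shows the effect:

```python
import heapq

timer = []
limit_order = 0
for request in ['task-3', 'task-4', 'task-5']:
    limit_order = (limit_order + 1) % 10            # the patch's rotating priority
    heapq.heappush(timer, (1.0, limit_order, request))  # all due at eta=1.0

fired = [heapq.heappop(timer)[2] for _ in range(3)]
# same eta, so priority breaks the tie: tasks start in submission order
# fired == ['task-3', 'task-4', 'task-5']
```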
      • xkillac4
        lol
      • asksol
      • pastie doesn't help much if you forget to copy the url ;)
      • paw: the djcelery periodic task scheduler is the only database backed scheduler, but you can use it standalone without using django in your project
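[Editor's note] The standalone usage asksol mentions boils down to pointing beat at djcelery's database-backed scheduler; a hedged config sketch (Celery 3.1 / django-celery names, `proj` hypothetical):

```python
# settings sketch: store the periodic task schedule in the database so
# entries can be added, modified, or deleted at runtime (e.g. via Django admin)
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'

# equivalently, on the command line:
#   celery -A proj beat -S djcelery.schedulers.DatabaseScheduler
```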
      • nicksloan
        asksol: it seems like a chord with a chain as the callback would suit my needs. Other than simplifying things though, do I gain anything?
      • asksol
        nicksloan: not really, but it could matter once parent_id/root_id lands and flower etc. support it; that could be useful for monitoring, but you could add the same to your custom solution as well
      • nicksloan
        asksol: yeah... built in a hurry when I just started learning about celery. Really regret not taking more time.
      • prefer to avoid "custom" for things like this
      • paw
        asksol: Thanks! I have some confusion. Is django-celery still required if I am currently using celery 3.1?
      • asksol
        paw: no, but it contains models if you want to use the Django db as a result backend, and the periodic task database scheduler
      • so it can be used as an add-on, so to speak
      • paw
        asksol: Okay! So I still need to use it in order to be able to modify my scheduled tasks..
      • asksol
        it also contains the old views for checking task results and so on, but that is very simple to do yourself
      • if you want to store the periodic tasks schedule in a database, yes
      • paw
        asksol: I assume that there is no other way to be able to modify/delete/add periodic tasks in runtime
      • asksol
        you can write your own scheduler subclass and use whatever shared storage you want, but the one in djcelery is fairly complete
      • it also contains the django admin interface to modify periodic tasks
      • xkillac4: I think that should work, it seemed to me like it took 2 tokens out of the bucket for every task when the limit is for more than one task/s. 1/s would work and so would n/m, etc
      • paw
        asksol: cool! Thanks for your answer! Helped me clear some confusion...
      • xkillac4
        asksol: thanks for the patch, just gave it a whirl. It's close. I sent 10 tasks; it executed 2, waited 2sec, then executed 8. That's 5/sec on average, but unfortunately I need to stay at or under 5/sec at every moment to avoid rate limits
      • asksol: does CELERYD_TIMER_PRECISION do anything?
      • nicksloan
        asksol: yeah, I'd say my routing is fucked up: http://i.imgur.com/YFYUCIl.png (one call, these are supposed to work like a group)
      • celery-doku
      • I am truly baffled by this issue
      • afaraone
        asksol: this ongoing semaphore issue may be impetus for me to upgrade from python 2.6. If I do, are there any strong arguments from the celery perspective for going to 2.7 vs 3.x?