#elixir-lang

      • justicefries joined the channel
      • whatyouhide has quit
      • status402 has quit
      • Rich_Morin
        How do I get information on the current execution stack (a la Ruby's Object#caller() method)?
      • sudshekhar joined the channel
      • The closest thing I've found would be to throw an exception, catch it and run stacktrace() - this seems more than a little baroque...
      • chrismcc_ joined the channel
      • chrismccord has quit
      • oneeman
        ~~ Macro.Env.stacktrace(__ENV__)
      • beamie
        [{:elixir_compiler, :__FILE__, 1, [file: "code/expr.exs", line: 1]}]
      • cpup joined the channel
      • oneeman
        but use __CALLER__ instead of __ENV__
      • oops, nevermind, use __ENV__, not __CALLER__
      • Rich_Morin: ^^^
      • Rich_Morin
        That only gives me one level.
      • oneeman
        oh, don't know then
      • JuanMiguel has quit
      • jschneck joined the channel
      • Rich_Morin
        The problem is that it tells me about the tracing function, rather than the code that called the tracing function...
      • jschneck has quit
      • oneeman
        I see what you mean
      • jakehow joined the channel
      • slyv has quit
      • cpup joined the channel
      • rbishop has quit
      • josevalim joined the channel
      • antkong joined the channel
      • chrisconstantin joined the channel
      • josevalim
        chrismcc_: back
      • chrismcc_
        josevalim: same
      • chrismcc_ is now known as chrismccord
      • josevalim
        Rich_Morin: there is also Process.info(self, :current_stacktrace) or something
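
A minimal sketch of the two approaches mentioned above, assuming an iex session; `Macro.Env.stacktrace/1` only yields the single compile-time frame beamie printed, while `Process.info/2` returns the current process's runtime stack.

```elixir
# Compile-time environment: only one frame, as Rich_Morin noted.
Macro.Env.stacktrace(__ENV__)

# Runtime stacktrace of the current process:
{:current_stacktrace, frames} = Process.info(self(), :current_stacktrace)
Enum.each(frames, &IO.inspect/1)
```
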
      • Kabie_ joined the channel
      • chrismccord: ah, we do have phoenix_namespace
      • now, good point
      • chrismccord
        josevalim: yes :(
      • but we could rename that too
      • I agree "namespace" isn't a word we should promote
      • josevalim
        i don't have any suggestion though
      • app_alias?
      • app_alias is weird...
      • Rich_Morin
        josevalim: cool; I'll check it out
      • chrismccord
        root_alias?
      • base_alias
      • josevalim
        chrismccord: my concern is the alias part :)
      • chriscon_ joined the channel
      • I think app is the most declarative part in it right now :P
      • chrismccord: for Redis, i think we should take the back-off
      • chrisco__ joined the channel
      • chrismccord: and just retry every X seconds
      • chrismccord
        josevalim: That's what we do
      • well, we fail for supervisor restart after 15s
      • josevalim
        you still fail after X attempts
      • chrismccord
        Oh, you'd prefer to just keep on with biz as usual
      • fishcakez_
        yes
      • Kabie_ has quit
      • chrismccord
        kk
      • josevalim
        chrismccord: right. that will trigger a local restart, causing local and channels to crash
      • chrismccord: also, we need to bring status: :disconnected back (or something of sorts)
      • chrismccord
        It's still there
      • josevalim
        ah, sorry
      • hold on
      • chrisconstantin has quit
      • chrismccord: ah!!!!
      • chrismccord: you are delegating broadcast to the client
      • chriscon_ has quit
      • chrismccord
        yar
      • only for redis right now
      • josevalim
        chrismccord: now the pool would be welcome
      • chrismccord
        I did for PG2 also, but the failure semantics get complex
      • I.e., if the pg2 group disappears, I had to send a message back to the server to go down
      • and it was only like 1% perf diff on my benchmarks (for pg2)
      • josevalim
        cool
      • chrisco__ has quit
      • chrismccord
        But redis savings are huge and since we have eredis taking care of the conn for us, we can let the client broadcast
      • cpup joined the channel
      • josevalim
        chrismccord: you were not seeing timeouts then?
      • eredis without pool can easily become a bottleneck
      • chrismccord
        No timeouts from seejee's stress tester
      • josevalim
        ok
      • chrismccord
        but the node benchmarker might not be able to push enough
      • josevalim
        chrismccord: how many processes pushing to redis are in the benchmark?
      • in other words, is it testing concurrency?
      • because it will be fine if you have just a handful of things pushing
      • chrismccord: but this is exactly the scenario that requires a pool (the scenario we didn't have before in redis)
      • chrismccord
        josevalim: I'm not terribly sure tbh. seejee would know better on how many conns
      • josevalim
        chrismccord: the pool code is still around, right?
      • chrismccord
        josevalim: Yeah I can dig it up
      • josevalim
        also, what happens if the eredis_client is dead and you broadcast?
      • does it return {:error, _}?
      • chrismccord
        yep
      • josevalim
        beautiful, that's exactly what goes in the pool
      • chrismccord
        josevalim: so we can have that function just :poolboy.transaction
      • right?
      • josevalim
        yes
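
A hedged sketch of that idea; the module, pool name, and Redis command below are illustrative assumptions, not Phoenix's actual code. Workers checked out of the pool are assumed to be eredis client pids.

```elixir
defmodule RedisBroadcaster do
  @pool :phoenix_redis_pool  # hypothetical poolboy pool name

  def broadcast(topic, message) do
    :poolboy.transaction(@pool, fn eredis_pid ->
      # :eredis.q/2 returns {:ok, _} or {:error, reason} (e.g. when the
      # client is disconnected), which is exactly what gets surfaced here.
      case :eredis.q(eredis_pid, ["PUBLISH", topic, message]) do
        {:ok, _} -> :ok
        {:error, reason} -> {:error, reason}
      end
    end)
  end
end
```
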
      • is eredis_client lazy?
      • can we put them directly under poolboy or do we need to wrap them?
      • chrismccord
        I think they can go directly under poolboy actually
      • we weren't doing that before, but now all we need is the pid
      • josevalim
        chrismccord: we need to document all the pool options then
      • what happens if eredis_client cannot connect on start?
      • they crash though, right?
      • chrismccord
        I don't know. I assume so, but you're right, unless we want to spam redis with our pooled retries, we need to wrap
      • josevalim
        chrismccord: check if you can start the app with redis down by pooling directly the client
      • if you can, we are good
      • fishcakez_: ^
      • do you agree with this?
      • fishcakez_
        link the client?
      • josevalim
        no, I mean that if redis is not available on start, the worker needs to start regardless?
      • fishcakez_
        yes
      • josevalim
        chrismccord: so we need to wrap
      • chrismccord: connect on first command and if you can't connect, return the error
      • chrismccord: same pattern we do in Ecto
      • i don't know why eredis does not support lazy starts
      • would make things easier
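
A rough sketch of that Ecto-style wrapper, under assumptions: the module and option names are hypothetical. The worker always starts, even with Redis down, and only attempts a connection when the first command arrives, returning the error if it cannot connect.

```elixir
defmodule RedisWorker do
  use GenServer

  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, opts)

  def init(opts) do
    # No connection attempt here, so the supervisor can start us with Redis down.
    host = Keyword.get(opts, :host, '127.0.0.1')
    port = Keyword.get(opts, :port, 6379)
    {:ok, %{conn: nil, host: host, port: port}}
  end

  def handle_call({:command, cmd}, _from, %{conn: nil} = state) do
    # Connect lazily on the first command; if Redis is unreachable,
    # stay alive and just return the error to the caller.
    case :eredis.start_link(state.host, state.port) do
      {:ok, conn} -> {:reply, :eredis.q(conn, cmd), %{state | conn: conn}}
      {:error, reason} -> {:reply, {:error, reason}, state}
    end
  end

  def handle_call({:command, cmd}, _from, %{conn: conn} = state) do
    {:reply, :eredis.q(conn, cmd), state}
  end
end
```
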
      • chrismccord
        josevalim: here's what we were doing with the pool https://github.com/phoenixframework/phoenix/blo...
      • it lazily started on first command
      • josevalim
        chrismccord: yeah
      • nah, now i am thinking about leaving things as is
      • but it does sound like we should pool
      • fishcakez_
        easier not to pool until you need to
      • kurko_ has quit
      • josevalim
        chrismccord: if fishcakez_ said it, then it is said
      • chrismccord
        haha copy that
      • We'll be able to avoid the poolboy dep too until it's needed
      • Okay max_conn_attempts is gone
      • josevalim
        but i still think moving the broadcaster out will make things easier
      • rgrinberg1 joined the channel
      • we can still enter some weird states
      • chrismccord: for example, eredis_pid can be nil
      • chrismccord
        josevalim: It could until my last commit
      • like two min ago
      • fishcakez_
        oh rabbitmq backend, that is cool
      • chrismccord
      • josevalim
        but it is kind of fake
      • chrismccord: because that disconnect is about the sub being disconnected
      • chrismccord
        josevalim: damnit! you're right
      • gahhh eredis
      • I don't understand why eredis_sub can't use the :eredis api
      • josevalim
        chrismccord: there are still other weird states you could enter
      • i think if the sub drops, when it is back
      • you replace the eredis sub without killing it
      • cpup joined the channel
      • eerrr
      • the eredis client
      • so you can have two copies
      • chrismccord
        josevalim: that's fine, because the client will get {:error, whatev} from :eredis.q
      • josevalim
        chrismccord: no, i mean you can have multiple eredis running, orphaned
      • sub dies, sub is back, you reconnect starting a new eredis without killing the old one
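
A hedged sketch of the fix implied here: terminate the old client before starting a replacement, so a reconnect does not leave orphaned eredis processes behind. The helper and its argument are hypothetical; the real backend would keep the pid in its GenServer state.

```elixir
defmodule RedisReconnect do
  # Stop the previous eredis client (if still alive) before reconnecting.
  def reconnect(old_pid) do
    if is_pid(old_pid) and Process.alive?(old_pid), do: :eredis.stop(old_pid)

    case :eredis.start_link() do
      {:ok, pid} -> pid
      {:error, _reason} -> nil
    end
  end
end
```
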