#apache-kafka


      • Rob_vH
        I'm looking for information on handling offset rollbacks on high-level consumers when (for non-rdkafka reasons) message consumption could not succeed. Say the message requires the HL consumer to write out to Elasticsearch, but it's down and the request has timed out; we want to un-commit the offset so that the message gets processed again. I've googled for "rdkafka high level consumer" + "set offset", "rollback offset", "uncommit" and read…
      • snaps: ditto if we have to respond to a SIGTERM and haven't finished or can't finish our work; we want to be able to put back the offset so that the data isn't lost. If you can recommend some reading for this, a resource, or better search terms, I'd appreciate it.
      • nicholasserra
        Hello. Trying to get username/password auth working with Kafka. My current settings allow me to connect only with the right authentication details, but for some reason sending a message times out. Here are my settings; does this behavior sound familiar to anyone? Thanks!
      • security.inter.broker.protocol=SASL_PLAINTEXT
      • sasl.mechanism.inter.broker.protocol=PLAIN
      • sasl.enabled.mechanisms=PLAIN
      • listeners=SASL_PLAINTEXT://example.com:9092
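        A hedged sketch of the pieces that usually go with broker settings like these: a produce request that authenticates but then times out is often either the client not configured for SASL itself, or the broker not advertising a listener the client can reach. The hostname and the alice credentials below are placeholders, not values from the log.
            # Broker (server.properties): advertise the SASL listener under a name clients can resolve
            advertised.listeners=SASL_PLAINTEXT://example.com:9092
            # Client (producer/consumer properties): must speak SASL/PLAIN as well
            security.protocol=SASL_PLAINTEXT
            sasl.mechanism=PLAIN
            sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
              username="alice" password="alice-secret";
        On clients older than 0.10.2 the same PlainLoginModule entry goes in a JAAS file passed via -Djava.security.auth.login.config instead of the sasl.jaas.config property.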
      • Snaps
        jmasseo: haven't seen anything like that, unless your application links with a special resolver that overloads getaddrinfo(), perhaps?
      • jmasseo: also make sure the broker's advertised hostnames (advertised.listeners) are correct and reachable from the client. (This is a common problem with Docker, for instance.)
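        For reference, a minimal server.properties sketch of the setting being referred to; broker2.example.com is a placeholder and must resolve from wherever the client actually runs (e.g. from inside the container):
            # bind on all interfaces, but advertise a name the client can resolve
            listeners=PLAINTEXT://0.0.0.0:9092
            advertised.listeners=PLAINTEXT://broker2.example.com:9092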
      • jmasseo
        yeah i added them to /etc/hosts, ran dnsmasq, etc.
      • i think it is more likely to be the former
      • Snaps
        jmasseo: do you have a log sample to show?
      • jmasseo
        or something of that ilk.
      • since the sample consumer resolves fine
      • %3|1490673540.566|FAIL|rdkafka#consumer-1| [thrd:broker2:9092/2]: broker2:9092/2: Failed to resolve 'broker2:9092': Name or service not known
      • i did track it down in the rdkafka code to the getaddrinfo call
      • but i got no answers. the application i'm extending seems pretty highly questionable anyway.
      • Snaps
        jmasseo: yeah, librdkafka isn't doing anything special, it just calls the libc resolver.
      • rob_vh: disable auto commit and manually commit offsets to your liking; this allows you to commit an older offset.
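        A rough sketch of that pattern using the Python confluent-kafka client (which wraps librdkafka); the topic, group id, and index_into_elasticsearch() sink are hypothetical. With auto commit disabled, an offset is committed only after the sink call succeeds, so a failed write leaves the last committed offset in place and the message is redelivered on the next run:
            from confluent_kafka import Consumer, KafkaException

            consumer = Consumer({
                'bootstrap.servers': 'localhost:9092',  # placeholder broker
                'group.id': 'es-indexer',               # hypothetical consumer group
                'enable.auto.commit': False,            # we decide when an offset counts as done
                'auto.offset.reset': 'earliest',
            })
            consumer.subscribe(['events'])              # hypothetical topic

            try:
                while True:
                    msg = consumer.poll(1.0)
                    if msg is None:
                        continue
                    if msg.error():
                        raise KafkaException(msg.error())
                    try:
                        index_into_elasticsearch(msg.value())  # hypothetical sink; may raise on timeout
                    except Exception:
                        # Do not commit: on restart or rebalance the group resumes from the
                        # last committed offset, so this message is delivered again.
                        break
                    # Sink succeeded: commit this message's offset synchronously.
                    consumer.commit(message=msg, asynchronous=False)
            finally:
                consumer.close()
        The same approach covers the SIGTERM case raised earlier: stop polling, skip the commit, and close; whatever was not committed is re-consumed by the next instance.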
      • jmasseo
        right. i was just hoping you'd seen something similar before. thanks anyway!
      • myusuf3
        I am near disk full. I just lowered my retention to 24 hours and restarted, but I have seen zero change in disk size.
      • I need guidance on what to do to clear data. The system is processing in near real time and I don't need that much retention; 12 hours is more than enough.
      • I have close to 110 topics with 50 partitions each on a 6-broker system, but I doubt that is relevant.
      • I don't have log.retention.bytes set, but I do have log.segment.bytes set to 0.5 GB. Do I need to decrease this for the 24-hour retention to kick in?
      • haha, this is the time to offer advice; I am in a bind here.
      • I just reduced log.segment.bytes to 250 MB and lowered retention.minutes to 5.
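        Two things worth noting for this situation: time-based retention only removes closed log segments (never the active one), and the broker only re-evaluates it every log.retention.check.interval.ms (5 minutes by default), so a lowered setting takes a little while to show up on disk. Retention can also be overridden per topic without a broker restart; a sketch, assuming a Kafka 0.10-era install and a placeholder topic name:
            # 12-hour retention override for one topic (43200000 ms)
            kafka-configs.sh --zookeeper localhost:2181 --alter \
              --entity-type topics --entity-name my-topic \
              --add-config retention.ms=43200000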
      • Rob_vH
        snaps: any advice on handling un-committing an offset with the HL consumer?