#logstash


      • divno
        jsvd: sorry, got disconnected.
      • Thanks for the tip
      • nokiomanz
        Hi, all. I asked a question in #elasticsearch but figured it was best to do it here. I am upgrading my Logstash from 5.6.9 to 6.2.4. Everything coming from Filebeat is fine, but for everything coming from Logstash I get this kind of error: https://pastebin.com/bfb5SEhS. I know it has to do with the removal of types in 6.x; I'm just not sure how to go about it.
      • darkmoonvt
        You need to change your filters to only send a single type value to elastic. A mutate right before the output stanza works. If you use type elsewhere (filters, queries, reports, etc.), you'll want to preserve the original type value in another field.
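A minimal sketch of the mutate approach described above; the orig_type field name is made up for illustration:

```
filter {
  # Keep the original value around for downstream filters/queries
  mutate { copy => { "type" => "orig_type" } }
  # Collapse everything to a single type before the output stanza
  mutate { replace => { "type" => "doc" } }
}
```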
      • nokiomanz
        If this helps, here is how I have my input and output: https://pastebin.com/iqCqB0gr. I can add the filter, but there is nothing special besides some grok.
      • darkmoonvt
        If you query your index, what type does it show on the documents?
      • nokiomanz
        Right now in 5.6.9, _type and type both say cloudfront for that index, named logstash-2018.05.29-cloudfront.
      • darkmoonvt
        You might want to set document_type. It will default to 'doc'.
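For reference, document_type is set on the elasticsearch output; the hosts and index below are assumed for illustration, not taken from the pastebin:

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd}-cloudfront"
    # 6.x defaults this to "doc"; set it explicitly to match your data
    document_type => "doc"
  }
}
```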
      • nokiomanz
        I will go read into that !
      • nokiomanz
        Ohh, I see.
      • I think it is all because I applied some logic in Logstash, declaring a type on each document to apply the "correct" filter before outputting to the cluster.
      • If I understand correctly now (which I did not a few minutes ago), the type field will be removed completely in 7.x. So if I stop using it right now, it could fix both current and future problems.
      • darkmoonvt
        Future problems. Right now it still has a default value, which doesn't match your data.
      • nokiomanz
        Noted. I will add the document_type to my config and upgrade back to 6.2.4 and see how it goes.
      • darkmoonvt
        Good luck.
      • nokiomanz
        darkmoonvt, nice, it does seem to fix it! I really misunderstood that part of the upgrade process.
      • darkmoonvt, big thanks for your time and help! Now it is working. Would you still say I should look into my use of the type field for filtering in Logstash and try to "fix" that now, in preparation for 7.x?
      • darkmoonvt, reading through the docs again: if I used the multiple-pipelines feature from the 6.0 release, I could have each log type go through its own pipeline, so I would not have to use an if-on-type conditional to apply the correct filter.
      • Or can they each only have their own .conf file per log type, with the same result? I'll need to read more!
      • Or maybe simply use tags instead of type and be future-proof too.
      • darkmoonvt
        Yes, that should work. It will depend on the rest of your environment.
      • (multiple input ports for different 'types' isn't an option in my case, for example.)
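The multiple-pipelines feature discussed above is configured in pipelines.yml; the pipeline ids and paths below are invented for illustration:

```
# /etc/logstash/pipelines.yml
- pipeline.id: beats
  path.config: "/etc/logstash/conf.d/beats.conf"
- pipeline.id: cloudfront
  path.config: "/etc/logstash/conf.d/cloudfront.conf"
```

Each pipeline gets its own inputs, filters, and outputs, so no type-based conditionals are needed to route events between them.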
      • nokiomanz
        I have a Beats input, which kind of takes care of itself. Same for my S3 input, though I did apply a type there to filter on; that could be a tag or a separate pipeline instead. I have 2 UDP inputs using different ports to add the proper type. You say that is not an option in your case; can I ask how you separate your logs?
      • darkmoonvt
        Almost all of my input is via *beats. We don't control the sources, but can require a few custom fields. We started out using the type/document_type setting in beats before we learned that the elastic type was being removed.
      • Now, we're moving to fields.type, and we route to the right parser based on that, mostly.
      • (syslog, http, snmp, netflow, argus, bro, duo, o365, etc.)
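A sketch of that fields.type routing; the field value and grok pattern here are assumptions for illustration, not darkmoonvt's actual config:

```
# On the shipper (filebeat.yml), a custom field replaces the removed type:
#   fields:
#     type: syslog

# In Logstash, route on that custom field instead of the document type
filter {
  if [fields][type] == "syslog" {
    grok { match => { "message" => "%{SYSLOGLINE}" } }
  }
}
```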
      • nokiomanz
        Noted, that is also what I am doing in Beats right now, so I will keep that in mind! Thanks a lot again for your time and for sharing!
      • darkmoonvt
        Glad to help.
      • IRC-Source_35366
        I need some help with my logstash. Is someone available to work with me?
      • Can someone tell me how to right-size my heap space? I am getting an error about it with my flows.
      • bjorn_
        What's the error?
      • IRC-Source_35366
        [2018-05-29T00:04:34,450][ERROR][org.logstash.Logstash ] java.lang.OutOfMemoryError: Java heap space
      • [2018-05-29T09:35:46,710][ERROR][logstash.pipeline ] Exception in pipelineworker, the pipeline stopped processing new events, please check your filter configuration and restart Logstash. {:pipeline_id=>"ppsat", "exception"=>"Java heap space", "backtrace"=>[], :thread=>"#<Thread:0x2a82ccd@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:247 sleep>"}
      • bjorn_
        Try increasing it gradually. For Logstash, increase -Xmx, but you can leave -Xms alone.
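The heap settings live in Logstash's jvm.options file; the sizes below are placeholders to illustrate a gradual bump, not a recommendation:

```
# /etc/logstash/jvm.options
-Xms1g
# Raise the ceiling gradually (e.g. 1g -> 2g -> 4g) until the OOM stops
-Xmx2g
```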
      • IRC-Source_35366
        What about elasticsearch?
      • bjorn_
        What about it?
      • IRC-Source_35366
        should I increase it as well?
      • bjorn_
        Not if Logstash is the only process complaining
      • IRC-Source_35366
        I have 5 pipelines configured and only 3 are creating indexes and none are updating.
      • bjorn_
        Why not?
      • IRC-Source_35366
        I don't know
      • Does this mean it put the pipeline to sleep? [2018-05-29T11:57:41,183][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"elastiflow", :thread=>"#<Thread:0x8623164@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:247 sleep>"}
      • bjorn_
        It's probably waiting for input
      • IRC-Source_35366
        I can confirm with tshark that it is getting data on the port it is supposed to be listening on.
      • There are only 3 instances of java starting. Is there some magic limit to the number of workers that can be started?
      • bjorn_
        Wouldn't think so, but I haven't worked much with separate pipelines.
      • IRC-Source_35366
        Increasing the memory has 2 of the pipelines updating now. Should I add more to get the others to start?
      • bjorn_
        Yeah
      • IRC-Source_35366
        Ok giving it a shot lol
      • Dang, it still only starts 3 java instances.