#logstash

      • wgreenberg
        hi! I'm finding this particularly hard to google, but is there any standard way of routing logstash/kibana/elasticsearch logs into that same ELK stack?
      • some built-in config line I'm missing? or should I just use a filebeat instance to do this
      • torrancew
        well
      • it's not always desirable
      • at least not directly
      • I naively used filebeat to ship LS logs to its own ELK cluster once
      • and the result can be a sort of force multiplier if/when problems arise in the ELK pipeline
      • esp scaling problems
      • wgreenberg
        hmm, that's unfortunate but does make sense. is it typical then to just view ELK service logs the ol' fashioned way?
      • rastro
        wgreenberg: i have a single-node ES cluster set up in parallel to my other clusters to manage the ES logs.
      • wgreenberg: i also run metricbeat output to that cluster.
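A minimal sketch of rastro's setup — ES metrics collected by metricbeat and shipped to a separate single-node monitoring cluster. The hostnames, metricsets, and period here are assumptions, not from the conversation:

```yaml
# Hypothetical metricbeat.yml sketch (hostnames are made up):
# collect Elasticsearch node stats and ship them to a parallel
# monitoring cluster rather than the production cluster itself.
metricbeat.modules:
  - module: elasticsearch
    metricsets: ["node", "node_stats"]
    period: 10s
    hosts: ["http://localhost:9200"]

output.elasticsearch:
  hosts: ["http://monitoring-es:9200"]
```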
      • torrancew
        wgreenberg: there are other approaches, too
      • eg if you can live without filtering them, filebeat could ship ELK-specific logs straight to ES -- in that case, you only have to worry about capacity planning and issues with ES itself
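The "filebeat straight to ES" approach torrancew describes could look roughly like this; the log paths, hostname, and index name are assumptions for illustration (and a custom index name also requires matching `setup.template` settings in real deployments):

```yaml
# Hypothetical filebeat.yml sketch: ship ELK service logs directly
# to a separate ES cluster, skipping logstash (so no filtering).
filebeat.inputs:
  - type: log
    paths:
      - /var/log/logstash/*.log
      - /var/log/elasticsearch/*.log

output.elasticsearch:
  hosts: ["http://monitoring-es:9200"]
```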
      • in my case, I subjected the entire pipeline to the error flow
      • filebeat (well, logstash-forwarder, but that's irrelevant) would collect them, ship them to logstash, which would parse them and send them to ES
      • separate indices, shard allocation settings, etc could be used
      • but, it's a thing you should design up front
      • and not just turn on
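The full pipeline torrancew describes (beats → logstash → ES, with the internal logs routed to their own index) might be sketched like this. The grok pattern, port, hostname, and index name are all assumptions, not anything stated in the channel:

```
# Hypothetical logstash pipeline sketch: parse the ELK stack's own
# plain-text logs and route them to a dedicated index.
input {
  beats { port => 5044 }
}

filter {
  grok {
    # Assumed log line shape: [timestamp][LEVEL][logger] message
    match => { "message" => "\[%{TIMESTAMP_ISO8601:ts}\]\[%{LOGLEVEL:level}\s*\]\[%{NOTSPACE:logger}\] %{GREEDYDATA:msg}" }
  }
}

output {
  elasticsearch {
    hosts => ["http://monitoring-es:9200"]
    # Separate daily index keeps internal logs out of application indices
    index => "elk-internal-%{+YYYY.MM.dd}"
  }
}
```

Keeping these logs in their own indices makes it possible to apply different shard counts and retention than the application data gets — which is the "design up front" point above.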