hi there, completely new to this, have set up ELK on CentOS 7
zamba
hi guys! we're running bro on a machine and we want to forward logs from this instance to the ELK stack running on a different machine.. i guess we have to set up logstash-forwarder on the bro machine?
puttylla
does anyone know the format for putting several logs (from client servers, that is) in the logstash config file on the server?
zamba: yes
zamba
puttylla: do i do any parsing on the logstash forwarder, or is all that handled by the logstash server?
puttylla
server does afaik
zamba
ok, cool
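A minimal sketch of the split being described, with placeholder hostnames, ports, and paths: logstash-forwarder on the bro machine ships raw lines, and a lumberjack input on the logstash server receives them and does all the parsing.

```
# /etc/logstash-forwarder.conf on the bro machine (host and paths are placeholders)
{
  "network": {
    "servers": [ "logstash.example.com:5043" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    { "paths": [ "/opt/bro/logs/current/*.log" ], "fields": { "type": "bro" } }
  ]
}
```

```
# on the logstash server: receive from the forwarder; grok/filters also live here
input {
  lumberjack {
    port            => 5043
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key         => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
```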
whack
bodik: we fixed that in 1.4.3
zamba
and can someone please explain the concept about indices?
puttylla
zamba: be aware I just set this up for the first time TODAY
zamba
puttylla: with bro?
puttylla
zamba: it is a way to structure data and tag logs to keep searching fast; otherwise it would be very slow
zamba: without, but the same applies to ELK
zamba
puttylla: well.. how many indices should one have, then?
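Part of the answer is that logstash creates indices for you: its elasticsearch output writes to one time-based index per day by default, so the count grows with retention. A sketch of that default, in 1.4-era syntax:

```
output {
  elasticsearch {
    host  => "localhost"
    # one index per day; old ones can be closed or deleted with curator
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```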
bodik
whack: fixed? i see different versions in the repo for 1.4.3 too
whack: the reason i'm fighting it is that in 1.4.3 the mongodb output is broken ;( (i know it's contrib ...)
[diecast]
what's a recommended java library for logging? we're currently using slf4j
bodik
so i wanted to pin the version to 1.4.2 which works, but can't do it through the puppet module ;(
hulu1522 has quit
hulu1522 joined the channel
electrical
bodik: uhg :(
[diecast]
oh, and log4j 1.4.2
whack
bodik: those version numbers are fine, what's the problem?
m0nky
howdy
whack
bodik: the "Version" is 1.4.3, the "release" is 1-7e387fb for logstash, 1-bf2ee84 for logstash-contrib
the release has little impact on the package
bodik
whack: the problem is that class { logstash: repo_version=>"1.4", version=>"1.4.2-1-xyz" }
it would install logstash, but not logstash-contrib
because build version numbers are not in sync for logstash and logstash-contrib packages
m0nky
I'm curious if anyone has ever had problems with LSF not reading other files explicitly defined in its configuration? Granted, the first file in its config is the largest (2GB/hr), but it never gets around to some of the smaller files.
whack
bodik: don't set version that way
bodik: set version 1.4.2 or 1.4.3
not 1.4.2-1-xyz
_hugespoon is now known as hugespoon
thorrsson-work joined the channel
puttylla
quick one, does anyone know if you can use a path with wildcards? say I have 100 chrooted apache vhosts, each with its log in a custom folder
bodik
whack: i tried that too
puttylla
so I do not want to put a path for each of the files (as that is impossible to maintain)
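The `paths` entries in a logstash-forwarder config accept shell-style globs, so a single pattern can cover every vhost; the chroot layout below is hypothetical:

```
{
  "files": [
    {
      "paths": [ "/var/chroot/*/var/log/httpd/custom/*.log" ],
      "fields": { "type": "apache" }
    }
  ]
}
```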
bodik
whack: same story, apt-get install logstash=1.4.2 >> version does not exist
whack
bodik: maybe this is a bad assumption in the puppet module
n1md4
I have a problem with kibana being unable to fetch mapping
whack
the release numbers should not need to be the same
the version numbers must match
Rapture joined the channel
bodik: try version => "1.4.2-*"
n1md4
I have no default index pattern, but do not know what it should match
bodik
whack: it looks that i use old version of puppet module for logstash too
achan joined the channel
whack: version => "1.4.2-*" works
whack: thank you very much .. ;)
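For reference, the working invocation from this exchange, with the release suffix wildcarded so both logstash and logstash-contrib resolve:

```
class { 'logstash':
  repo_version => '1.4',
  version      => '1.4.2-*',  # wildcard the release part; it differs between the two packages
}
```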
whack
woot
whack high fives bodik
avleen
afternoon!
whack
avleen: avleen !
avleen
WHACK!!
how're you doing man?
whack
hanging in there.
avleen
well, that's about all we can all do :)
zendrix joined the channel
finally managed to upgrade to ES 1.5.2 last night, so now I can start rolling out Kibana 4 :D
whack
wooot woot
avleen: 1.6 will help you also
wrt fast restarts
avleen
i saw that! oooooooooooo i'm so looking forward to that!
I don't think i told you this, it's been a while since we spoke, but I hacked up the date filter in an interesting way
we were getting a lot of logs with wild timestamps (mostly because clocks on things like mobile clients are completely untrustworthy)
and it was blowing up the number of indices we have
whack
how'd you solve it?
puttylla
I am confused, if I use rsyslog I do not need logstash-forwarder necessarily, right?
farmking joined the channel
whack
puttylla: you can use rsyslog instead of logstash forwarder, yes
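A minimal sketch of that alternative, assuming port 5514 and a placeholder hostname: rsyslog on the client forwards everything over TCP, and a syslog input on the logstash server listens for it.

```
# /etc/rsyslog.d/50-logstash.conf on the client (@@ = TCP, single @ = UDP)
*.* @@logstash.example.com:5514
```

```
input {
  syslog {
    port => 5514
  }
}
```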
acchen joined the channel
avleen
so I added to the date filter, to only allow events where the timestamp is +/- 1 day from today, and to log all the rest so we can parse them later if needed
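The core of that check, as a Python sketch rather than the actual patch (the real change lives inside logstash's date filter, which is Ruby):

```python
from datetime import datetime, timedelta, timezone

MAX_SKEW = timedelta(days=1)

def timestamp_is_sane(event_time, now=None):
    """Accept only events whose timestamp is within +/- 1 day of now."""
    now = now or datetime.now(timezone.utc)
    return abs(now - event_time) <= MAX_SKEW

# events failing the check would be logged separately for later reprocessing
```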
puttylla
whack: so if I send the logs that way, I just have to parse with elasticsearch, and the logstash server itself does not do anything?
whack
avleen: mobile client == a web browser? or etsy's app?
avleen
etsy's app
whack
avleen: any desire to have time within the app wrangled with ntp? :P
in-app ntp hehe
avleen
which can do things like bundle log events while the user has no reception, etc
whack
makes sense
avleen
welllllllllllll funny you should mention that!
i've raised the same idea :D
it should totally be possible to fake it, right?
acchen joined the channel
poll a URL for the current time, calculate an offset from the phone's clock, boom.
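That offset calculation is the heart of NTP minus the protocol; a hedged Python sketch, where the time source is an assumed callable rather than a real endpoint:

```python
import time

def clock_offset(server_time_fn, local_time_fn=time.time):
    """Estimate the local clock's offset from a trusted time source.

    server_time_fn is assumed to return current epoch seconds fetched
    over HTTP; sampling the local clock before and after the call and
    taking the midpoint roughly cancels network latency.
    """
    t0 = local_time_fn()
    server_now = server_time_fn()
    t1 = local_time_fn()
    return server_now - (t0 + t1) / 2.0  # add this to local timestamps
```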
puttylla
whack: disregard my comment, obviously logstash server will parse data that comes from rsyslog
avleen
whack: we also started deploying logstash 1.5 too. got it on one or two smaller clusters right now I think, and going to push to the main cluster in a few weeks.
whack
so is the problem still that an offline device may show up days later with good-timestamped logs but will cause your frozen indices (2+ days old) to receive new documents?
JahBurn1 joined the channel
avleen
yeah that problem would still exist. and I'm wondering if I might do better with sending those to a separate index. The volume is low enough that I could have month-based indices with day-based aliases
and kibana can query multiple indices at once, so that would be ok
whack
yeah
avleen
certainly an interesting problem :)
whack
maybe having a 'stray/old' logs index that you compact weekly or something
avleen likes interesting problems
avleen
yup!
xtruthx joined the channel
whack
if older than 1 day, send to the stray-old-app-logsYYYYMMdd thing
then you don't waste your daily _optimize calls
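In config terms the routing could look roughly like this, assuming the patched date filter adds a hypothetical `stray_timestamp` tag to out-of-window events:

```
output {
  if "stray_timestamp" in [tags] {
    elasticsearch {
      host  => "localhost"
      index => "stray-old-app-logs-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      host  => "localhost"
      index => "logstash-%{+YYYY.MM.dd}"
    }
  }
}
```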
avleen
yeahhhhhh! And give that index a bunch of aliases
derjohn_mob has quit
with YYYYMMDD
whack
+1 there
avleen
omg this is genius
whack++
rastro_
avleen: we had a 1B document day a while back. The cluster didn't handle it :(
whack high fives avleen
avleen
rastro_: I'm up to 3bn/day. We were much higher and then cut back some unnecessary logging. but we'll be at ~6bn by the end of the year I suspect.
whack
I think curator can even maintain your index aliases for this also
but not sure
avleen
i *think* it might be able to too, i know it does something with aliases.
untergeek
curator does do aliases
rolling, particularly
rastro_ loves how untergeek magically appears :)
avleen
right now I have a hacky python script which maintains some common aliases (logstash-1d, -2d, -3d... -1w, -2w, etc) to point to the right recent indices
lol!
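The alias juggling described there boils down to building `_aliases` actions like the following; a sketch under the alias names mentioned above, with an assumed daily index naming scheme, not the actual script:

```python
from datetime import date, timedelta

def rolling_alias_actions(alias, days, today=None):
    """Build an Elasticsearch _aliases payload pointing `alias`
    at the last `days` daily logstash indices."""
    today = today or date.today()
    actions = []
    for n in range(days):
        day = today - timedelta(days=n)
        actions.append({"add": {"index": "logstash-" + day.strftime("%Y.%m.%d"),
                                "alias": alias}})
    return {"actions": actions}

# e.g. POST the result of rolling_alias_actions("logstash-2d", 2) to /_aliases
```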
untergeek
I am magic, or at least IRCCloud's notifications make me so
avleen: Curator will do those for you
rastro_
i picture a command center with a klaxon.
avleen
untergeek: brb throwing my hacky python away :)
untergeek
or you could use the Curator API to help, if you wanted to keep using your hacky python
avleen
dude i never want to use my hacky python, even when i just wrote it and i'm proud of it.
untergeek
:)
avleen
literally the moment it's done, I get that "urgh, I don't want to do this" feeling :)