thanks for the pointer, I'll go search for the stable build in there
ingard__
rashidkpc: there?
rashidkpc: i was just thinking maybe adding something like the inquisitor plugin's analyzer/tokenizer tester to kibana would alleviate some of the bug reports about terms panels getting split on '-' etc.
Even though the Lucene search syntax is fairly straightforward, I can't seem to get accurate results returned from my search query
ingard__
or at least some way of showing to end users how each field is split by the analyzer, maybe it could even be something as easy as mouse over on the field picker or something
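For context, the splitting being discussed comes from Elasticsearch's standard analyzer, which breaks field values on non-alphanumeric characters before a terms panel counts them. A rough Python approximation, purely for illustration (this is not the real analyzer):

```python
import re

def standard_analyzer_approx(text):
    # Crude stand-in for Elasticsearch's standard analyzer: lowercase
    # the input and split on anything that is not a letter or digit.
    # This is why a terms panel over an analyzed field shows separate
    # "foo" and "bar" buckets for the stored value "foo-bar".
    return [tok for tok in re.split(r"[^0-9a-zA-Z]+", text.lower()) if tok]

print(standard_analyzer_approx("foo-bar"))  # ['foo', 'bar']
```

Exposing something like this per field in the UI would make it obvious to end users why their hyphenated values get split.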
surge_
If I search for "foo bar", I will get "foo bar" results, but then I'll get results that have "foo" and some that have "bar" and some that have neither!
It's frustrating because I'm trying to demo this to a dev that needs something better than tail + grep
ingard__
tail + grep + cut/awk usually does it for me :)
surge_
It's too early to troll right now lol
ingard__
surge_: i guess you need to specify what fields you want those keywords to appear in?
and maybe use filters instead of just the search bar
surge_
I do, but it's just flat out not working properly. I haven't set up logstash to filter out this particular message he's looking for
so unfortunately i have to use the @message field
ingard__
but can you not add 2 filters and search for * still?
filter "must" - field: message - query: "foo"
and another with query: "bar"
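Spelling out ingard__'s suggestion: Kibana filter panels end up as a bool filter in the Elasticsearch query DSL. A sketch of two "must" filters on the message field combined with a `*` query bar (the exact shape Kibana 3 generates may differ; field and query values are taken from the discussion above):

```python
import json

# Assumed request-body shape: a filtered query whose bool filter
# requires both "foo" and "bar" to match in the message field.
body = {
    "query": {
        "filtered": {
            "query": {"query_string": {"query": "*"}},
            "filter": {
                "bool": {
                    "must": [
                        {"query": {"query_string": {"default_field": "message", "query": "foo"}}},
                        {"query": {"query_string": {"default_field": "message", "query": "bar"}}},
                    ]
                }
            },
        }
    }
}
print(json.dumps(body, indent=2))
```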
surge_
Yeah, I guess I just thought
if i typed @message:"123 456" that I would get ONLY results with a message field that has EXACTLY that
and not like
123
and then 456
and then some with nothing
-___-
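For what it's worth, the behaviour surge_ expected is what an explicit phrase query gives you: unlike a loosely parsed query string, `match_phrase` only matches documents where the terms occur adjacent and in order. A sketch of the DSL equivalent (field name and value taken from the discussion):

```python
import json

# Phrase query: only documents whose @message contains "123 456" as an
# exact, in-order phrase should match.
phrase_query = {"query": {"match_phrase": {"@message": "123 456"}}}
print(json.dumps(phrase_query))
```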
ingard__
yeah man i dunno
i've more than once told my devs to use the proper @fields.username:"whatever"
surge_
I'll try stacking the search queries like you said
ingard__
instead of just searching for whatever and then they complain why they get all this other nonsense in the results
surge_
Right
I mean I would use that subfield if I could but I can't in this case =/
I guess I should upgrade. I'm using "Kibana 3 milestone pre-3"
Slyder
I don't suppose anyone has an "add_field" line they use for SonicWall logs? :)
whack: Hey, was curious if you can give me a quick run down of how to use the "add_field" for SonicWall Logs.
ingard__
surge_: i'm running head from 2 weeks back or so
surge_
Yeah I just looked at the ES blog and saw all the new features. Gonna upgrade now
tenaglia
one question: is lumberjack still considered "experimental"? I'm looking for a lightweight shipper supporting authn/authz for ~600 nodes.
eth00
tenaglia, i was going to ask the same thing. I'm looking at doing centralized logstash, and logstash itself has such a big footprint as a shipper
don't know what the next best solution is
paulczar
i'm not using it in production, but I've heard that people are
Maior
I use beaver
tenaglia
cool, the SSH tunnel option
Maior: does it "resume" consuming a log file when restarted?
jbags
I used the python script on github that provides for data retention. I removed all but 90 days' worth of data, after which the logstash indexer was unable to index and elasticsearch started to stack trace. Are you aware of any sort of problem with that script that might cause the database to corrupt? I've since moved that entire set of data off to the side and restarted everything, and now I'm not stack tracing and logstash is indexing properly. I'm curious if there's any good way to load that data back in slowly?
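On reloading the sidelined data slowly: one hedged approach is to feed the saved events back through Elasticsearch's bulk API in small, throttled batches rather than all at once. A minimal batching sketch; the `post_bulk` helper and the filename are hypothetical, and any HTTP client posting to `/_bulk` would do:

```python
import itertools
import time

def batches(iterable, size):
    # Yield successive lists of at most `size` items from the iterable.
    it = iter(iterable)
    while True:
        batch = list(itertools.islice(it, size))
        if not batch:
            return
        yield batch

# Hypothetical reload loop: stream the saved events and pause between
# batches so the indexer is not overwhelmed again.
# for batch in batches(open("saved-events.json"), 500):
#     post_bulk(batch)      # hypothetical helper wrapping POST /_bulk
#     time.sleep(1)         # throttle between batches
```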
Maior
tenaglia: yes
jbags
but I've lost 90 days worth of logs in the meantime
tenaglia
Maior: this is something lumberjack doesn't do actually
(at least it's in the "future functional features")
for how many boxes are you using it?
Zordrak
The current lumberjack fpm build seems to be missing the libs, which causes it to fail