The log has a lot of double quotes, and some symbols, but I've never run into an issue with that before. I can GREEDYDATA the whole log and not have problems.
pawnbox_ joined the channel
Eilyre joined the channel
Eilyre has quit
Eilyre joined the channel
pawnbox has quit
DevBox joined the channel
Xylakant has quit
cittatva joined the channel
Schwarzbaer has quit
cittatva
I have 4 nodes each with 1gb jvm heap; and my log indices are growing to over 7gb per day. when i go to kibana i'm getting "courier fetch: N of 50 shards failed". I'm backing up each day to s3 and deleting it from the cluster and things work ok through the morning, then that problem comes back. Do I have any options besides making my cluster bigger or putting less data in it?
es v1.4.4
torrancew
can you add heap to the nodes?
7gb per day is not query-able in full with only 4gb of heap in the cluster
at least, not particularly well
kcas_ has quit
cittatva
most of the time i'm just looking at 15-minute slices
but occasionally want to search for some string for, maybe, the past 30 days
geek_cl joined the channel
derjohn_mob joined the channel
doesn't have to be fast at that scale
torrancew
hard to get around the fact that querying involves heap
how many days do you keep on the cluster?
szhem has quit
cittatva
for now, just a single day
but i'd like to keep 30 days
torrancew
so, yeah, you're going to need more heap
ES recommendation is to give ES 50% of your RAM as heap
(last I knew anyway)
if you're not doing that, start there
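(For ES 1.x, as in cittatva's 1.4.4 cluster, the heap is set with the ES_HEAP_SIZE environment variable before startup. A minimal sketch, assuming a node with 8 GB of physical RAM; the binary path varies by install:)

```shell
# ES 1.x reads its heap size from this variable at startup
export ES_HEAP_SIZE=4g                        # ~50% of an assumed 8 GB of RAM
/usr/share/elasticsearch/bin/elasticsearch    # path is illustrative
```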
cittatva
one option that i could do is get pickier about what i store, throw away logs that aren't errors or metrics i want to track
torrancew
sure, but first, you've got to solve your 1day problem, before you even think about the 30d one
and it would definitely make sense to start by making sure your cluster follows best practices from upstream
Eilyre has quit
rtsq joined the channel
vodka joined the channel
pfallenop has quit
danman joined the channel
Xylakant joined the channel
nomadic_travelle joined the channel
danman
In Kibana, is it possible to auto expand all the rows so there's no need to click on each one?
n0m3d joined the channel
vodka has quit
n0m3d
hey guys, i'm testing ELK 5.0-alpha and getting a problem with the elasticsearch output plugin.
Soham__ has quit
Gen has quit
Currently i'm running a cluster of two nodes, and trying to push the logs with output { elasticsearch { host=>['myip'] } } but i got the error "Translation missing"
but when ES is on localhost and i use just elasticsearch {} it works perfectly
palecur joined the channel
palecur has left the channel
kepper has quit
nat26101 has quit
have no idea why it's happening
kepper joined the channel
keith4
using the elapsed module with logstash 2.3.1, I have timeout set to 60, but I'm seeing events with elapsed_time on the order of 200,000. quite puzzling
multi_io has quit
kcas_ joined the channel
multi_io joined the channel
rtsq has quit
kepper has quit
bmoss1 has left the channel
Xylakant has quit
also confusing that the docs indicate fields will be named like elapsed.time, elapsed.match, etc., but the code clearly uses underscores, not dots
Xylakant joined the channel
torrancew
keith4: probably got missed in the recent refactor
a recent ES version enforced that no dots be in field names
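(For reference, an elapsed filter configuration in Logstash 2.x looks roughly like this; the tag and field names are illustrative, and the fields it emits use underscores, e.g. elapsed_time, as keith4 observed:)

```conf
filter {
  elapsed {
    start_tag       => "taskStarted"   # illustrative tag names
    end_tag         => "taskEnded"
    unique_id_field => "task_id"       # field used to pair start/end events
    timeout         => 60              # seconds to wait for the end event
  }
}
```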
n0m3d
guys, found my problem, there was a typo! host => should be hosts
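(So the working output block from n0m3d's earlier snippet would look like this; "myip" stands in for a real address:)

```conf
output {
  elasticsearch {
    hosts => ["myip:9200"]   # note the plural "hosts", which takes an array
  }
}
```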
Gen joined the channel
kepper joined the channel
nomadic_travelle has quit
radiocats joined the channel
radiocats has quit
Xylakant has quit
radiocats joined the channel
phutchins has quit
kepper has quit
Xylakant joined the channel
geek_cl joined the channel
jchoinski has quit
Rumbles joined the channel
Xylakant has quit
pawnbox_ has quit
pawnbox joined the channel
nat2610 joined the channel
pfallenop joined the channel
pawnbox has quit
Joel joined the channel
Joel
[sqs_message][pseudoReq][auditReqInfo][userSettings][attributes] -> How can I make sure this exists all the way down the path without a bunch of nested ifs?
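(One approach, assuming Logstash conditional semantics where a reference to a missing nested field evaluates as false: a single conditional on the full path avoids the nested ifs. Caveat: a field holding false or an empty string also evaluates as false.)

```conf
filter {
  # the whole path is checked in one conditional; if any intermediate
  # level is missing, the condition is simply false
  if [sqs_message][pseudoReq][auditReqInfo][userSettings][attributes] {
    mutate { add_tag => ["has_user_attributes"] }   # illustrative action
  }
}
```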
pfallenop has quit
pfallenop joined the channel
cwyse joined the channel
flargnarf has quit
Rumbles has quit
techminer has quit
vangap has quit
jakinov_ joined the channel
kepper joined the channel
pfallenop has quit
pfallenop joined the channel
JonnyV has quit
hugh_jass has quit
jakinov_
Hey, has anyone here parsed multiline logs before where each line shares a certain ID (e.g. auditd logs)? I found a ticket on it that was marked as resolved last year... but I can't seem to find any info.
hugh_jass joined the channel
allball
Joel: have you tried running multiple instances of logstash?
vangap joined the channel
Someone asked me what happens when my logstash instance shuts down. Mine doesn't. Is not shutting down a new feature?
rem5 has quit
torrancew
jakinov_: use the multiline FILTER (not codec), parse ID out into field, use that as part of stream_id for filter
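(A sketch of torrancew's suggestion, with an illustrative grok pattern for pulling the shared auditd event ID out of each line; stream_identity then groups lines by that ID instead of the default host/path/type stream:)

```conf
filter {
  # extract the shared audit event ID first (pattern is illustrative)
  grok {
    match => { "message" => "audit\(%{NUMBER:ts}:%{NUMBER:audit_id}\):" }
  }
  # then collapse lines sharing that ID into one event
  multiline {
    pattern         => "."                       # match every line
    what            => "previous"
    stream_identity => "%{host}.%{audit_id}"     # lines with the same ID join the same stream
  }
}
```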
rem5 joined the channel
JonnyV joined the channel
rem5 has quit
rem5 joined the channel
rem5 has quit
rem5 joined the channel
sndcrb has quit
cwyse joined the channel
jakinov_
torrancew: I'll try that out thanks
Schwarzbaer joined the channel
denisra joined the channel
cyborg-one has quit
geek_cl joined the channel
spooker_ joined the channel
dancleinmark has quit
dancleinmark joined the channel
allball has quit
denisra has quit
tlunter has quit
tlunter joined the channel
flargnarf joined the channel
dancleinmark has quit
Gen has quit
jminck joined the channel
kepper has quit
platrelefs joined the channel
mpietersen has quit
platrelefs
hey guys. having some trouble converting to a float. is there something wrong with this syntax?
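(platrelefs's snippet didn't make it into the log; for comparison, a float conversion with the mutate filter typically looks like this, where "response_time" is a hypothetical field name:)

```conf
filter {
  mutate {
    convert => { "response_time" => "float" }   # hypothetical field name
  }
}
```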