eh, lunchtime I'll wander back later with a diagram
rastro
aspeer: why do you need multiple certs?
dpp
I'm working with 1.5.0b1, and the entire system seems to deadlock from time to time. No logged errors... just... <frozen>
any ideas or known problems?
I'm running tcp inputs, and statsd and redis outputs... nothing more.
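For readers following along, a minimal sketch of that pipeline shape (ports, hosts, and keys here are placeholders I made up, not dpp's actual settings):

```
input {
  tcp { port => 5000 }                  # placeholder port
}
output {
  statsd {
    host      => "localhost"            # placeholder statsd host
    increment => ["logstash.events"]
  }
  redis {
    host      => "localhost"            # placeholder redis host
    data_type => "list"
    key       => "logstash"
  }
}
```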
double-p_
dpp: high CPU while frozen? what's the max-jvm-size?
dpp
I'm not getting out-of-heap errors.. lemme check CPU. I didn't get alarms; those are set at 80%.
-Xmx2048m -Xss2048k
yeah, no crazy cpu spikes when they deadlocked
crazy high now that I'm processing the logs they died on
I have 3 servers in a loadbalanced setup, so turned on -v on one of them
chandlermelton
you tried running vmstat?
maybe cpu wait or something
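For reference (my gloss, not something chandlermelton spelled out), a typical invocation samples once a second:

```shell
# sample system stats once a second, five samples; watch the "wa"
# (I/O wait) and "id" (idle) columns in the cpu section
vmstat 1 5
```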
dpp
ummm.. unlikely in this setup, but I have the data.
chandlermelton
:\ an LS profiler would be sweet
could show which filters are causing highest resource usage / processing time
dpp
yes!
or just export it to statsd/graphite!
whack
chandlermelton: that's coming, not sure when though
chandlermelton
ya, that's one of those pie-in-the-sky kinda things
dpp
the deadlock doesn't correspond to cpu/load. no cpu wait stats at all. no physical memory exhaustion.
chandlermelton
do most of your logs go through the same filters? maybe one of them isn't playing nice in 1.5 and breaks when it gets the first log that uses it
whack
dpp: how are you observing the freeze?
dpp
I send statsd messages as an output, and those counters stop growing. The process doesn't exit...
trying to confirm that we're not running out of sockets.
my current TCP connections count flatlines around then, but I have had it deadlock without that being the case as well.
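A quick way to check for socket exhaustion (my suggestion, not a command from the channel) is to compare the live connection count against the file-descriptor limit:

```shell
# rough socket-exhaustion check: count established TCP connections
# and compare against the per-process open-file limit
conns=$( (ss -tan 2>/dev/null || netstat -tan 2>/dev/null) | grep -c ESTAB )
echo "established: $conns (fd limit: $(ulimit -n))"
```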
whack
dpp: so nothing is being output, that can mean many things
it could mean another output is stuck
dpp
sure. the other output is redis. I'm not specifying workers, so I assume just one thread.
whack
it could mean nothing is coming into an input
"nothing being output" requires all outputs not be stuck and events actually coming into those outputs (From filters and inputs)
dpp
I know connections are being offered. What other visibility is available?
my one other clue is that three servers all choke at the same time.
chandlermelton
have you tried the --debug switch?
dpp
I have not!
chandlermelton
run LS from the CLI (not as a service), point it at your conf with -f, and add --debug
see where it stops
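In other words, something like this (the config path is an example, not dpp's actual layout; --debug is far noisier than -v, which only adds verbose-level messages):

```
# run in the foreground against the existing config, with debug logging
bin/logstash agent -f /etc/logstash/conf.d/pipeline.conf --debug
```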
dpp
how much info does -v add to things?
chandlermelton
idk
dpp
will run with --debug the next time it locks up.
jsandoval
Does anyone have a good doc on how to set up a bettermap in Kibana?
I would like to show all the client IPs in the bettermap.
double-p_
jsandoval: well, you need to convert the ip-addr into geoip.coordinates
jsandoval
hrmm
I see, I thought kibana did that for me.
guess not
Any good docs out there?
double-p_
no, but logstash happily will :)
jsandoval
??
I am running ELK
dpp
whack, what insight do you have for figuring out where the messages stop?
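One generic JVM answer (my suggestion; nobody in the channel mentions it) is a thread dump of the frozen process: each pipeline thread's stack shows whether it is parked in an input, a filter, or an output.

```
# jps and jstack ship with the JDK; <pid> is the logstash JVM's pid
jps -l                      # find the pid
jstack <pid> > threads.txt
# without JDK tools: kill -3 <pid> writes the dump to the JVM's stdout
```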
double-p_
jsandoval: within filter {} of the logstash config: http://pastie.org/9987917 — you likely need to adapt the field name 'src_ip' and the path to the geoip-db
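In case the paste rots, a geoip filter of that vintage looks roughly like this sketch (the field name src_ip and the database path are assumptions to adapt, per double-p_'s note; the filter then adds geoip.* fields, including the coordinates a Kibana bettermap wants):

```
filter {
  geoip {
    source   => "src_ip"   # adapt: the field holding the client IP
    # adapt: path to your GeoLiteCity database file
    database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat"
  }
}
```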