is it possible from the udp output module to send only a specific field?
volter
sds__: Perhaps if you clone the event and cut it down
sds__
volter: interesting, didn't know about the clone module
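A minimal sketch of the clone-and-prune approach volter suggests, assuming the clone filter marks the copy via its type and that only the message field should go out over UDP (host, port, and field names here are illustrative, not from the conversation):

```conf
filter {
  clone {
    # the cloned event gets its type set to the clone name;
    # the original event is left untouched
    clones => ["udp_copy"]
  }
  if [type] == "udp_copy" {
    prune {
      # keep only the fields the UDP receiver needs
      whitelist_names => ["^message$", "^type$"]
    }
  }
}

output {
  if [type] == "udp_copy" {
    udp {
      host  => "10.0.0.1"   # hypothetical receiver
      port  => 5514
      codec => plain { format => "%{message}" }
    }
  }
}
```

Note that if only the UDP output needs trimming, the `codec => plain { format => "%{message}" }` part alone may be enough without cloning, since the codec controls what this output emits while the full event still reaches any other outputs.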
gimmic_
can I use wildcards in field matching?
if [fields][type] == "syslog-*"
Looks like maybe this will work: if [fields][type] =~ /^syslog/
volter
gimmic_: I don't think there's any globbing for now. Regex certainly works.
tgodar
sigh, I've never gotten Filebeat to come through correctly: the datestamp is when it was imported, and the whole thing is just 'message'... these are just web server logs, you'd think it would be easy. Not sure whether to try and actually make it work or upgrade and start over; I think I'm a version behind
Which has pretty much been my experience with ELK every time... and I started with LSF and other older things...
speedlight
hi great people! I'm learning how to read the slow-query CSV file in my ELK stack, what do you recommend to parse it correctly?
rastro
tgodar: how would filebeat know what time your log entry is from, vs using the ingest time?
speedlight
mariadb slow query file sorry
tgodar
rastro, I thought at some point I'd told it to treat the thing as an apache log file
I'll try setting up the filter again see how it goes. I'm on 2.3.4 now?
probably should look at upgrading, 5th time is the charm
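For reference, the usual filter for the symptoms tgodar describes (everything landing in 'message', @timestamp set to ingest time) is grok plus date; a sketch assuming combined-format Apache logs, untested against his exact setup:

```conf
filter {
  grok {
    # split the raw line into named fields
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    # replace @timestamp with the time parsed from the log line itself
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
```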
rastro
no lack of opportunities to upgrade, for sure...
tgodar
well I'd have abandoned if it was not improved each time also...
but yeah, I'm in a pissy mood today ;)
gimmic_
volter: know if regex is going to be significantly more computationally expensive?
I'm moving from a direct match ( == ) to a regex match =~ /^syslog/
so it'll check against every event
I wonder what the performance hit will be..
doing the same thing on my output now
tgodar
"never use regex where a string function will do" is usually a good rule of thumb
rastro
gimmic_: but stuff like anchoring your regexp will help, and being concise...
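To illustrate the trade-off being discussed, three roughly equivalent Logstash conditionals, cheapest first (field values are hypothetical):

```conf
# exact string comparison: cheapest
if [fields][type] == "syslog-auth" {
  mutate { add_tag => ["syslog"] }
}

# membership in a fixed list: still plain string comparisons
if [fields][type] in ["syslog-auth", "syslog-kern"] {
  mutate { add_tag => ["syslog"] }
}

# anchored regex: only has to examine the start of the string
if [fields][type] =~ /^syslog-/ {
  mutate { add_tag => ["syslog"] }
}
```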
gimmic_
another option would be to have a bunch more redis inputs pulling from multiple keys
rastro
gimmic_: that solves your regexp problem? must have missed a redis upgrade :)
gimmic_
hmm?
I mean i could split my log types out earlier using filebeat sending to multiple redis keys(queues)
then on my LS input, have a bunch of different redis inputs
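A sketch of that split, assuming a Filebeat version whose Redis output supports conditional keys (host names and key names are made up for the example):

```yaml
# filebeat.yml: route event types to different Redis lists
output.redis:
  hosts: ["redis.example:6379"]
  key: "logs-default"
  keys:
    - key: "logs-syslog"
      when.contains:
        fields.type: "syslog"
```

```conf
# Logstash: one redis input per list, so no type-matching regex is needed
input {
  redis {
    host      => "redis.example"
    data_type => "list"
    key       => "logs-syslog"
  }
  redis {
    host      => "redis.example"
    data_type => "list"
    key       => "logs-default"
  }
}
```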
rastro
gimmic_: while tgodar is 1000% correct about string vs regexp, you'd probably need a bunch of regexps to really require a rearchitecture like that.
gimmic_
alright. Just more worried about events-per-second rate
if it's parsing 50k EPS, I wonder how much of an impact moving from string to regex I'll see
but the deployment is under heavy development so I don't really have a baseline anyway
volter
gimmic_: Well, you can benchmark that on a purpose-created instance, I guess.
gimmic_
Yeah, I'll add it to the list of to-dos which have a probable timeframe of never
volter
If you are doing _any_ parsing later on, the cost is probably insignificant here.
Well, I guess that could be set up in less than 15 minutes.
But sure, time is time.
gimmic_
mostly dissects
I have one single regex to kill a specific message, but everything else for this dataset is a dissect
now, the rest of the logstash instance is still running some heavier regex on other logtypes
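A sketch of the dissect-plus-one-regex setup gimmic_ describes (the message layout and the drop pattern are hypothetical):

```conf
filter {
  # dissect splits on literal delimiters, so no regex engine is involved
  dissect {
    mapping => {
      "message" => "%{ts} %{+ts} %{src_host} %{program}: %{msg}"
    }
  }
  # the one remaining regex, used only to discard a specific noisy message
  if [msg] =~ /hypothetical noisy pattern/ {
    drop { }
  }
}
```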