Jay: yeah, derivequeries on the field that contains the page
rdw200169, ah, I see. not sure it will really be too much of an issue either way. I only have ~2GB of logs.
mbelaninja
and what is the most lightweight shipper option?
devOpsEv
mbelaninja: lumberjack
not logstash
mbelaninja
i figured as much with the jvm and all...
thx devOpsEv
failshell joined the channel
devOpsEv
yeah, you can find lumberjack on google, just search 'lumberjack logstash'
mortini
beaver is also fairly lightweight, but needs python
devOpsEv
whack wrote that one, too
er, not beaver... I dunno about beaver, I was referring to LS
er
lumberjack
gah
mortini
heh, yeah
devOpsEv quits life
mortini goes back to obscurity
dpippen
*watching from the shadows*
mortini
stalker!
szhem joined the channel
jm-ck joined the channel
josephho_ joined the channel
rustyrazorblade joined the channel
josephho_ is now known as josephholsten
dpippen
*bamf! disappears in a tiny puff of smoke*
nstielau has quit
ohlol joined the channel
gazarsgo joined the channel
Jay
devOpsEv: Thanks, it works. In this case, I have messages like this in the log file, each on a separate line: "130338130830-INFO: Search query: apple", "130338130830-INFO: Search query: ((apple AND iphone))", "130338130830-INFO: Search query: ((_search_y:2013) AND (apple AND Iphone))". If I have to use Kibana to find the 5 most-searched keywords for that day, is it possible through derived queries or something?
blalor has quit
devOpsEv
yes, but you probably need to analyze the search query field in ES with a mapping
so that queries are tokenized
what I mean is, you don't care that apple AND iphone are done on the same search query from your users, right? you just want a total of most popular keywords?
that is to say, you don't need to know that apple and iphone were in the same query in Kibana, necessarily, right?
Jay
Yes thats right, i just want the most popular keywords
devOpsEv
yeah
Jay
in this case apple 3 and iphone 2
devOpsEv
so you probably want to tokenize those fields in ES
hm
ohlol has quit
yeah, you want to tokenize that field
and kibana will give you results for each individual term in that string
so apple AND iphone would actually count apple and iphone separately
you'd just need a "-AND" in the Kibana query so it doesn't also count the 'and' in each event
blalor_afk joined the channel
hope
er
I hope I'm making sense
blalor_afk is now known as blalor
there are a couple different queries and terms and systems that I'm talking about, I understand if it's a bit ambiguous what I'm saying
kjstone00 joined the channel
nstielau joined the channel
bscofield has left the channel
Jay
ok.. Got it.. I have not tokenized it since this occurs in a log with all the other messages, which follow a different pattern; the only tokens we use are time-severity: message, just 3. Even if I could get "apple and iphone" 2 and "apple" 1, that's ok
devOpsEv
in ES, you can set a mapping for that specific type, for that specific field, so that only that field is tokenized
then you don't have to worry about all other messages
bodik
help
ohlol joined the channel
devOpsEv help
sorry
devOpsEv helps
devOpsEv
**
bodik
wrong window ;)
devOpsEv
hehe
oh
bodik
but thanks anyway
devOpsEv
in that case
Password:
chandler1
anyone use OSSEC with logstash?
Jay
Oh ok, how do I do that for a specific field?
also, is there a way to apply regex patterns in the derived queries? I see that * works ..
devOpsEv
Jay, you have to define the mapping in a json query and use curl to send it to ES, check the elasticsearch guide at www.elasticsearch.org for more info
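[A sketch of what such a mapping might look like. The index, type, and field names here are hypothetical placeholders, not from the chat; "analyzer": "standard" tokenizes the query field so Kibana counts apple and iphone separately, while "not_analyzed" keeps the other field whole. It would be sent with something like `curl -XPUT 'http://localhost:9200/logstash-2013.08.30/search_log/_mapping' -d @mapping.json` — check the elasticsearch guide for the exact API for your version.]

```json
{
  "search_log": {
    "properties": {
      "query": {
        "type": "string",
        "analyzer": "standard"
      },
      "message": {
        "type": "string",
        "index": "not_analyzed"
      }
    }
  }
}
```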
Xian
match => [ "syslog_timestamp", "_ISO8601" ] how would I make this work??
devOpsEv
Xian, is that for a date filter?
Xian
yes
devOpsEv
Xian, it's not working now?
Xian, for starters, _ISO8601 isn't an option there, have to remove the _
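[With the underscore dropped, the filter from Xian's snippet would look something like this; the field name is taken from the line above, everything else is the stock date-filter shape.]

```
filter {
  date {
    match => [ "syslog_timestamp", "ISO8601" ]
  }
}
```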
Jay
devOpsEv: Thanks .. and do derivequeries support regex?
Xian
typo....
devOpsEv
Jay: everything in Kibana uses Lucene query syntax
Xian
I have the same issue chandler1 had earlier with the timezones being off
abe has quit
devOpsEv
Xian, I believe it will put them in UTC
Xian
it does put everything in UTC, this is normal ?
devOpsEv
yeah
Xian
good enough for me
rashidkpc
yeah, still working on that, it's complicated
rustyrazorblade has quit
it's one of those things where the easy fix will just be a huge headache later
josephholsten has quit
Jay
devOpsEv: Thanks
logstashbot: Thanks for the link
logstashbot
Jay: Error: "Thanks" is not a valid command.
rustyrazorblade joined the channel
GregMefford joined the channel
devOpsEv
hehe
josephholsten joined the channel
rashidkpc, regex or timezones?
brdude
Is there any way to ship all the logs logstash is getting, before any changes are made to them, to another logstash box?
devOpsEv
brdude: yeah, just don't put any filters in your config
brdude: probably your best bet is redis output and setup redis on the receiving box, then use the redis input on logstash on that box
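[A minimal sketch of that redis hand-off. The receiver hostname is a made-up placeholder; the `key` and `data_type` just have to match on both sides.]

```
# on the shipping box
output {
  redis {
    host      => "receiver.example.com"   # hypothetical receiving box
    data_type => "list"
    key       => "logstash"
  }
}

# on the receiving box
input {
  redis {
    host      => "127.0.0.1"
    data_type => "list"
    key       => "logstash"
  }
}
```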
brdude
devOpsEv: I still want the original box to do its thing with the logs. I just want to have the untouched data to test out logstash 1.2 without going to the actual boxes and making them ship to a second logstash box
chandler1
so you want to process locally with logstash, but also send the logs off somewhere else before hand?
brdude
yes
chandler1
how is the local logstash inputting the logs?
brdude
some tcp and some rsyslog
GregMefford
Couldn't you set up Redis to use a Pub/Sub channel and just have multiple subscribers to it?
brdude
all TCP actually
chandler1
with your current setup, you could just have rsyslog listen for logs, then send them to two destinations, one of them being your local logstash
you'd have to change your logstash input ports probably
Jay has quit
Xian
any good references for a cisco filter
GregMefford
Xian: I actually joined the channel to ask about whether there would be interest in some Cisco Grok patterns I've developed.
Xian
@GregMefford yes that would be awesome
brdude
Thanks guys. Will look more into it and report back
chandler1
so you have rsyslog listen on 514, then one of your *.* destinations is localhost:50000 or something, and logstash listens on port 50000
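[In rsyslog terms that might be sketched as below. Hostnames and ports are illustrative; `@@` forwards over TCP (a single `@` would be UDP), matching brdude's all-TCP setup.]

```
# /etc/rsyslog.conf (sketch)
*.*  @@localhost:50000                   # copy to the local logstash tcp input
*.*  @@logstash-test.example.com:50000   # copy to the 1.2 test box
```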
GregMefford
I have patterns to match just about every message that I've seen coming from my ASA firewall and I was wondering whether I should contribute them to https://github.com/logstash/logstash/blob/maste... ?
GregMefford: I don't see any harm in submitting a pull request ;)
GregMefford
I was hoping someone could give feedback on how to properly package and test a set of Grok patterns, because it's working for me, but I feel like it would be more maintainable for others if there were some test cases or something.
mbelaninja has quit
samdoran
GregMefford: I would love to get a copy of those since I'm planning to pull logs from my ASAs in the very near future (as well as all my network equipment).
devOpsEv
brdude: there is a clone filter, which can clone an event and then you can tag that event so no other filters work on it, then set an output to only send events tagged with the tag, if that makes sense
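[A rough sketch of that clone-and-tag idea. The tag, host, and pattern are made up for illustration, and the `exclude_tags` filter option and `tags` output option are the 1.1.x-era way to scope filters and outputs; verify both against the docs for your logstash version, including whether `add_tag` on the clone filter applies to the cloned event as assumed here.]

```
filter {
  clone {
    clones  => [ "raw" ]
    add_tag => [ "raw_copy" ]            # assumption: tag lands on the cloned event
  }
  grok {
    exclude_tags => [ "raw_copy" ]       # leave the clone untouched
    pattern      => [ "%{SYSLOGLINE}" ]
  }
}

output {
  tcp {
    tags => [ "raw_copy" ]               # only ship the untouched copies
    host => "logstash-test.example.com"  # hypothetical 1.2 test box
    port => 5000
  }
}
```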
brdude
devOpsEv: that actually sounds like the way to go. Thanks.
devOpsEv
GregMefford: the packaging is literally how they appear in the github, just a flat text file with PATTERN NAME (so.*me|re|gex[pattern]{0,3})
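[So a patterns file is just lines of `NAME regex`, one per line, with patterns able to reference each other via `%{NAME}`. The two patterns below are made-up illustrations in the rough shape of ASA syslog tags, not GregMefford's actual patterns; a grok filter would pick the file up through its `patterns_dir` option.]

```
# patterns/cisco (flat text file: NAME, a space, then the regex)
CISCO_SEVERITY %{INT:severity}
CISCO_TAG %%ASA-%{CISCO_SEVERITY}-%{INT:message_id}
```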
GregMefford
samdoran, Xian: Ok, I'll work on getting a pull request together (probably this weekend). LogStash has really worked wonders for making sense of ASA logs.
thansen
using the json filter, what if I want the values to be put in the @fields array just like 'kv' filter?
chandler1
devops did you ever find the solution for source_host = ip?
thansen
what would I use as target? or leave it blank?
chandler1
i see there is a beta DNS filter
devOpsEv
GregMefford, if your logs are standard for your ASA firewall then they're probably fine - most people pick apart the patterns and just use the bits they need if they have custom logs and such
dyer has quit
samdoran
GregMefford: That's very good to hear.
devOpsEv
thansen: they should automatically end up keyed to @fields
thansen, if not you may need to specify target => "@fields"
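[Put together, a sketch of that json filter usage. The `source` field name is an assumption (a field holding the raw JSON string); per the line above, `target` should only be needed if the keys don't end up in @fields on their own.]

```
filter {
  json {
    source => "message"     # assumed: field containing the JSON string
    target => "@fields"     # only if the keys aren't keyed there automatically
  }
}
```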