These are the contents of one of my pipeline conf files
and there is data in the kafka if I go back to the beginning
Xylakant
please use a gist/pastebin/... for pastes
darkmoonvt
You aren't getting data, or you are getting data but it's not a geo_point?
jamiguet
Xylakant apologies, n00b here
darkmoonvt -> nothing at all in the elasticsearch index
Xylakant
jamiguet: auto_offset_reset does not always start at the beginning of the kafka topic
it only starts at the beginning when no previous consumer position is known
so if you ran that pipeline once and haven't added data to kafka (or reset the offset), this will do nothing
i'd also add a stdout output to the pipeline to see if data is actually incoming
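For reference, a minimal debugging pipeline along those lines might look like this (the broker address, topic, and index names are placeholders, not taken from the paste):

```text
# hypothetical logstash pipeline sketch for debugging; adjust names to your setup
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics            => ["my-topic"]   # placeholder topic name
    auto_offset_reset => "earliest"     # only applies when no committed offset exists
  }
}
output {
  stdout { codec => rubydebug }         # confirm events are actually arriving
  elasticsearch { index => "my-index" } # placeholder index name
}
```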
jamiguet
Xylakant: I set up a console consumer on the topic and there is no new data, so that may be a possibility
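A console-consumer check like that looks roughly like the following (broker and topic names are placeholders, and this needs a running broker):

```shell
# read the topic from the very beginning to see whether any data exists at all
kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic my-topic \
  --from-beginning
```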
Xylakant
jamiguet: you'll also need to provide a suitable index template. the default logstash templates only apply to indices named logstash-something
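A sketch of such a template, assuming a hypothetical index pattern my-index-* and a field named location (exact syntax varies by elasticsearch version; on 6.x the properties are nested under a mapping type name such as doc):

```text
PUT _template/my-index-template
{
  "index_patterns": ["my-index-*"],
  "mappings": {
    "doc": {
      "properties": {
        "location": { "type": "geo_point" }
      }
    }
  }
}
```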
jamiguet
on the other side I am getting a stack trace in elasticsearch:
[2017-11-28T16:17:51,667][ERROR][o.e.x.m.e.l.LocalExporter] failed to set monitoring watch [gUywU2YuTy2a8nPQHviqdQ_xpack_license_expiration]
java.lang.IllegalArgumentException: Rejecting mapping update to [.watches] as the final mapping would have more than 1 type: [watch, doc]
Xylakant
and they expect the geo_point field to have a specific name pattern
hm. that looks watcher-related and like a bug in watcher
ok, if you specifically provided a mapping when you created the index, that would work fine
it's just fairly uncommon for logstash setups
jamiguet
I guess I started the wrong way around
It looks like there is no new data in Kafka, which is why it is not being picked up. I will see with my colleagues why there is no data flowing.
Xylakant
you can manually reset the offset for the logstash consumer
jamiguet
oh how?
Xylakant
I don't know about the latest kafka versions, but in 0.8 you could basically remove the consumer group from zookeeper
I'd assume that in the later versions you need to go via the kafka broker, but that's pretty much exactly what the client does: it periodically sends the last offset it processed to kafka/zk
and you can use the cli tooling for kafka to send arbitrary values, for example "0" as the offset
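On newer kafka versions (0.11 and later) the bundled CLI can reset a consumer group's offsets directly; the group and topic names below are placeholders, and this requires a running broker:

```shell
# rewind the consumer group to the earliest available offset on the topic
kafka-consumer-groups.sh \
  --bootstrap-server localhost:9092 \
  --group logstash \
  --topic my-topic \
  --reset-offsets --to-earliest \
  --execute
```

Run it with --dry-run instead of --execute first to preview the new offsets before committing them.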