Is there any way, manually or otherwise, to change the shard replication factor for old shards?
I started a dataset on a single host with replication factor 1 and would now like to bump it up to 3. I understand that I can do this for future shards, but that the old data will not be copied
digitalmentat
I'm getting a "cannot find column `time`" error returned
but the time column is definitely there when I do a `SELECT *` without a WHERE clause
how can I run my selects with a time range specified?
a few timestamps were written that were incorrect coming from the sensor (they come out to 1970...)
but influx should be able to handle those, no?
okay, I seem to get something better when I use comparisons against the microsecond stamp (instead of a date string)
this is remarkably frustrating
AHHHH!
so much wtf
the InfluxDB interface *sometimes* queries correctly when I give it an epoch int in the WHERE clause
for other series, only a `time > '2014-11-20'` stamp works
and the python influxdb client only works for some series, and for others that seem borked none of the above work at all
:(
the python client with `time > 1416958140000` (which is a timestamp returned from the influx web client)
returns ALL times
:-/
I'm really confused about the behavior
I hope someone can help me understand better
yeah, I'm specifying time in milliseconds and it's returning ALL rows for all times in the series
btashton
Have you explored the data using the web admin interface? I have not had any issues with time range queries using the python interface
digitalmentat
yeah I'm doing both
a `WHERE time >` with a millisecond epoch timestamp is producing results with times below that
pauldix
digitalmentat: the timestamps are stored in microsecond scale under the hood. However, when writing or querying data, you can specify whether you're feeding in at s, ms, or u scale
digitalmentat: if doing a query like `select * from foo where time > 234232343423s`
basically, if it's a number in the query, put the precision at the end
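pauldix's suffix rule can be sketched as two versions of the same query string (a hedged illustration: the series name and timestamp are taken from the conversation, and the microsecond default is as pauldix describes; everything else is illustrative):

```python
# Two versions of the same range query against an InfluxDB 0.8-era server.
series = '"1e60ac62-2f78-484f-9aea-29f2d938a8eb.Temp.readings"'
ts_ms = 1416516812000  # epoch milliseconds (late November 2014)

# Filters as intended: the trailing 'ms' marks the literal as milliseconds.
with_suffix = 'SELECT value FROM %s WHERE time > %dms' % (series, ts_ms)

# Returns everything: with no suffix the bare integer is read at
# microsecond scale, and 1416516812000 microseconds is mid-January 1970,
# so every row in the series is newer than that threshold.
without_suffix = 'SELECT value FROM %s WHERE time > %d' % (series, ts_ms)
```

This also explains the "returns ALL times" symptom: the unsuffixed millisecond value lands in 1970 when read as microseconds.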
digitalmentat
I did not catch that; I'll see what that does. I've tried specifying `time_precision='...'` in the python client to little avail, though
oh wow
pauldix
digitalmentat: the issue with time behaving in a special way will be addressed in the next release. Right now, you never need to select time because it's always there
it is always returned
and you MUST select some column other than time to get any data back
digitalmentat
I'm not trying to select it but doing a range has been kind of a pain
my example query: `db.query('SELECT value FROM "1e60ac62-2f78-484f-9aea-29f2d938a8eb.Temp.readings" WHERE time > 1416516812000ms')`
that worked
but
my example query: `db.query('SELECT value FROM "1e60ac62-2f78-484f-9aea-29f2d938a8eb.Temp.readings" WHERE time > 1416516812000', time_precision='ms')`
gave me everything
even rows BEFORE that time where clause
pauldix
the time_precision parameter is only for writes
in queries the precision is specified in the query
so when you do a query where you don't specify precision it will force microseconds. Thus, you'll get everything
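The write/read split pauldix describes can be sketched as follows. The payload shape assumes the 0.8-era influxdb python client's list-of-dicts format; the connection settings and the sensor value are made up for illustration:

```python
# time_precision is a write-side parameter: it declares the scale of the
# timestamps in the payload being written (here, milliseconds).
points = [{
    "name": "1e60ac62-2f78-484f-9aea-29f2d938a8eb.Temp.readings",
    "columns": ["time", "value"],
    "points": [[1416516812000, 21.5]],  # ms timestamp; value is made up
}]

# With a live client (hypothetical connection settings):
#   from influxdb import InfluxDBClient
#   client = InfluxDBClient('localhost', 8086, 'root', 'root', 'sensors')
#   client.write_points(points, time_precision='ms')  # writes: parameter
#
# For reads, the precision rides in the query text instead:
#   client.query('SELECT value FROM '
#                '"1e60ac62-2f78-484f-9aea-29f2d938a8eb.Temp.readings" '
#                'WHERE time > 1416516812000ms')
```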
digitalmentat
the docs, though, at least on the `master` branch, are not correct about this; I'll open a ticket for it
pauldix, okay, this helps
pauldix
sadly, we haven't had the resources to develop the client libraries
digitalmentat
pauldix, what about queries where I type in a string date? some work and some don't
pauldix
they've all been contributed by the community, so we don't yet have consistency in interfaces or documentation
digitalmentat
that's fair (btw I think the influxdb python client -> pandas is AMAZING)
pauldix
but, for anyone interested, we're hiring, and standardizing and documenting all the client libraries is one of the TODOs
digitalmentat
it saved my ass when I had to transform my entire schema from a column-based one to a "series.attribute.attribute" style schema
pauldix
cool
digitalmentat
can you make sure the Python version has something for Pandas? It's seriously amazing
pauldix
for sure
digitalmentat
I contributed a Pandas interface lib to TempoDB before they went enterprise, so I'm really in love with that type of interface
pauldix, thank you for explaining to me, I appreciate it
pauldix
np, sorry for the frustration
digitalmentat
it's okay: A) I understand it's open source and I'm thankful for that (we would be dead in the water if it wasn't), and B) I know you're a startup, so everything's improving