Wow! there are a lot of people in here =) good work!!
Is there a maintainer in here?
robstory joined the channel
tilgovi has quit
karissa
vespakoen: yeah, usually during the day!
vespakoen: although mafintosh lives in denmark and he's sometimes on at night
vespakoen: well, I guess I don't know if you're in the US or not, so maybe I'm backwards
vespakoen
I am in Berlin, and I am planning to skip this night and get into the "normal people" sleeping rhythm for the rest of the week =)
was still in "weekend sleeping rhythm" haha, anyways, thanks for the info!
I am actually writing an "issue" right now that contains the questions I have, guess that will work just fine ;)
karissa
vespakoen: okies :)
vespakoen
karissa, do you know which node version I need to use for dathub?
nvm, 0.10 works =)
karissa
vespakoen: great. you're checking out dathub?
vespakoen: i've been neglecting it recently to work on dat itself
vespakoen
I am =) going to visit :5000 as we speak
cool!
karissa
vespakoen: cool. it's pretty bare bones atm but in a good way :)
vespakoen: i hope..
vespakoen: ;)
vespakoen
Do you know what dat's stance is on storing data in multiple backends?
or if there's work related to that going on somewhere?
I mean, I have seen some scripts that take dat's stream and pipe it into elasticsearch for example, but are there plans to support this kind of functionality within dat in a "live" way?
I am very interested in that topic (working on it myself in another project) and am looking to instead bring those efforts into dat
karissa
vespakoen: great question
vespakoen: dat has the ability to use a different backend via abstract-leveldown
I am aware of that, but in that case you are limited to only 1 storage backend (unless there is some package that provides storing into multiple leveldb backends)
karissa
vespakoen: yeah. you'd have to make multiple dats right now
vespakoen
besides, a lot of backends require some sort of schema, which will require some more metadata
here is the project I am working on http://www.github.com/trappsnl/dstore (note: very alpha, contains bugs, is not what I want it to be yet, a lot of new work exists only on my computer at the moment)
karissa
vespakoen: oh nice!
vespakoen
the idea is, you define a schema beforehand, then "tag" the schema, which will basically create database tables, elasticsearch index type mappings etc.
then you store data with a "unified" input format, which will then get "serialized" for the specific backends
for example, this input: {"id": 1, "pin": {"type": "Point", "coordinates": [50, 5]}} will translate into {"id": 1, "pin": [50, 5]} for elasticsearch, and {"id": 1, "pin": st.geomFromGeoJSON({"type": "Point", "coordinates": [50, 5]})} for postgresql (this could also have been binary data for postgis to make it faster, but you get the point =)
Besides, whenever the schema gets updated, I diff it against the previous one, find all changes, and store those in a log. That allows me to then "apply" these changes to data, transforming it into a form compatible with a newer or older version. This is quite hairy to implement since there are some edge cases, but in theory it's possible to work out correctly
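The per-backend serialization vespakoen describes can be sketched roughly like this (function and backend names are illustrative assumptions, not the actual dstore API):

```javascript
// Hypothetical sketch: serialize a "unified" input document into a
// backend-specific shape. Names here are illustrative, not dstore's API.
function serializeForBackend(doc, backend) {
  const out = {};
  for (const key of Object.keys(doc)) {
    const value = doc[key];
    if (value && value.type === 'Point' && Array.isArray(value.coordinates)) {
      if (backend === 'elasticsearch') {
        // Elasticsearch geo_point accepts a bare coordinate array
        out[key] = value.coordinates;
      } else if (backend === 'postgresql') {
        // PostGIS can parse the GeoJSON directly
        out[key] = "ST_GeomFromGeoJSON('" + JSON.stringify(value) + "')";
      } else {
        out[key] = value;
      }
    } else {
      out[key] = value;
    }
  }
  return out;
}

const input = { id: 1, pin: { type: 'Point', coordinates: [50, 5] } };
console.log(serializeForBackend(input, 'elasticsearch'));
// → { id: 1, pin: [ 50, 5 ] }
```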
It is something that I very much would like to get working one day, and I hope I can bring it into "dat", or at least make it easy to work together with dat =)
mafintosh: is there a cryptographically signed version of hyperlog yet?
floppy1 has quit
floppy joined the channel
floppy has quit
AndreasMadsen joined the channel
floppy joined the channel
mafintosh
substack: i have it in a local branch. i'll push it online today
substack
:o
mafintosh
substack: it still needs some polishing/optimization though (which is why i haven't published it) but it works
substack
I'm curious about the interface you've settled on
I was thinking of having a "trust" feed alongside a signed log
floppy1 joined the channel
mafintosh
substack: it doesn't use internal logs anymore - just hash links
vespakoen has quit
since that takes away some vulnerabilities in regards to log spoofing / reuse of sequence numbers
vespakoen joined the channel
floppy has quit
substack
the thing I want to build on top of a signed log is an application log for signed builds
substack
where the user would have a signed log of "trust" operations where they implicitly trust an application on first use and add the publisher to the ring for that application
and users can grant/revoke trust to the trust log (with a hyperlog-index on top)
and then the application feed is published as a signed log, where developers sign their commits but can also set other parties as trustworthy
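substack's trust feed could be sketched as an append-only log of grant/revoke operations, with an index replaying it to compute current trust (record shapes and function names here are assumptions for illustration, not an existing hyperlog API):

```javascript
// Hypothetical append-only trust log: each entry grants or revokes trust
// for a publisher key, scoped to an application. Illustrative only.
const trustLog = [];

function grant(appId, publisherKey) {
  trustLog.push({ op: 'grant', app: appId, key: publisherKey });
}

function revoke(appId, publisherKey) {
  trustLog.push({ op: 'revoke', app: appId, key: publisherKey });
}

// Replaying the log (what a hyperlog-index would do incrementally)
// yields the current trust state: the last matching op wins.
function isTrusted(appId, publisherKey) {
  let trusted = false;
  for (const entry of trustLog) {
    if (entry.app === appId && entry.key === publisherKey) {
      trusted = entry.op === 'grant';
    }
  }
  return trusted;
}

grant('my-app', 'publisher-key-1'); // trust on first use
console.log(isTrusted('my-app', 'publisher-key-1')); // → true
revoke('my-app', 'publisher-key-1');
console.log(isTrusted('my-app', 'publisher-key-1')); // → false
```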
mafintosh
substack: nice - the signing interface right now just adds an elliptic curve public key to your graph node and then it signs the node with the private key
substack
what about verification?
mafintosh
substack: and there is a hook where it's up to the user whether or not to trust a public key
substack: it just adds a .verified=true|false property currently
substack: and the node contains the public key
so you can decide if you trust that public key
it probably should just reject the node if the signature fails - i'll change that i think
substack
what about the use-case of multiple parties that can publish to the same feed?
mafintosh
that should still work
substack
like an npm module with multiple maintainers
mafintosh
substack: since the key is embedded with every node you can have multiple keys in the same feed
substack
with every node?
mafintosh
substack: currently yes since the elliptic curve keys are small
but i'm open to suggestions
substack
if there are multiple keys would there be multiple keys in each node?
or just the signing key?
mafintosh
substack: a node currently can only have one key
but you can have a node that references another node with a different key
substack
ok I think that will work for what I have in mind
mafintosh
nice
substack: next version is also gonna add support for removing head nodes from the graph
don't know if that is useful for your usecase
substack
removing them without linking to an existing head?
I don't think it will affect my current use-case but it sounds handy