#dat


      • jhand
        SoniEx2: the other option would be to store the metadata in a folder you can write to, but not sure if that is an option in the command line currently
      • SoniEx2
        or you could write metadata once and then allow read-only operation
      • dat share --write-metadata-and-exit and dat share --read-only
      • jhand
        SoniEx2: ah ya that seems like a good path, feel free to open an issue for that feature
      • jimpick
        Anybody want to beta-test the Dat podcast that bret (@uhyeahbret on twitter) and i recorded today?
      • also dat://dat-cast.hashbase.io/
      • bret
        πŸ‘ŒπŸ™πŸ‘
      • jimpick
        we set up a #datcast channel as well
      • the first episode is mostly meta stuff ... we just wanted to ship it before the L.A. peer-to-peer web event
      • bret
        Would be fun to figure out the semantics around a dat based podcasting client / directory ecosystem
      • For now it’s going to be a vanilla podcast backed by hashbase
      • jimpick
        For sure.
      • With dat, live streaming of podcasts would even be possible.
      • tmcw
        are updates to dats atomic? like, if i seed a dat and someone else pins it, do they immediately start seeding it, or do they seed partially-updated versions as they sync?
      • jimpick
        i think the sync protocol is granular on a hypercore record level
      • so i'm guessing that peers will seed records as soon as they have them ... peers can set options to upload only, or download only
      • tmcw
        hmm, so is there any way of knowing if a certain dat has any up-to-date peers? the dat sync commands etc don't seem to offer any information about what version peers are seeding
      • jimpick
        here's two projects you might find interesting...
      • (i just updated that to the newer version of the api)
      • that particular visualization is showing what's been downloaded locally, but the remote bitfield is also available
      • https://github.com/karissa/hyperhealth returns a list of other peers, and what records they have
      • jhand was building a UI that used that https://github.com/joehand/hyperhealth-web
      • i'm working on some new visualizations ... not completed yet though https://twitter.com/uhhyeahbret/status/98847785...
      • tmcw
        interesting, okay
      • i'm thinking less of it as a visualization and more as a 'safe to close your laptop now' indicator
      • this whole 'keeping a dat alive' problem still seems very non-obvious in a bunch of ways - what happens if there are no more seeders, how do you know if there is any complete seeder, do you know who the seeders are
      • ideally i can either push to netlify, netlify spawns a dat process for a few minutes and closes it once my local thinkpad or aws instance has a copy
      • jimpick
        there is a 'remoteLength' value on each peer that is connected to ... if that is the same as the local length, you can assume that peer is synced up
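The `remoteLength` check described above can be sketched as a small helper. This is an illustrative sketch, not the dat CLI's own logic: it assumes hypercore-style objects where `feed.length` is the local record count and each entry in `feed.peers` exposes a `remoteLength` the remote claims to have.

```javascript
// Sketch: a 'safe to close your laptop now' check.
// Assumes hypercore-style shapes: feed.length (local record count),
// feed.peers (connected peers), peer.remoteLength (records the remote
// side reports having). These names are assumptions for illustration.

function isPeerSynced (feed, peer) {
  // the peer has at least as many records as we do locally
  return peer.remoteLength >= feed.length
}

function hasFullRemoteCopy (feed) {
  // true if any connected peer reports a full copy
  return feed.peers.some(function (peer) {
    return isPeerSynced(feed, peer)
  })
}
```

In practice you would re-run a check like this on upload events (or on a timer) and only shut down the local process once `hasFullRemoteCopy` returns true, ideally for a peer you control.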
      • tmcw
        or i do the same with my laptop, and know when there's another copy done so i can kill the dat process locally
      • jimpick
        the problem might be that you synced your latest updates and one peer says they have a full copy. but that might be on a laptop too, and they might also close their lid and leave the coffee shop. so it makes sense to sync to peers you control
      • or something like hashbase
      • tmcw
        yeah, so - is there any way to know which peers are which?
      • jimpick
        not super easily ... you can get ip addresses
      • tmcw
      • without opening up hashbase on http, which sort of defeats the purpose
      • jimpick
        i'm doing a lot of work where i sync over gateways (eg. through a websocket), so there's a lot of control that way
      • i see a lot of potential to build 'custom discovery' services which could be used for more controlled replication than just a bunch of random peers on the internet
      • eg. you might want to limit replication to just machines inside your corporate vpn, or only to your friends
      • i used to work for a startup that built systems used to store petabytes of medical data for hospitals ... the core of that wasn't much different than Dat ... i keep thinking of possible ways it could be made HIPAA compliant :-)
      • tmcw
        sure, yeah - okay, thanks - i think using remoteLength is a good first step, and i'll read through hyperhealth etc to probably build the thing that is able to wait until my dat is seeded-enough
      • any idea about the death of dat urls? like if there are no more seeders and my laptop is eaten by a bear so there's no 'original copy' of the dat, is that dat url toast?
      • that wouldn't be the end of the world, i think - because i'd update the ./.well-known/dat, but would be a bummer for any clients that relied on that dat url containing a thing
      • jimpick
        if you lose your private key to the bear, well, you'll never be able to update it
      • tmcw
        assuming i still have the secret key, though
      • jimpick
        and if nobody has a copy, that's pretty much it
      • tmcw
        i'll keep the secret key in 1password, i think
      • jimpick
        you probably need the private key, the public key, the last record number, and the offset into the hypercore
      • tmcw
        so keeping the thing alive means that you need the secret key + either (someone seeding | a local copy)?
      • jimpick
        i think if you have all those, you could write new records
      • tmcw
        okay, so how weird would it be to... check in the .dat folder into git :)
      • jimpick
        i think it would work, but git isn't great at binaries
      • hypercore works great with filesystems that support sparse files
      • tmcw
        indeed... luckily i'm using git-lfs already for this site so perhaps that can bail me out
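The git-lfs idea above could look something like the following `.gitattributes` fragment, which routes the binary hypercore files inside `.dat/` through git-lfs so git itself only stores small pointer files. The glob is an assumption for illustration; the exact file layout inside `.dat/` varies between dat versions.

```
# Sketch: track the binary dat metadata through git-lfs instead of git.
# The .dat/** pattern is an assumption about the folder layout.
.dat/** filter=lfs diff=lfs merge=lfs -text
```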
      • jimpick
        i run several hypercored servers, so i have multiple copies of most of my stuff. i too want to improve the observability of my peers, because there are still bugs, and problems occasionally pop up with peer-to-peer replication
      • tmcw
        yeah, i'm at the very least expecting my home comcast network to fail often
      • jimpick
        i'd like to see a nice "christmas tree monitoring" screen to give me confidence that all my personal stuff is widely distributed
      • tmcw
        need to buy some more thinkpads off of ebay and connect them to some more wifi networks :)
      • jimpick
        the example i think of is my wedding photos ... i keep the usb key that the photographer gave us, but i've got many, many replicas of that. i once lost 3 years of photos back around 2003 when i formatted my old laptop to give to my parents. oops
      • tmcw
        ooof, yeah, i've lost a few big backups too.
      • jimpick
        i can see this evolving so that friends automatically keep backups for their friends ... that's easy to do if the contents are encrypted
      • tmcw
        kyle was telling me about the tahoe system, which was sort of that on an ad-hoc basis, small groups of people cross-hosting each other's backups
      • jimpick
        yeah, i remember that, but i never used it. zooko is a smart dude
      • dat actually uses the hashing system he designed
      • tmcw
        i kind of love how dat has so many components of so many systems that came before it
      • jimpick
        the initial use case for dat is sharing research data ... but i understand that some more encryption features are in the pipeline
      • the 'discovery keys' keep the data encrypted from man-in-the-middle snooping, but the endpoints need to have the public keys in order to sync, so they can unpack the contents. of course, it's easy to just put an encrypted blob inside a dat
      • tmcw
        all right - well - thanks so much jim for letting me quiz you with some newbie questions! it's lateish in pdt time so i should probably head out
      • soyuka_ joined the channel
      • soyuka_
        @substack may you consider to transfer random-access-idb to the random-access-storage group on github? Thanks!
      • pfrazee
        mafintosh: yo got a question
      • video/audio playback in beaker sometimes restarts
      • I think beaker is handling the range headers correctly
      • I think what's happening is that the dat:// handler overpromises. It says "ok here's the range you asked for" but sometimes all that data isn't downloaded yet
      • so I think what I need to do is check how much of the file is downloaded and only serve that
      • mafintosh
        pfrazee: hmm
      • pfrazee
        which means, for a given byte range n->m, I need to find out how many bytes after n are available
      • mafintosh
        pfrazee: it would just hang and download the range missing
      • unless i'm misunderstanding you
      • pfrazee
        hmm I use createReadStream
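The fix pfrazee describes, serving only the bytes after `n` that are actually downloaded, can be sketched as a clamp step before answering the range request. This is an illustrative sketch, not beaker's implementation: it assumes you can derive a sorted list of downloaded `[from, to)` byte extents (in a real client you'd compute this from the hypercore bitfield and block offsets).

```javascript
// Sketch: clamp a requested byte range [start, end) to what is
// available locally, so the handler never promises bytes it can't
// serve. `extents` is an assumed representation: a sorted array of
// downloaded [from, to) byte ranges.

function clampRange (start, end, extents) {
  for (const [from, to] of extents) {
    if (start >= from && start < to) {
      // serve only the contiguous bytes available after `start`
      return [start, Math.min(end, to)]
    }
  }
  return null // the first requested byte isn't downloaded yet
}
```

A `null` result would mean waiting while replication fills the gap (the "just hang and download the missing range" behavior mafintosh describes) rather than responding with data that isn't there.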