mafintosh: what's going on with hyperdb now that it's released, are y'all at datproject building tools using it?
soyuka_ joined the channel
soyuka has quit
smaddock joined the channel
rho joined the channel
rho has quit
rho joined the channel
tyrannus joined the channel
smaddock has quit
trqx has quit
ekrion has quit
trqx joined the channel
ekrion joined the channel
tyrannus
So, after some experiences I think I have distilled what is needed to have dat working on-campus. At this stage, I believe there is a need for some centralized support
I plan on implementing the three components there (an offline database a bit like hashbase.io or `dat sync`, but with LDAP support), the offsite server (easiest), and the HTTP bridge
I also still have some doubts about how to support users that produce A LOT of dats.
dat-gitter
(martinheidegger) tyrannus: I would be very interested in a similar solution.
(martinheidegger) I found dat-daemon to be an interesting project to handle storage of dats
tyrannus
martinheidegger: could you point me to the url of dat-daemon? DuckDuckGo is not being very helpful with this
tyrannus: thanks so much, this is amazingly helpful!
tyrannus
martinheidegger, millette: I suspect that it will help a lot. I will just need to wrap it for both of my missing components (the LDAP based server and the outside server)
millette
tyrannus, dat-cli also supports registries (instead of hashbase-like) - might be simpler to hook that up.
(martinheidegger) millette: actually, datBase uses another project under the hood for the authentication.
millette
martinheidegger, but datBase is used with dat-cli like: "dat register <registry>", correct?
pfrazee, does hashbase support the same dat-cli (registry) commands?
pfrazee
millette: no but I've been planning on making that happen at some point
millette: I've been also planning on having beaker add (effectively) support for registries so you can autoupload to a cloud peer like hashbase
we'll need to spend some time making a spec, I haven't looked at how those APIs work at all
millette
pfrazee, I could find some time for that
dat-gitter
(RangerMauve) So, when using dat, the DHT is only used to find peers that are interested in a specific dat, not for files that can be hosted in dats, right?
(RangerMauve) pfrazee: Thanks for the link. So the tradeoff is more bandwidth spent when downloading or forking but with added security. That's awesome! I think I get what all the differences between Dat and IPFS are now.
pfrazee
@RangerMauve yeah I figure IPFS's swarm does individual blob lookup, yeah?
dat-gitter
(RangerMauve) Yeah
(RangerMauve) Well, in IPFS large files are broken into blocks and then turned into Directed Acyclic Graphs, then those blocks and graph nodes are hashed and each piece is advertised individually
(RangerMauve) That way if different files have the same blocks, they can be reused, and algorithms can be made for turning files into DAGs in such a way that changes will involve only updating the part of the DAG that changed.
(RangerMauve) In a theoretical future, if there are IPFS websites the way we have Dat websites in Beaker, any files that are the same across any websites will automatically be de-duplicated.
(RangerMauve) So forking websites would be super cheap, and sharing dependencies will be super cheap, which I guess isn't the case with Dat
or point out more (subjective) strengths of IPFS to be less biased
trqx has quit
trqx joined the channel
wking joined the channel
dat-gitter
(RangerMauve) bnewbold_: Should I post my comments here or on StackOverflow?
(RangerMauve) Jeeze, really not a fan of StackOverflow comments not allowing for multilines
(RangerMauve) I think a good summary is: Dat is more private out of the box, IPFS is more efficient for changes and deduplication. Dat has a very focused use-case which it supports well, IPFS is more general and is being used for different things and thus has less specific tools.
(RangerMauve) So far the killer features that IPFS has which Dat doesn't is pubsub and the ability to run an instance in a vanilla browser.
(RangerMauve) Though Pubsub isn't as big a deal if you know the Dat URLs for your peers already.
(RangerMauve) And Beaker Browser brings something to the table that IPFS is pretty far from achieving. It's a real implementation of the distributed web complete with authoring tools _now_. Whereas IPFS is building all this low level infrastructure but doesn't have much for the casual user except file sharing.
(RangerMauve) My ideal gateway would be wrappable in the DatArchive API from beaker to make it easy to port Beaker-based apps over to use the gateway instead
jimpick
the gateway is the glitch app ... it's just a node.js server with a websocket
if you clone the glitch app, you've cloned the gateway
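[editor's note: a hypothetical sketch of the URL-routing step such a gateway needs — this is not the glitch app's actual code, and the `/<dat-key>/<path>` scheme and function name are assumptions. It only shows how a request URL could be split into an archive key and a path within the archive; the real gateway also runs dat-node and a websocket.]

```javascript
// Parse a gateway request URL of the assumed form
//   /<64-hex-char-dat-key>/<path-within-archive>
// Returns { key, path } on a match, or null for anything else.
function parseGatewayUrl(url) {
  const match = /^\/([0-9a-f]{64})(\/.*)?$/.exec(url);
  if (!match) return null;
  return { key: match[1], path: match[2] || '/' };
}

const key = 'a'.repeat(64);
console.log(parseGatewayUrl(`/${key}/index.html`)); // logs the parsed { key, path }
console.log(parseGatewayUrl('/not-a-key')); // null
```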
dat-gitter
(RangerMauve) jimpick: What do you think about adding the gateway API to dat-desktop, so that anybody installing dat-desktop can have the gateway running locally for web applications to use? https://github.com/dat-land/dat-desktop/issues/518
jimpick
that would be cool. i was thinking it might be possible to do a BYOG = "bring your own gateway" ... maybe like how remotestorage works https://myfavoritedrinks.remotestorage.io/
if you are on a browser like beaker that has that stuff built in, then it uses that
dat-gitter
(RangerMauve) Yeah, if there's a standard for people to follow then it'll be easier to add extensions and stuff to bridge to non-dat browsers.
(nipponese) Hi, can someone help me out getting peers to connect? I am running 13.10.0. When I run `dat doctor` I get `Waiting for incoming connections... (local port: 56156)`
(nipponese) but peers is always zero
vmx joined the channel
(RangerMauve) @nipponese I think Dat doctor expects you to paste a command on one of your peers, are you doing that?
millette: maybe the underlying tcp library can be used in dat someday =)
son0p joined the channel
dat-gitter
(joehand) @nipponese I think I connected to you
(nipponese) thanks
fleeky_ joined the channel
SvenDowideit
mmm, I really would like to use dat for auto-distributing the large datasets I'm making distributed container systems for, but I'm struggling to figure out what the right parts are
every time I look, there's yet another repo/project/server, none of which feel finished, or documented enough to work out which to use in what circumstance
is there someone that is able to help reduce the experimentation sprawl into something that could be used in production?