(serapath) Last time, i started to share a folder and had it "uploading" until the connection dropped due to network interruption. I would like to continue the transfer.
my guess is that the first two haven't been archived
would be good to show that or remove them from the list
(if that is the case)
dat-gitter
(e-e-e) Hey, I just got an error `Abort trap: 6` when trying to create the metadata for a huge dat - then when I try `dat share` again it shares only what was already added and does not add any of the remaining files. Any thoughts? It's a huge dat - 10000 root folders - but with only touched files, so the content is quite small. Just trying to set up a test dat similar to aaaaaaaaa____:'s for playing around with modified histories. I am using
mafintosh
@e-e-e yea, there is a known issue with *lots* of folders in the root right now. fix coming soon
dat-gitter
(e-e-e) Thanks mafintosh: also just curious - given the way you are handling downloaded data via bitfields in hypercore, is there a simple way of getting the sum over a range? I am just about to manually iterate using `has(i)` to get a count, but I am sure there has to be a much more efficient way to get the total of downloaded chunks.
mafintosh
@e-e-e there is! i need to expose that as well
basically you can reduce most bitfield operations you do to O(log n) vs O(n)
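The O(log n) idea can be sketched with a standard trick: pair the bitfield with a Fenwick (binary indexed) tree of per-block counts, so both marking a block as downloaded and asking "how many blocks in a range are downloaded?" avoid the O(n) loop over `has(i)`. This is an illustrative sketch only - `CountingBitfield` is a hypothetical name, and hypercore's actual bitfield uses its own index structure, not necessarily a Fenwick tree.

```javascript
// Illustrative sketch (NOT hypercore's actual bitfield): a plain bitfield
// plus a Fenwick tree of counts, giving O(log n) updates and range sums.
class CountingBitfield {
  constructor (size) {
    this.size = size
    this.bits = new Uint8Array(Math.ceil(size / 8))
    this.tree = new Uint32Array(size + 1) // Fenwick tree, 1-indexed
  }

  has (i) {
    return (this.bits[i >> 3] & (1 << (i & 7))) !== 0
  }

  set (i) {
    if (this.has(i)) return
    this.bits[i >> 3] |= 1 << (i & 7)
    // walk up the Fenwick tree, bumping every node covering index i
    for (let j = i + 1; j <= this.size; j += j & -j) this.tree[j]++
  }

  // number of set bits in [0, end)
  count (end) {
    let sum = 0
    for (let j = end; j > 0; j -= j & -j) sum += this.tree[j]
    return sum
  }

  // number of set bits in [start, end)
  countRange (start, end) {
    return this.count(end) - this.count(start)
  }
}
```

With this, the "total downloaded chunks" e-e-e wants is just `bf.count(bf.size)` instead of a scan.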
dat-gitter
(e-e-e) if you let me know, I am happy to open a PR if it saves you work and is not too tricky.
(e-e-e) super
(e-e-e) I need to look more closely at your source code.
(e-e-e) mafintosh: is the trick to convert it to a buffer and operate on the binary data directly?
mafintosh
@e-e-e so in hypercore there is a bitfield prototype that does all the magic
@e-e-e see the .iterator method
dat-gitter
(e-e-e) I will have a look now.
mafintosh
that one allows you to super quickly find true/false entries in the bitfield
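The core idea behind that kind of fast scan can be illustrated with a hypothetical helper (not the real hypercore iterator): instead of testing one bit at a time, skip whole bytes that are already all-ones when hunting for the next missing block.

```javascript
// Hypothetical sketch of a "find next unset bit" scan over a Uint8Array
// bitfield: full 0xff bytes are skipped in one step rather than 8 bit tests.
// Returns the index of the next 0 bit at or after `from`, or -1 if none.
function nextUnset (bits, from, size) {
  let i = from
  // fast path: while byte-aligned, hop over bytes that are entirely set
  while ((i & 7) === 0 && bits[i >> 3] === 0xff && i < size) i += 8
  while (i < size) {
    if ((bits[i >> 3] & (1 << (i & 7))) === 0) return i
    i++
    if ((i & 7) === 0) {
      while (bits[i >> 3] === 0xff && i < size) i += 8
    }
  }
  return -1
}
```

The inverse ("find next set bit") is the same loop with the skip test inverted to 0x00 bytes.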
dat-gitter
(e-e-e) Oh awesome... thanks, I will have a play and see what I can get working. In terms of exposing its functionality, what are you thinking?
(e-e-e) for my case it would be useful to get the total metadata downloaded - which could then be used to resume the progress display if interrupted.
(e-e-e) mafintosh: I am not quite sure I completely get it - I need to read it more closely - but basically the iterator's `.next` returns the next 0 bit in the field, right? So it's a way of quickly finding which blocks are not yet downloaded. Is there an inverse of this, finding the next positive?
mafintosh
@e-e-e the index supports that, but unsure if my api does
@e-e-e what is it, at a high level, you wanna achieve?
dat-gitter
(e-e-e) Yeah, basically to tell how much of the metadata has been downloaded - so that when the download starts again it can proceed from a sane percentage.
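What e-e-e describes could look roughly like this (hypothetical names, assuming a feed object exposing `has(i)`): count the already-downloaded blocks once at startup and seed the progress display from that, rather than restarting at 0%. A production version would use the bitfield's fast count instead of this naive O(n) scan.

```javascript
// Hypothetical resume sketch: seed a progress bar from blocks already on disk.
// `feed` is assumed to expose has(i) -> boolean, as in the discussion above.
function countDownloaded (feed, total) {
  let n = 0
  for (let i = 0; i < total; i++) if (feed.has(i)) n++
  return n
}

function startingPercent (feed, total) {
  if (total === 0) return 0
  return Math.floor((countDownloaded(feed, total) / total) * 100)
}
```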