karissa: only thing we havent addressed from https://github.com/datproject/discussions/commi... is the one where the users got confused when they overwrite a row key with a file key and then the file key didnt show up in export
karissa: me and mafintosh were talking about, for beta to simplify things, to just make datasets file only or row only
karissa: basically we would make it so all data in a dataset has to comply with the same schema
karissa: and the built in schemas would be json or file, but you could supply your own
karissa: and when we do the folder-per-dataset thing we can maybe e.g. populate the folder with the schema file to make it editable
karissa: the thinking is that if we just make 'typed datasets' then there will be less conceptual confusion between rows and files
karissa: there are just datasets, and datasets have things in them, but all things have to be the same type
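A minimal sketch of the 'typed dataset' idea described above, assuming a dataset declares its type up front and rejects writes of the wrong kind. All names here (createDataset, put, get) are made up for illustration, not the real dat API:

```javascript
// Hypothetical sketch: every entry written to a dataset must match the
// dataset's declared type ('json' rows or 'file' buffers).
function createDataset (type) {
  var entries = {}
  return {
    put: function (key, value) {
      var isFile = Buffer.isBuffer(value)
      if (type === 'file' && !isFile) throw new Error('this dataset only takes files')
      if (type === 'json' && isFile) throw new Error('this dataset only takes json rows')
      entries[key] = value
    },
    get: function (key) {
      return entries[key]
    }
  }
}
```

With this shape, the row-key-overwritten-by-file-key confusion from the linked discussion can't happen: the second write simply errors out.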
karissa
ogd: yeah, i'm into that.
ogd
karissa: cool. i think its the simplest thing to do for now after having tried to explain it to ppl in the workshop
mafintosh: but i think we should go forward with the 'typed dataset' thing
tilgovi joined the channel
karissa
ogd: how about putting files in a default dataset, 'blobs' maybe, like before? then at least people wouldn't have to think about it
ogd: part of the ux problem here seems to be that datasets were introduced to solve the problem of multiple table schemas
ogd: datasets are suited well for rows but it isn't super intuitive to me for files
ogd
karissa: so 'dat write' wouldnt need the -d flag most of the time
karissa
yeah
ogd
karissa: hmm yea that could work
kinda like we have a 'filesystem' dataset
and you can only put files in there
karissa
yeah
maybe later we can reintroduce attaching blobs to datasets
i still think its going to be necessary for some use cases, but its good to pare down
ogd
yea
therealk_ joined the channel
tilgovi joined the channel
therealk_ has quit
mafintosh
ogd karissa i can add that easily if we want that
ogd: then it would be a top level dat operation - like .pull/.push etc
ogd
mafintosh: can we just do it in dat cli without changing core?
mafintosh: just to test the concept for a while
tilgovi joined the channel
btw i made a new fresh 14.04 droplet for dathub.org
and im getting the eukaryota dat set up w/ taco-nginx so it can be used in the get dat workshop
but it will be easy to make new dats now and host them there, i will also set up npm
mafintosh: what would you recommend now for a cli process monitor setup? didnt you write one?
mafintosh
ogd: yea. Its called respawn
ogd
mafintosh: wheres the cli?
mafintosh
ogd: substack turned it into a cli thing called psy
ogd: i can add a simple cli for respawn as well if you need it
ogd
mafintosh: ill try out psy
mafintosh
ogd karissa just setting the dataset option to 'blobs' in write/read should do the trick
ogd
mafintosh: yea
mafintosh: thats what i was thinking
mafintosh
I can do that
ogd: i guess the dataset should be called 'files'?
ogd
mafintosh: ya
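The default-dataset behavior agreed on above could look roughly like this in the CLI: when the -d/--dataset flag is omitted, file writes fall back to a built-in 'files' dataset. The flag name and fallback are assumptions drawn from the conversation, not confirmed dat CLI internals:

```javascript
// Hypothetical sketch: resolve which dataset a write goes to.
// Falls back to the built-in 'files' dataset when -d is not given.
function resolveDataset (flags) {
  if (flags.dataset) return flags.dataset
  return 'files' // default dataset for plain file writes
}
```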
finnp: a guy called thefinn93 is in #ipfs and has the domain finn.io, you should ask for a subdomain :)
mafintosh
the.finn.io
das.finn.io
ogd
p.finn.io
uhhyeahbret
das.finn.io++
ogd
also i know people who work at finn.no in norway
therealkoopa joined the channel
uhhyeahbret
ogd: are there any good taco use examples?
err projects using tacos that would serve as a good example
ogd
uhhyeahbret: i am using taco-nginx only right now with psy: psy start -n tester --cwd=/root/tester -l /root/logs/tester.log -- taco-nginx --name tester node index.js
uhhyeahbret
cool ill play around with it
mafintosh
ogd: when you export the 'files' dataset what should happen?
ogd: right now the export is empty since everything is a file
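A rough illustration of why the export comes back empty: if export only emits JSON rows and skips binary file entries, a dataset that holds nothing but files exports nothing. The entry shapes here are hypothetical, for illustration only:

```javascript
// Hypothetical sketch: export that keeps only JSON rows, so a
// files-only dataset yields an empty export.
function exportRows (entries) {
  return entries
    .filter(function (e) { return !Buffer.isBuffer(e.value) })
    .map(function (e) { return e.value })
}
```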