Just learned about pic related

Why aren't we all using this shit? It looks like the best thing since bittorrent

because the average fa/g/got wouldn't understand its significance.

why don't you educate us then

or you could do so yourself: ipfs.io/

I've tried it. It looks cool, but until more people are actually using it, there's no point

It's a distributed web server.
You put content there and it's replicated to other peers.
It's impossible to censor or remove content after it's published. A pedo's wet dream
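Roughly: you add a file, it gets a content hash, and anyone can fetch it by that hash. Going from memory, so double-check against the docs, but it's something like:

ipfs add some_file.png    # made-up filename; prints "added <hash> some_file.png"
ipfs cat <hash> > copy.png    # any peer can fetch it by that hash

Peers that fetch it cache and re-serve it, which is where the replication comes from.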

it's never going to become significant if people don't use it

>A pedo's wet dream
It doesn't protect your privacy, so it's pretty much the same as using tor over bittorrent -- a terrible idea.

it needs some kind of simple gui if it's going to get normal people to use it.

it seems the only one it has now is a javascript web gui.

I think they can get away with a web-only gui, but they have to make it incredibly simple, easy to access, and have it start up automatically.

>it needs a gui
>it only has a gui
what are you smoking

You're thinking of bittorrent over tor, not tor over bittorrent.
And no, it's nothing like it, except for being a terrible idea.
You can use it over tor anyway

>they have to make it incredibly simple and easy to access/starts up automatically
possibly, but I really hate the idea of having to use my browser for it.

an actual fleshed-out desktop gui, preferably written in C/C++ or C#, not a front end for your web browser.

NGL, browser-based apps are the future.

At my job we have to create data visualization and interpolation products, and we're moving completely to web-based clients because supporting different hardware platforms is just too resource-consuming for us.

It's much easier to just make sure the product works on a few browsers and spend time on actual science rather than debugging dogshit hw problems.

Making simple applications like this work through a web server is the best way to go about it. Why write a ton of different clients and maintain Win/Winx64/MacOS/Linux/Linuxx64 builds when apache just works everywhere? It's not like anything IPFS does stresses the hardware so hard that it needs to be written in C or some shit.

In theory, this allows you to just spin up a VM if you're a power user and run the gateway on your LAN, although last I tried IPFS this wasn't working for some reason: the web UI only worked if the daemon was running on the local machine.
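If anyone else hits that: I believe the daemon binds its API and gateway to 127.0.0.1 by default, which would explain it. Going from memory on the config keys (run ipfs config show to verify), something like this should rebind them to the LAN -- just be aware that exposing the API port means anyone on your LAN can control your node:

ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080    # gateway reachable from the LAN
ipfs config Addresses.API /ip4/0.0.0.0/tcp/5001    # API + web UI reachable from the LAN
ipfs daemon    # restart the daemon for it to take effect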

Do they have their own API, or can you run server-side stuff like PHP and Perl?

I did get into ZeroNet at one time, but nothing is going to go mainstream until they figure out how to get a p2p LAMP server.

Once we get a p2p LAMP server, then stuff will really kick off.

i installed ipfs

what now
how do i access the content
is there some kind of indexer? how do i find content?

ipfs.io/docs/getting-started/
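
short version, if I remember the commands right (check that link if any of this has changed):

ipfs init    # one-time setup, creates your node's config and keys
ipfs daemon    # start the node and connect to the swarm
ipfs cat <hash> > file.ext    # fetch content by its hash
ipfs add myfile    # made-up filename; publishes it and prints its hash

there's no built-in indexer -- you find content because someone gives you a hash, like the one floating around this thread.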

>It's impossible to censor or remove content after it's published. A pedo's wet dream
So what happens when it runs out of space?

doesn't exactly answer all my questions.
(which I'd have figured was the point of this thread)

Someone posted this
ipfs.io/ipfs/QmZuRNbxdNBNVEj4BeP2jn1mqFu33FgxYZ5oUfjQLDiGnT

It's a directory listing of some files. I pinned this hash. Does this mean I'm one of the peers responsible for hosting this content? To be clear, none of this is actually hosted on "ipfs.io/ipfs/", correct? Does this mean I could make a simple file hosting site where all the content is on ipfs, and users can then use any browser to download my content without needing ipfs?

>p2p LAMP

Have you been huffing glue?

that's a proxy link -- the ipfs.io gateway downloads the content from the ipfs network and forwards it over http

>Does this mean I could make a simple file hosting site where all the content is on ipfs, and users can then use any browser to download my content without needing ipfs?
yes
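the same hash should work through any public gateway or your own node, e.g. (assuming the default local gateway port of 8080):

ipfs.io/ipfs/QmZuRNbxdNBNVEj4BeP2jn1mqFu33FgxYZ5oUfjQLDiGnT
localhost:8080/ipfs/QmZuRNbxdNBNVEj4BeP2jn1mqFu33FgxYZ5oUfjQLDiGnT

the second one only resolves while your daemon is running, but then it's served by your own node instead of ipfs.io.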

thanks.
How does pinning work then? The file isn't stored locally, yet if I pin it, peers can still download the file from me..?

hm, just read that it's stored in /datastore/

however, after I pinned a large file, I didn't see this folder.

so I checked the daemon, and sure enough, an error occurred.

" ERROR flatfs: too many open files, retrying in %dms0"

So, I tried doing ipfs daemon --manage-fdlimit and pinning the hash again.

It pinned, but I still don't see this datastore folder. how can I check it's been pinned?

>no responses.

and this is why it'll be ages before it's relevant.

pinning a file just marks it as "to be kept cached permanently"
downloading/accessing a file automatically caches and seeds it, for as long as the data store hasn't hit its size limit
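
to answer the earlier question: you check pins through the CLI, not by looking for a folder. going from memory, so verify with ipfs pin --help, but something like:

ipfs pin add <hash>    # pin it (fetches any missing blocks too)
ipfs pin ls --type=recursive    # list pinned roots; your hash should show up here
ipfs repo stat    # rough view of what the local datastore holds

(the blocks themselves live under ~/.ipfs by default, not in a ./datastore folder next to you)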

Because 1. the userbase-attracting components (web extension for URI handling, separate GUI app) are not yet complete, and 2. it really isn't ready for mass adoption yet because they haven't completed the scalability & reliability tests, as well as several other critical components.

You can see their progress here in a nicely formatted checkbox list:
github.com/ipfs/ipfs/blob/master/ROADMAP.md

There were a shitload of threads about this a couple of months back, lots of shit shared; look in the archives

i'm waiting for them to create a datastore backend that can use regular files as they are, rather than making a second special copy of them