File sharing is the basis of the internet. Right now, if you want to download something, there are plenty of protocols you can take advantage of, like FTP, HTTP and BitTorrent.

I'm currently in the process of setting up NAS storage for backups and I'd like to host a "Public" shared folder, not just for sharing common files, but for hosting stuff I want people to download. The issue is legality. I have some albums that aren't technically out yet. I haven't stolen them; I've just compiled various leaks and snippets, and now and then I have friends ask if they can grab a song I played once.

If I was dumb, I'd chuck these in a public FTP folder. If I was smarter, I'd host them via BitTorrent, but both have the same issue: it's possible to trace back where the file came from. What if I wanted to upload anonymously? Well, I had the idea: "I reckon anonymous BitTorrent would be a fun project", but surely it's been done before.
Yeah, there are various projects, but what if we created a service to share files without ever actually storing the original file on someone's system? Upload a file to a network, split it between hosts in randomised chunks, store a map of the chunks somewhere, and share the map between the hosts. Spread the chunks and new maps among new hosts and voila! A bunch of useless-ass data spread across networks with a breadcrumb trail of how to put it back together.
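Very roughly, the upload step could look something like this. This is just a Python sketch with made-up names; the chunk sizes and map fields are placeholders I picked for illustration, not anything decided.

```python
import hashlib
import random

def split_into_chunks(path, min_size=64 * 1024, max_size=512 * 1024):
    """Split a file into randomly sized chunks and build a map describing
    how to put them back together. Sizes and field names are placeholders."""
    chunks = []
    chunk_map = []
    offset = 0
    with open(path, "rb") as f:
        while True:
            data = f.read(random.randint(min_size, max_size))
            if not data:
                break
            chunks.append(data)
            chunk_map.append({
                "offset": offset,
                "length": len(data),
                # a hash lets anyone verify a chunk without seeing the whole file
                "sha256": hashlib.sha256(data).hexdigest(),
            })
            offset += len(data)
    return chunks, chunk_map

# Each chunk would then be pushed to a different host, and the map
# (plus who got what) gossiped between hosts.
```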

This idea has its flaws. I imagine it'd be slow. Like, insanely slow... pinging between servers trying to see who has a more up-to-date map and where that last little chunk of data is. Similarly, I imagine there would be no way to tell if a chunk has just been completely destroyed. The advantage of this system is that it's not possible to know what data is stored where, and even if you did use the map, you couldn't possibly hold a server owner responsible for the useless jumble of data on their system (in theory).
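For what it's worth, the rebuild side might look something like this. Again just a sketch: `fetch_chunk` is a stand-in for whatever network call would actually ask the swarm for a chunk, and it assumes the map stores a hash per chunk like in the sketch above.

```python
import hashlib

def reassemble(chunk_map, fetch_chunk):
    """Rebuild a file from its map. `fetch_chunk` stands in for whatever
    network call asks the swarm for a chunk by hash; it returns the bytes
    or None if no host can produce them any more."""
    parts = []
    for entry in chunk_map:
        data = fetch_chunk(entry["sha256"])
        if data is None:
            # the "chunk completely destroyed" case: the map still describes
            # the piece, but nobody holds it, so the file is unrecoverable
            raise IOError(f"chunk at offset {entry['offset']} is gone")
        if hashlib.sha256(data).hexdigest() != entry["sha256"]:
            raise IOError("chunk failed verification; try another host")
        parts.append(data)
    return b"".join(parts)
```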

The process, in theory

[Diagram: Upload process]
[Diagram: Request process]

As shown in the request diagram, the server tries to find the most optimal route using each server's "known list" of file locations. The issue is: won't this just always default to finding the largest chunk? And doesn't that defeat the whole purpose of splitting the file up so it's unrecognisable? Yeah...
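To make the problem concrete, here's roughly what that naive routing would boil down to (hypothetical names, just to show why it gravitates to whoever holds the biggest piece):

```python
def pick_next_host(known_lists, needed_hashes):
    """Naive routing: pick whichever host's "known list" covers the most of
    what we still need. This is exactly the problem above: it always
    gravitates toward whoever holds the biggest piece of the file."""
    best_host, best_count = None, 0
    for host, held in known_lists.items():
        count = len(needed_hashes & set(held))
        if count > best_count:
            best_host, best_count = host, count
    return best_host
```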

Solution? Discard data, but keep the maps, whenever we're confident the file can still be retrieved.
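A sketch of what that discard rule might look like, assuming each host can count how many other hosts claim to hold a given chunk (the names and the replica threshold are made up):

```python
def maybe_discard(local_chunks, replica_counts, min_replicas=3):
    """Drop a locally held chunk only when the maps we've seen claim enough
    other hosts still hold a copy, so the file stays retrievable (in theory).
    `local_chunks` maps hash -> bytes, `replica_counts` maps hash -> how many
    other hosts claim to hold that chunk; all names are placeholders."""
    for chunk_hash in list(local_chunks):
        if replica_counts.get(chunk_hash, 0) >= min_replicas:
            del local_chunks[chunk_hash]  # keep the map entry, lose the bytes
```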

[Diagram: Request process (red X means file removed)]

In theory, this process can run endlessly until the chunks are as tiny as they can get, and similarly, a server that deleted a larger section of the file might, down the line, receive a smaller chunk.
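The re-splitting step could be as dumb as cutting a held chunk at random points before handing the pieces on to new hosts (a sketch, assuming chunks are just byte strings):

```python
import random

def reshard(chunk, parts=2):
    """Cut a held chunk at random points so it can be handed off to new hosts
    as even smaller pieces; run repeatedly, every chunk shrinks toward the
    "as tiny as it can get" end state."""
    if len(chunk) <= parts:
        return [chunk]
    cuts = sorted(random.sample(range(1, len(chunk)), parts - 1))
    bounds = [0] + cuts + [len(chunk)]
    return [chunk[a:b] for a, b in zip(bounds, bounds[1:])]
```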

This process requires a bunch of networking and would be slow. I'm not even sure how I'd split up a file effectively. Those are all problems for future me, I guess. I just needed to write this idea out, and now that I have, it seems even less plausible. We'll see.