Working around early issues

Running a file sharing website on Sia is not as easy as it might sound. Sia required a lot of babysitting and monitoring to work reliably. On top of that, the technology is evolving very rapidly, which has forced me to rethink and rebuild the integration a number of times. The first version I integrated with Pixeldrain could not download files over HTTP, for example, so I had to download files to a temporary location on disk before they could be served to the user. This introduced heavy latency which would often stall downloads for multiple minutes. In that time a user will quickly give up on the download and try some other site if one is available. To mitigate this I had to implement caching. The way this works is that the server waits until storage space almost runs out, then gets a list of files which have not been downloaded for a long time and uploads those to Sia. This keeps the load on Sia very light, because files very rarely come back into demand after their initial popularity wears off, though it does happen.

In the beginning I integrated with Sia by simply calling the upload and download APIs from the Pixeldrain server. Later I started using Sia's client library, which I found out about through a Space Duck blog post. The client library is a joy to work with: it plugs easily into any Go program and saves dozens of lines of code for every request type you need.

Now that Sia 1.4.1 is out, things have gotten a lot easier. I have to worry less about losing data because the repair process is faster. With renter backups and seed-based file recovery there is also a smaller chance of hardware failure causing data loss. As of this writing the Sia node for Pixeldrain has 17 TB on storage contracts, and this number is still growing day by day.

The Sia integration now plugs into a system I call pixelstore, which takes a storage driver in the form of a Go interface and uses it to provide storage space to the Pixeldrain server cluster. Pixelstore is what allows Pixeldrain to scale across multiple servers. It moves and sorts files between pixelstore servers in a mesh network based on popularity and geographic proximity. When a server is full and a user tries to upload a file, pixelstore moves some other files to a different pixelstore server to free up space. When a user requests a file which is not on the same server, pixelstore fetches it from a different pixelstore server and delivers it to the user in real time, as if it had always been there. Currently two pixelstore servers are in Portland, Oregon, and one is in Amsterdam. I'm in the process of setting up more servers around the world so everyone will be able to enjoy Pixeldrain at the highest possible speed.

Future plans

I’m not completely sure yet what direction I want to take Pixeldrain in. Cloud storage is attractive, but it is also a very competitive market. I think Pixeldrain also has opportunities as a CDN, a paid file distribution platform, a Dropbox-like consumer storage platform, or a data archival service. Maybe I’ll pick a combination of these options and build something new out of it. I’m just going to explore the market as I go and build something for a few niches which I find interesting.

First of all I want to keep focusing on file sharing: making it faster, easier and safer for everyone. Shared files will get better access-control rules, better metrics (like historical bandwidth and view graphs), insight into traffic sources and an option to tip the uploader of a file in a number of cryptocurrencies (the first one will definitely be Sia).

If that goes well I want to get a little consumer cloud storage service going, but I’ll be focusing on the tech-savvy people first because that’s an easy market for me to relate to. This means tools will be largely API- and CLI-based until I find the productivity to develop an actual graphical sync client.

Then I want to expand into content distribution. I think this will work out because by the time I reach that point, Pixeldrain’s storage network should be quite globally distributed. Because I can easily run a CDN on exactly the same tech stack as the sharing and storage systems, this should not cost me anything extra. Basically, the CDN plan will just be Pixeldrain cloud storage, except you pay for bandwidth instead of storage space. You cannot use Pixeldrain as a CDN right now because it blocks downloads that are not requested from the file’s download page. I had to implement this because the site was being abused by bots. The CDN subscription plan will work around this restriction.

These are all quite long-term plans. As this is still a one-man project, it might take a long time before these things are finished. But I’m convinced anything built on Sia has a good future ahead of it. You can expect Pixeldrain to keep growing at a steady pace over the next few years. No matter how big Pixeldrain becomes, I will always put the focus on privacy and control for the users. Because someone has to do it.