Today’s update covers the work completed between September 2 and September 7, 2018. Before diving in: Oyster has decided to move from the current weekly update system to a bi-weekly update structure. Many of the projects the development team is currently implementing on the Oyster platform are large in scope and will take well over a single week to integrate into the system. Because these implementations take longer to complete overall, it will be much more efficient to supply updates to the community on a bi-weekly basis.

This update is a high-level overview of many changes which are in progress or which the team will begin working on soon.

Revision 2

One change that will significantly improve the upload experience is allowing users to resume uploads they have already started. When the user is redirected to the page with the upload in progress, the URL will contain their file handle, so they can bookmark the URL and come back to it later. The front-end work for this change is complete; however, for the front-end logic to work, Oyster must make a small back-end change. For each file, the broker nodes must now attach the metadata chunk to the tangle as soon as the user has paid for the session. The rest of the chunks will still be attached in their current order, but attaching the metadata chunk immediately allows the front-end to distinguish between an upload that is queued or in progress but not finished (the front-end finds the metadata on the tangle) and an upload that will never be processed (the metadata is not on the tangle). The latter can happen if the user did not pay for the session or if a back-end error occurred.
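The front-end check described above can be sketched as follows. The address derivation and the lookup are hypothetical stand-ins for Oyster's real tangle helpers, and a plain dict plays the role of the tangle:

```python
# Sketch of the resume check, assuming the metadata chunk's address can
# be derived from the file handle and queried on the tangle.

def metadata_address(handle):
    # Hypothetical derivation; the real scheme is defined by the protocol.
    return "meta:" + handle

def classify_upload(handle, tangle):
    """Distinguish a paid upload (queued, in progress, or finished) from
    one that will never be processed, purely by whether the metadata
    chunk has been attached to the tangle."""
    if metadata_address(handle) in tangle:
        return "in-progress"    # metadata attached: safe to resume
    return "not-processed"      # unpaid session or back-end error
```

If the check returns "in-progress", the front-end can resume the session from the bookmarked URL; otherwise it can report that the upload will not complete.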

Another significant change the team will be working on is Revision 2 (a.k.a. Rev2). Revision 2 changes Oyster’s data map generation and how we encrypt the treasure payloads, and it will make it easier to use the Oyster protocol for streaming audio and video. Currently, the treasure is located at a random position within each sector of the data map. This means that for every 1,000,000 chunks in a file, there is one chunk that MUST be removed before the front-end reassembles the file. The front-end checks every chunk to make sure it’s not the treasure chunk, which adds a lot of overhead to Oyster’s file reassembly. The development team will change this so that the treasure is always located at a specific spot in each sector of the data map, so the front-end no longer has to perform this costly check on every chunk. Web node treasure hunting will change only slightly: the web node will already know where the treasure is, but it will NOT know the key to decrypt the treasure, because the decryption key can still come from any random chunk in the data map.

Revision 2 also closes a possible attack vector that would have allowed malicious parties to attach garbage data to the chunk addresses on the tangle, eventually ruining the file when users try to download it. To prevent this attack, the front-end will sign each chunk using the file’s unique handle. If garbage data is attached at any address, it can be detected, because that data will not be properly signed. A further benefit is that the Oyster platform will no longer need to find the oldest transaction at any given address; instead it can grab any transaction and verify that it has been appropriately signed.
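The two Rev2 mechanics above can be sketched together. The fixed treasure offset, the HMAC-based signature, and the constants are illustrative assumptions, not Rev2’s actual scheme:

```python
import hashlib
import hmac

SECTOR_SIZE = 1_000_000   # chunks per sector (one treasure per sector)
TREASURE_OFFSET = 0       # assumed fixed position of the treasure in a sector

def strip_treasures(chunks):
    """With the treasure at a known offset, reassembly skips one index
    per sector instead of testing every single chunk as in Revision 1."""
    return [c for i, c in enumerate(chunks) if i % SECTOR_SIZE != TREASURE_OFFSET]

def sign_chunk(handle, data):
    # Stand-in signature: an HMAC keyed by the file handle.
    return hmac.new(handle.encode(), data, hashlib.sha256).hexdigest()

def first_valid(handle, transactions):
    """Grab any transaction at an address and keep the first properly
    signed one, so attacker-attached garbage is ignored and the
    oldest-transaction lookup is no longer needed."""
    for data, sig in transactions:
        if hmac.compare_digest(sign_chunk(handle, data), sig):
            return data
    return None
```

Note that knowing the treasure's position still reveals nothing about its decryption key, which continues to come from a random chunk in the data map.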

For further reading on Revision 2, our community can check out the following development documents from the Oyster team:

Storing Data in S3

Oyster will soon be experimenting with storing chunk data in S3 rather than on the broker node itself. Any given broker node can be configured to store chunk data either in its badger key-value database or on S3. Using S3 will take a great deal of load off the brokers and improve upload speed and efficiency, while brokers whose owners prefer badger can continue to use it. This change also dovetails nicely with a possibility the team has discussed in the past: making Oyster’s back-end S3 compliant. The S3 API has become a standard in the cloud storage business, and the Oyster team feels we can appeal to more commercial customers if it’s quick and easy for people to switch between their current cloud storage provider and Oyster.

The most likely implementation is that the development team will modify the broker node’s API to match the S3 API, and modify the front-end to use this new API, which will work for either the broker node or an S3 bucket. Depending on whether a particular broker is configured to store chunk data in its badger database or externally on S3, the broker node will provide the front-end with a URL to use for uploading the chunks (either an endpoint on itself, or a URL that points to an S3 bucket) and any credentials needed to access the bucket (or dummy credentials if the broker plans to use badger). The front-end then uploads the chunks using the URL provided by the broker; it does not need to know whether it is talking to the broker directly or to S3. If the data is uploaded to S3, the broker node will trigger AWS Lambdas which will grab the chunk data from S3 and begin attaching the chunks.
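As a rough sketch of the broker’s side of this handshake (the config keys, endpoint path, and response fields are assumptions for illustration, not the real broker API):

```python
def upload_target(config, session_id):
    """Return the upload URL and credentials the front-end should use,
    based on whether this broker stores chunks in badger or on S3.
    The front-end consumes the response identically in both cases."""
    if config.get("storage") == "s3":
        return {
            "url": f"https://{config['bucket']}.s3.amazonaws.com/{session_id}",
            "credentials": config["s3_credentials"],
        }
    # badger: the broker serves as its own upload endpoint
    return {
        "url": f"{config['base_url']}/upload-sessions/{session_id}",
        "credentials": {"dummy": True},
    }
```

Because the front-end only ever sees a URL and credentials, swapping a broker between badger and S3 requires no front-end changes.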

For further reading on Oyster’s S3 implementation, our community can check out the following development documents from the Oyster team: