Signing up for a social network account has never been easier; in a few seconds, and with a few details, you will own a fully fledged account. Have you ever thought about the details you fill in during that process, and what happens to them?

The easier the signing-up process, the wider the adoption. In many cases all you need is your birth date, maybe your mobile number, your full name, and your email; the rest of the details are easily obtained from your account's activity, such as where you log in from, where you signed up from, and even the type of data you upload. From your end, you have no control over that data; once you click the button that sends it to the service you are using, that data is out of your hands. That could be worrying, and you could ask all sorts of questions. What is the service doing with your data? How is your data being used, and what is it used for? What if someone hacks into that service or social network? What if someone steals your identity?

When you go online and search for techniques and tools to protect your information and the data you submit, you will find that most of the proposed solutions, like choosing a strong password, are stopgaps. They don't really solve the bigger problem: protecting your data once you no longer own it or have full access to it. Covalent.ai decided to do things differently and tackle the actual problem at its root. Instead of controlling your actions, they control the data itself. You can upload whatever data you like and make that data smart: they let you specify and control who can access the data, when they can access it, and for how long it remains available. In other words, you control both the authorization and the availability of your data.





Covalent.ai achieves that through the use of Smart Policies. Smart policies are pieces of information that accompany the data and specify how that data should be accessed and by whom.
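To make the idea concrete, here is a minimal sketch of what policy metadata traveling with a piece of data might look like. The field names (`owner`, `allowed`, `expires`) and the check logic are my own illustration, not Covalent.ai's actual schema or Centrifuge syntax:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical smart-policy sketch: metadata that accompanies the data and
# declares who may access it and for how long. Field names are illustrative.
def make_policy(owner, allowed_users, valid_hours):
    now = datetime.now(timezone.utc)
    return {
        "owner": owner,
        "allowed": set(allowed_users),
        "not_before": now,
        "expires": now + timedelta(hours=valid_hours),
    }

def may_access(policy, user, at=None):
    # Access is granted only to listed users, and only within the time window.
    at = at or datetime.now(timezone.utc)
    return user in policy["allowed"] and policy["not_before"] <= at < policy["expires"]

policy = make_policy("alice", ["bob"], valid_hours=24)
print(may_access(policy, "bob"))      # True while the policy is valid
print(may_access(policy, "mallory"))  # False: not on the allowed list
```

The key point is that the policy is data about the data: it rides along with the payload, so whoever enforces it can answer "who, when, and for how long" without consulting the owner.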

To ensure the privacy of processing, and that the smart policies are executed exactly as specified, Covalent.ai opted for a Trusted Execution Environment (TEE). TEEs are known for their speed and light usage of network resources. A TEE provides an isolated execution environment with security at both the hardware and software levels, so participating nodes in the network cannot access or alter the data within it.

With Covalent.ai, developers can build their own dApps, and to make that easier Covalent.ai created a testnet that can process data offline. The testnet was built on Electron. As mentioned earlier, hardware-level privacy is ensured through the use of the TEE, but that is not the only measure Covalent.ai opted for: they also considered the case where a TEE node is compromised. As an added safeguard, Covalent.ai scales the sensitive data before processing; the data is transformed out of its original format, so even if it is compromised it cannot be understood.
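The exact transform Covalent.ai applies is not public; as one plausible example of "scaling" numeric data out of its original format, here is a simple min-max normalization, where a compromised node would see values in [0, 1] rather than the originals:

```python
# Illustrative sketch only: min-max scaling as one example of transforming
# sensitive numeric data before processing. Not Covalent.ai's actual scheme.
def min_max_scale(values):
    lo, hi = min(values), max(values)
    span = hi - lo or 1  # avoid division by zero for constant data
    return [(v - lo) / span for v in values]

salaries = [42000, 58000, 73500, 91000]
print(min_max_scale(salaries))  # smallest maps to 0.0, largest to 1.0
```

Note that scaling like this is an obfuscation step layered on top of the TEE, not a replacement for encryption, which comes next in the pipeline.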

When the scaling finishes, the encoding process commences, converting the data, along with the set policy, into Covalent.ai's smart contract format. When encoding finishes, the data is encrypted using the NaCl crypto library. It does not end with encryption; there is another important step: breaking the encrypted sensitive information into 100 pieces. Those 100 pieces are logged in the data catalog blockchain and sent to 100 routing nodes.
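The splitting step can be sketched with the standard library alone. In this sketch a random blob stands in for the NaCl ciphertext (real encryption would use something like NaCl's secretbox), and the piece-sizing logic is my own illustration, not Covalent.ai's actual chunking scheme:

```python
import os

N_PIECES = 100  # one piece per routing node, per the Covalent.ai design

def split_into_pieces(blob, n=N_PIECES):
    # Divide the encrypted blob into n nearly equal pieces; the first
    # `extra` pieces carry one additional byte when len(blob) % n != 0.
    base, extra = divmod(len(blob), n)
    pieces, offset = [], 0
    for i in range(n):
        size = base + (1 if i < extra else 0)
        pieces.append(blob[offset:offset + size])
        offset += size
    return pieces

ciphertext = os.urandom(12345)          # stand-in for NaCl-encrypted output
pieces = split_into_pieces(ciphertext)
assert len(pieces) == 100
assert b"".join(pieces) == ciphertext   # pieces reassemble losslessly
```

Splitting this way means no single routing node ever holds the whole ciphertext, which is presumably the point of distributing the 100 pieces across 100 nodes.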

For the testnet, users can upload the encrypted data to Dropbox, S3, or another storage solution and, through the dApp interface, link to that data.

There are two kinds of TEE nodes in the Covalent.ai network: the Routing nodes I mentioned earlier, deployed 100 per pool to ensure availability, and the Compute nodes, which are responsible for processing. A question we can pose here is: what if a user submits malicious code? What then? You can't just trust every piece of code uploaded by the user!

Covalent.ai has taken that into consideration. They protect against it by creating a sandbox, CovaVM, together with a running monitoring system that ensures the policy is executed correctly. CovaVM is based on OpenJDK; the reason for opting for the JVM at this stage is to allow wider adoption. CovaVM is the environment that executes the code. The code, or smart policy, can be written in a language called Centrifuge, a choice driven by the lack of a proper general-purpose data policy specification language. Centrifuge utilizes the functionality of CovaVM, defines the behavior of the runtime monitors, and compiles the code to bytecode that CovaVM can process.

Covalent.ai is trying to take the lead in data privacy with this unique approach, and they have built an architecture to support it. They are currently working on a public version, and they are constantly available and responsive to inquiries in their channels. If you are interested in knowing more, I have added some handy resources you could use.





Resources:

Whitepaper: https://docsend.com/view/dvvb75n

Website: http://www.covalent.ai/



