Who do we audit for?

While thinking about the purpose of auditing, we have naturally arrived at the question of the audit's audience. Blockchain applications, in contrast with their centralised counterparts, aspire to be trustless: ideally, no entity is responsible, legally or otherwise, for the correct functioning of the system. Rather, the system represents a neutral set of rules guiding the interactions of independent actors. Everybody is responsible for their own actions, and if the system fails, there's nobody to blame but the user who decided to engage with it with insufficient understanding. This difference has important implications for determining the audience of smart contract security reports. The operator of a centralised system, for example a platform like Facebook, bears legal responsibility for security failures and any resulting damage its users suffer. In that situation it's natural that security reports are written mainly for the operator's eyes and benefit; there's no expectation that Facebook users will make assessments about the inner workings of the platform, they simply trust Facebook. A user of a smart contract system, on the other hand, engages at their own risk and should therefore be interested in expert assessments of the platform, to make sure they engage safely and consciously.

That's why I think confidential audits of blockchain platforms go against the spirit of the blockchain movement. If we want to use impartial decentralised systems, we need impartial decentralised knowledge of those systems. Ideally, building this knowledge should be a collaborative community effort, but at the very least, experts should keep in mind that they are doing research on behalf of current and future users, not just platform authors, even if the authors happen to be the ones paying for their work.

How to determine what is a security issue

The most sensitive category of issues we come across is security issues, or vulnerabilities. In general, a bug that allows griefing, i.e. intentional damage, is called a vulnerability. This might seem straightforward enough, but it's not always easy to decide what is a vulnerability and what is merely an expression of trust by one user group in another. We'll try to make this decision a bit clearer in the following paragraphs.

If a vulnerability is known to a user and they nonetheless freely decide to engage with the system, it ceases to be a vulnerability and becomes a point of trust. What we mean by that is: if a user knows that some other user has the ability to cause them damage, but decides to take part in the system anyway, this can be interpreted as an expression of trust, and the aspect of the system that puts the user at risk can be called a point of trust.

While hypothetically we could identify multiple points of trust for a given pair of users, there's always one in particular we are interested in: the most severe one, i.e. the one that provides the easiest way for one user to damage another (the one with the highest griefing factor). To illustrate: say we have given our landlord a key to our house in case of emergency. He might also have a key to our backyard, but that's not very relevant from a security standpoint, at least until he returns the entrance key.
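The idea of picking the dominant point of trust can be sketched in a few lines of Python. The `griefing_factor` field and the numeric scale are illustrative assumptions, not part of any real scoring scheme:

```python
from dataclasses import dataclass

@dataclass
class TrustPoint:
    description: str
    griefing_factor: float  # hypothetical measure of how easily damage can be done

def current_point_of_trust(points):
    """For a given pair of users, the relevant point of trust is the most
    severe one: the one with the highest griefing factor."""
    return max(points, key=lambda p: p.griefing_factor)

points = [
    TrustPoint("landlord holds backyard key", griefing_factor=1.0),
    TrustPoint("landlord holds entrance key", griefing_factor=10.0),
]
print(current_point_of_trust(points).description)
# -> landlord holds entrance key
```

Until the entrance key is returned, the backyard key never determines the security picture, which is exactly why the lesser points of trust can usually be ignored.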

Now, for some aspect of a system to be called a vulnerability, it has to satisfy two conditions. First, it has to be unknown to the afflicted party at the moment of voluntary adoption of the system; second, it has to enable easier or more severe damage than the current point of trust.

This also means that what counts as a vulnerability changes with context: aspects of the code that were harmless in the past can become a liability if previous points of trust are removed when the system is updated to be more trustless, or when new trustless mechanisms are added. That's why, with every substantial change or extension, the whole system should be reassessed.
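The two conditions, and the way their verdict shifts as points of trust are removed, can be made concrete with a minimal Python sketch. Reducing severity to a single number and the function name itself are simplifying assumptions made purely for illustration:

```python
def is_vulnerability(severity, known_at_adoption, trust_baseline):
    """Classify an aspect of a system (illustrative sketch).

    An aspect is a vulnerability only if:
      1. it was unknown to the afflicted party when they voluntarily
         adopted the system, and
      2. it enables easier or more severe damage than the current
         point of trust (here reduced to a numeric baseline).
    Otherwise it is, at most, a point of trust.
    """
    return (not known_at_adoption) and severity > trust_baseline

# An unknown flaw of severity 5, judged in two different contexts:
print(is_vulnerability(5, False, trust_baseline=10))  # False: a worse, accepted point of trust dominates
print(is_vulnerability(5, False, trust_baseline=0))   # True: the old point of trust was removed
print(is_vulnerability(5, True, trust_baseline=0))    # False: known at adoption, so a point of trust
```

The middle call shows the reassessment problem: nothing about the flaw itself changed, only the baseline around it, yet its classification flipped from harmless to vulnerability.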