Is radical transparency the best solution to expose injustice in this technocratic world, a world that is changing faster than law can keep up with?

That question became even more relevant to me, a privacy activist, when I found myself in the WikiLeaks archive, because I worked at Hacking Team nine years ago.

The break-in at the Italian malware-producing company was a unique event: it exposed, in detail, how these types of companies operate.

What followed was an effort by the international Internet community to review what the leak means. Privacy activists and hackers were able to dig into the enormous amount of information released and to find proof that Hacking Team was serving dictatorships.

This is a leak in the public interest, and I really feel that the personal and corporate damage is smaller than the improvement our society can gain from it. But to reach such an improvement, we have to focus on the bigger picture rather than getting distracted by the juicy details.

First, let me describe my work history and my personal involvement.

In 2006 I worked for Hacking Team. I was already a privacy activist, and my only duty at HT was consulting for private Italian companies, reviewing their network security (penetration testing). It had nothing to do with RCS, malware, trojans, offensive security or the like.

Please don’t mistake me for a whistleblower: I’m not going to speak about something I worked on, because I resigned from the company long before malware became the core business model of Hacking Team.

As a digital human rights defender with my background, I realized that the discussion around Hacking Team is missing context, and I want to give my opinion on two topics raised over the last week. The first is the fundamental implications, and the possible abuse of power, that come with the use of this technology in democratic societies. The second is the race to the bottom, in which the economic value of Internet security matters more than the actual safety and security of users and of the Internet's critical infrastructure.

This is enough to clarify my personal situation about HT, so let’s switch back to my educated opinion about what’s going on.

Public forces using secret weapons

Let’s start with the scariest implications of these technologies. This week, technical experts and others alike have identified and discussed three things: evidence planters, kill switches and backdoors.

If even one of these elements is present, nobody should trust malware as a tool of the democratic state. Many claims have been made about the presence of such hidden features, so let’s look in detail at what each of them means.

An evidence planter can be used to place fabricated evidence on a victim’s device, and to remove traces of the intrusion, so that the fabricated material can be used in court. This should be the nightmare of every state that operates within the rule of law.

A kill switch gives an agent the ability to shut down a setup made for a customer. While we do not know for sure that Hacking Team has and uses kill switches, the technology is mentioned in a number of documents as a crisis procedure. This means that if a customer violates their license (however that is defined), Hacking Team can interrupt the service. And if we do not know whether or not this is used, the malware customers (e.g. states) certainly do not know either. Imagine the consequences of a private company having more control over the software and data that a state uses than the state itself.
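To make the idea concrete, here is a minimal sketch of what a vendor-side kill switch might look like. Everything here is invented for illustration: the license identifiers, the revocation list and the function are assumptions, not Hacking Team's actual code.

```python
# Hypothetical sketch of a kill switch as a "crisis procedure": the vendor,
# not the customer, decides whether the service keeps running.
# All names and identifiers below are invented for illustration.

REVOKED_LICENSES = {"customer-acme-2015"}  # stand-in for a vendor-controlled list

def service_tick(license_id: str, running: bool) -> bool:
    """One cycle of the agent: keep running only while the license is valid."""
    if license_id in REVOKED_LICENSES:
        # Kill switch triggered: the vendor remotely ends the customer's setup.
        return False
    return running

print(service_tick("customer-acme-2015", True))  # False: remotely shut down
print(service_tick("customer-other", True))      # True: still operating
```

The point of the sketch is that the revocation list lives with the vendor, so the state customer has no say in, and possibly no knowledge of, when its own tooling stops working.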

A backdoor is a technical modification of the software: when a certain condition matches, the software behaves differently. This means that Hacking Team developers can potentially benefit from this knowledge in many cases. For example, if they are being monitored by their own malware, they can disable the data collection. Considering the command-and-control infrastructure that they replicated for every customer, it would be possible to disable, or to gain access to, the investigations of customers such as the DEA or Mexico’s secret services.

Backdoors can be built in two ways. The simplest is to match a condition and change the behavior. However, if someone analyzes the code, they can discover the backdoor, understand it and use it for their own purposes.
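A minimal sketch of this first, simple kind of backdoor might look like the following. The magic identifier and the collection routine are assumptions invented for illustration, not anything found in the leak.

```python
# Hypothetical sketch of the simplest kind of backdoor: a hard-coded
# condition that silently changes the program's behavior.
# All names and values are invented for illustration.

MAGIC_TARGET_ID = "ht-internal-7f3a"  # condition known only to the developer

def collect_evidence(target_id: str, keystrokes: list[str]) -> list[str]:
    """Pretend data-collection routine of a surveillance agent."""
    if target_id == MAGIC_TARGET_ID:
        # Backdoor: when the secret condition matches, collection is
        # silently disabled while the operator sees nothing unusual.
        return []
    return keystrokes

print(collect_evidence("suspect-42", ["h", "i"]))     # ['h', 'i'] -- normal
print(collect_evidence(MAGIC_TARGET_ID, ["h", "i"]))  # [] -- backdoored
```

Precisely because the trigger is an explicit comparison sitting in the code, anyone reviewing the source can find it, which is why this kind of backdoor does not survive a leak.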

If such a backdoor exists, it will be discovered, now that the code is leaked and under review by Internet users.

The second, and stealthier, way to create a backdoor is called a “bugdoor”. It is nearly invisible to anything short of a deep code audit and thorough software testing.

The developer must be willing to weaken the code, sneaking in a vulnerability that can be exploited only with deep knowledge of offensive techniques and of the malware itself.

Hacking Team developers have both.

Bugdoors can be spotted too, through a deep security review and a rigorous software-testing procedure.