Over the past few years, waves of shocking privacy misuses, data breaches, and abuses have crashed over the world's biggest companies and billions of their users. At the same time, many countries have bolstered their data protection rules. Europe set the tone in 2016 with the General Data Protection Regulation, which introduced strong guarantees of transparency, security, and privacy. Just last month, Californians gained new privacy guarantees, like the right to request the deletion of collected data, and other states are set to follow.

The response from India, the world's largest democracy, has been curious, and it introduces potential dangers. An emerging engineering powerhouse, India affects us all, and its cybersecurity and data protection maneuvers deserve our careful attention. On the surface, India's proposed data protection bill of 2019 appears to emulate new global standards, such as the right to be forgotten. Other requirements, like having to store sensitive data on systems located within the subcontinent, may constrain certain business practices and are seen by some as more controversial.

WIRED OPINION: Dr. Lukasz Olejnik (@lukOlejnik) is an independent cybersecurity and privacy researcher and consultant.

One feature of the bill that has received less scrutiny, but is perhaps most alarming of all, is how it would criminalize illegitimate re-identification of user data. While seemingly prudent, this move may soon put our connected world at greater risk.

What is re-identification? When a company processes user data, special algorithms decouple sensitive information, like location traces and medical records, from identifying details, like email addresses and passport numbers. This is called de-identification. The process can be reversed, so organizations can recover the link between users' identities and their data when needed. Such controlled re-identification by legitimate parties happens routinely and is perfectly appropriate, so long as the technical design is safe and sound.
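In practice, de-identification often takes the form of pseudonymization: direct identifiers are swapped for random tokens, and the token-to-identity mapping is held separately, under access control, so the link can be restored by an authorized party. A minimal sketch in Python (the record fields and helper names here are illustrative, not drawn from any real system):

```python
import secrets

# Illustrative sketch, not a production design: de-identification as
# pseudonymization. Direct identifiers (here, email addresses) are
# replaced with random tokens; the token-to-email lookup table is kept
# separately so a legitimate party can reverse the process when needed.

def de_identify(records):
    """Return (pseudonymized records, token -> email lookup table)."""
    lookup = {}
    pseudonymized = []
    for record in records:
        token = secrets.token_hex(8)      # random, meaningless pseudonym
        lookup[token] = record["email"]   # stored apart, under access control
        pseudonymized.append({"id": token, "location": record["location"]})
    return pseudonymized, lookup

def re_identify(pseudonymized, lookup):
    """Controlled re-identification: restore identities via the lookup table."""
    return [{"email": lookup[r["id"]], "location": r["location"]}
            for r in pseudonymized]

records = [{"email": "alice@example.com", "location": "52.23,21.01"}]
pseudo, table = de_identify(records)
assert "email" not in pseudo[0]                # shared data carries no identifier
assert re_identify(pseudo, table) == records   # round trip works with the table
```

The security of the whole scheme rests on keeping that lookup table out of an attacker's reach; anyone who obtains it, or who can rebuild the link another way, can re-identify the data.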

On the other hand, if a malicious attacker were to get ahold of a de-identified database and re-identify the data, the cybercriminals would gain extremely valuable loot. As continued data breaches, leaks, and cyber espionage show, our world is full of potential adversaries seeking to exploit weaknesses in information systems.

India, perhaps in direct response to such threats, intends to ban re-identification without consent (so-called illegitimate re-identification) and to subject it to financial penalties or jail time. While prohibiting potentially malicious actions might sound compelling, our technological reality is much more complicated.

Researchers have repeatedly demonstrated the risks of re-identification enabled by careless design. Take a recent prominent case in Australia. In 2018, Victoria's public transport authority shared usage patterns from its contactless commuter cards with participants in a data science competition; the data was effectively made publicly accessible. The following year, a group of scientists discovered that flawed data protection measures allowed anyone to link the data back to individual commuters.
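The underlying weakness is a linkage attack: even with card numbers removed, a handful of trips an attacker happens to know about (a stop plus a timestamp) can single out one traveler's token, and with it their entire travel history. A hypothetical sketch with invented data, loosely modeled on the Victorian case:

```python
# Hypothetical illustration of a linkage attack; all data values invented.
# Card numbers were stripped, yet trips remain grouped per pseudonymous
# token, so auxiliary knowledge can isolate a single commuter.

trips = [  # de-identified records: (card_token, stop, timestamp)
    ("a1", "Flinders St", "2018-07-02 08:03"),
    ("a1", "Parliament", "2018-07-02 17:45"),
    ("b2", "Flinders St", "2018-07-02 08:03"),
    ("b2", "Southern Cross", "2018-07-03 09:10"),
]

def link(trips, known_trips):
    """Return card tokens consistent with every trip the attacker knows."""
    candidates = None
    for stop, time in known_trips:
        matches = {card for card, s, t in trips if (s, t) == (stop, time)}
        candidates = matches if candidates is None else candidates & matches
    return candidates

# Observing the victim board twice is enough to isolate their token:
print(link(trips, [("Flinders St", "2018-07-02 08:03"),
                   ("Parliament", "2018-07-02 17:45")]))   # {'a1'}
```

One known trip still leaves two candidate tokens here; the second observation narrows the set to one, after which every trip tied to that token is exposed.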

Fortunately, there are ways to mitigate such risks with the appropriate use of technology. Furthermore, to verify the quality of a system's protections, companies can conduct rigorous tests of its cybersecurity and privacy guarantees. Such tests are typically done by experts in collaboration with the organization controlling the data. Researchers may sometimes resort to performing tests without the knowledge or consent of the organization, nevertheless acting in good faith, with the public interest in mind.

When such tests uncover data protection or security weaknesses, the flaws may not always be promptly addressed. Even worse, under the new bill, software vendors or system owners might be tempted to initiate legal action against security and privacy researchers, hampering research altogether. When research becomes prohibited, the personal risk calculus changes: Faced with fines or even prison, who would dare partake in such a socially useful activity?

Today, companies and governments increasingly recognize the need for independent testing of security and privacy protections, and they offer ways for honest individuals to report risks. I raised similar concerns in 2016, when the UK's Department for Digital, Culture, Media & Sport intended to ban re-identification. Fortunately, by introducing special exceptions, the final law acknowledged the need to protect researchers working with the public interest in mind.