Last week, the New Zealand Parliament passed the Harmful Digital Communications (HDC) Act. The act was conceived over three years ago as an initiative to address online bullying among young New Zealanders. Since then, prompted by subsequent highly visible online scandals in the country, its scope widened dramatically. The final act takes on nothing less than all content on the Internet that might be deemed harmful.

But in its eagerness to prevent "harmful" communications from spreading across the Net, and by providing streamlined, non-judicial methods to remove that content, the HDC Act may end up exacerbating the very harassment it was originally intended to mitigate. Many harassers want to chase their victims offline; they use their voices to drown out and silence the online voices of others. And through its adoption of some of the worst aspects of United States Internet law, the HDC Act provides a new tool for those online harassers to enact that wish.

The HDC Act has five key elements:

It establishes a set of guiding Communication Principles under which the rest of the Act operates.

It creates a government-approved agency to help Internet users who believe themselves harmed by digital communications.

It creates a new set of court orders that New Zealand's District Courts can serve against Internet hosts or authors, on referral from the Approved Agency.

It constructs a new set of civil and criminal offenses for creating or propagating "harmful digital communications".

And in its Section 20, it introduces a new 48-hour content takedown process, loosely based on the United States' DMCA takedown regime, whereby individuals can demand online hosting providers remove content they believe is harmful.

The Act also amends some existing laws to clarify their applicability to digital communications, and slightly extends their reach.

Some of the Act's unintended consequences are fixable through reasonable administration of the law, and through secondary legislation that narrows its scope. The bulk of the Act is enforced under judicial review, which will hopefully give sufficient weight to its stipulation that both administrators and courts "act consistently with the rights and freedoms contained in the New Zealand Bill of Rights Act 1990".

The human rights perspective offered by the Bill of Rights Act is badly needed. The HDC Act's Communication Principles illustrate the law's curious view of what legitimate online content should be. Among other things, the Principles state that digital communications should not: "disclose sensitive personal facts about an individual"; "be grossly offensive to a reasonable person in the position of the affected individual"; "make a false allegation"; or "contain a matter that is published in breach of confidence."

In focusing on "harmful" communications, the Act consistently neglects all other forms of communication that might seem less than ideal, but are nonetheless protected speech. Much of human expression contravenes one or more of the Principles. From private disagreements to a viral video of police brutality, a whistleblower's leak, or a heartfelt political call to arms: if any of that expression happens to occur online, the HDC Act applies. These Principles demonstrate that New Zealand now holds digital communications to a far higher and narrower standard than any form of offline expression.

New Zealand's Bill of Rights limitation applies only to the courts and administrators of the HDC Act. In Section 20, New Zealand's lawmakers have created a novel regime that, like its DMCA 512 inspiration, involves no court or regulatory oversight. They have imported an already broken process, and rendered it even more inimical to free expression.

Section 20 of the HDC: Like the DMCA, but Even Worse

First, a brief recap on how the United States' DMCA S.512 takedown regime is intended to operate:

Intermediaries, such as Google, Facebook or your local website hosting company, are, under US law, broadly protected against lawsuits over their users' uploaded content.

Their protection against intellectual property claims is, however, conditional on their complying with the DMCA's special copyright "safe harbor" rules.

Part of those safe harbor rules kicks in when an intermediary receives a request from a copyright owner to take down an allegedly infringing work uploaded by one of its users.

If the intermediary swiftly takes down the content, it is automatically protected from being sued by the copyright owner. The intermediary is also protected from being sued for damages by the uploader over the sudden deletion of their content.

The DMCA takedown process is an attempt to create a regime where copyright law can be enforced without the courts, where copyright holders and Internet authors have balanced rights, and where hosts like Google or your ISP are freed from the responsibility of determining the legal accuracy of a takedown request. If intermediaries remove content as soon as a copyright holder complains, they're generally safe from legal harm.

In practice, however, the DMCA 512 regime has proven to be painfully lopsided in its incentives. The letter of the law says that takedown requests must be made in good faith, under penalty of perjury. But even with these restrictions in the law, inaccurate takedowns are rife, and are sent with no legal consequences.

It also transpires that, in the real world, intermediaries are far more worried about being sued by rich and powerful copyright holders like music labels and movie studios than they are about legal action from their own users. This, together with other problems (including users not knowing about their right to file a counter-notice to restore content), means that the DMCA is frequently misused to remove content that is perfectly legitimate. EFF has documented the chilling effects of the DMCA regime for over fifteen years.

Section 20 of New Zealand's HDC attempts to map and expand this already flawed system onto a regime for the speedy takedown of all "harmful" content online. In the HDC model, complainants who have spotted content that breaches the Communication Principles and has "caused harm" can send a takedown order to the intermediary. The intermediary sends a notice of this order to the author of the allegedly "harmful" content. If the author challenges the original takedown within 48 hours, the content stays up; otherwise it is removed. If the intermediary follows these rules, they remain protected from being sued themselves over the content. In effect, it's a DMCA-like safe harbor, but for all potentially "harmful" content.
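The flow described above has a telling default: unless the author objects in time, the content comes down. A minimal sketch in Python illustrates the decision logic (the function name, timestamps, and simplified rules are our own hypothetical rendering of the process, not anything drawn from the Act's text):

```python
from datetime import datetime, timedelta

# Section 20 gives the author 48 hours to challenge a forwarded takedown.
COUNTER_NOTICE_WINDOW = timedelta(hours=48)

def section_20_outcome(notice_sent_at, author_response_at):
    """Decide the fate of hosted content under the simplified Section 20
    flow: the host forwards the complaint to the author, who has 48 hours
    to object. author_response_at is None if the author never responds.
    Returns "stays up" or "taken down"."""
    if (author_response_at is not None
            and author_response_at - notice_sent_at <= COUNTER_NOTICE_WINDOW):
        return "stays up"    # author objected within the window
    return "taken down"      # silence, or a late reply, removes the content

sent = datetime(2015, 7, 1, 9, 0)
print(section_20_outcome(sent, None))                        # taken down
print(section_20_outcome(sent, sent + timedelta(hours=47)))  # stays up
print(section_20_outcome(sent, sent + timedelta(hours=49)))  # taken down
```

Note where the burden falls: the default outcome is removal, reached whenever the author fails to both receive the forwarded notice and act on it within 48 hours.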

Sound reasonable? The drafters of the HDC Act clearly thought so. But in uncomprehendingly mimicking the DMCA 512 regime, they omitted what few safeguards the original has against unfettered abuse.

Unlike the DMCA, there is no penalty for misrepresentation in the HDC Act. Section 20 complainants can declare any content "harmful", and will suffer even fewer repercussions for misrepresenting their belief than the copyright industries face for illegitimate copyright takedowns in the United States.

Unlike the DMCA, where (in theory) only the rightsholder can make a complaint, under Section 20 anyone can send a note to your New Zealand host and request that your content be removed. And, under the current letter of the HDC Act, each of those millions of potential complainants can demand a removal as many times as they want.

The HDC Act magnifies the well-documented flaws of the DMCA into the perfect heckler's veto. This is an ideal tool for a coordinated harassing mob—or simply a large crowd that disapproves of a particular piece of unpopular, but perfectly legitimate speech. Moreover, if the original user misses the 48-hour window to respond to a takedown order, then they will have no legal avenue to restore their deleted work.

It's worth considering exactly who might be most likely to miss that 48-hour deadline. Certainly, a vulnerable user facing hundreds or thousands of coordinated, malicious complaints might find herself quickly overwhelmed. Another type of user who might be reluctant to serve a Section 20 counter-notice is one who chooses not to share much personally identifying information with her hosts, and is therefore harder for the intermediary to contact. Vulnerable classes of Internet speakers often do this to limit the risk of being doxxed: rightly suspicious that a third party like a hosting intermediary might accidentally reveal their personal details to attackers, they use pseudonymous accounts or otherwise hide their real identities. These users may not be reading the throwaway email or other contact details they provided, if they provided any at all.

Section 20 has another flaw for this particular class of vulnerable user. The HDC Act's takedown orders and counter-notices require personal information from both the complainant and the original publisher. A user attempting to keep her content up must hand over her "[name] and a telephone number, a physical address, and an email address" to the ISP. The user must also specify whether that data should be handed over to the complainant. A user who checks the wrong box by mistake may find that her attempt to restore her account results in her personal details being handed over to the very person trying to delete her works from the Internet.

Section 20 is clearly an attempt to create a streamlined way for injured users to speedily remove content that they find seriously emotionally distressing from the Internet, without having to go through the slower process of judicial oversight. But given the precedent set by the DMCA, New Zealand's lawmakers should have anticipated that such a short circuit can and will be abused. Section 20's current design means that the most likely scenario for abuse is harassers either misusing it to erase their targets from the Net, or extracting from them personally identifying information that will be used against them. It is almost engineered to be a tool for angry mobs, or doxxing attackers.

A Broken Law, Already in Place

Despite warnings from the Internet community, the HDC Act sailed through New Zealand’s Parliament with an overwhelming majority, opposed only by a small number of Green MPs and the single MP for New Zealand's free market ACT Party.

The Act received Royal Assent on Monday. While much of the HDC Act does not come into force for up to two years, Section 20 is already in effect. New Zealand-based intermediaries, or foreign companies with offices in New Zealand, must already comply with takedown requests, or risk losing their liability protections.

Like the DMCA's, many of the HDC Act's negative consequences may turn out to be hard to track: an invisible set of victims, the subjects of takedowns they do not know about or are too intimidated to counter-notice against; speech that is never published because intermediaries are too cautious to host it, or because potential speakers fear liability under the HDC Act.

And that, unfortunately, is the optimistic scenario. Instead, New Zealand may be entering a far bumpier ride, with the HDC Act being turned into a weapon for those it was originally intended to thwart: online bullies, who want to intimidate and silence others, and are perfectly happy to abuse a well-meaning law to do so.