Twitter made an intriguing move earlier this week when it announced it would open its coveted blue checkmark to the public with an open application process. Although the announcement was cryptic, the company clearly wants to increase the number of verified users it has. As of Tuesday, there were only 187,000 verified accounts among Twitter’s 310 million monthly active users. So while the short-term effects appear minimal, a process that lets users move from the anonymous many to the vaunted verified minority could influence Twitter's ever-present harassment problem.

Verified users get more tools for filtering their notifications, and the requirement to use a real name and a photo restricts the anonymity many trolls rely on. Yet how willing is Twitter to wield verification as a weapon against user behavior? The company won't say. Its announcement remains purposefully vague about why it wants more blue checkmarks on the site, and what that even means for the service as a whole.

How willing is Twitter to wield verification as a weapon against user behavior?

The larger question is how willing Twitter is to wield any defense against abuse and harassment. "We know many people believe we have not done enough to curb this type of behavior on Twitter," the company admitted earlier this week. "We agree." These solemn reflections on its own inadequacy have grown more frequent, but the problem has stayed the same. Twitter users can still be unpleasant, nasty, hateful, and racist, as often as they like and toward anyone they choose, with few repercussions or roadblocks.

The truth is that any real solution — verifying more users, implementing more liberal bans, or developing stronger anti-harassment tools — risks fundamentally changing the nature of Twitter. The company is not openly dedicated to any of those strategies, but has instead taken small half-measures in every direction. As frequent Twitter critic and game developer Zoë Quinn has put it, these are "bandaids on bullet holes," and the company steadfastly refuses to admit the full scope of the problem. This isn’t about what Twitter should become, but rather that it should decide to become something — anything, really — other than what it is today.

It has to apply to chronic abusers that don't make headlines off abuse. It has to apply to everyone or it's a bandaid on a bullet hole. — Zoë Quinn (@UnburntWitch) July 20, 2016

Verification offers the clearest path forward. Since 2009, when co-founder Biz Stone announced a beta version of verified accounts, Twitter has had a sort of elite 1 percent. The introduction of the blue checkmark created a tiny minority of users who enjoy both the status symbol of being declared a Very Important Person and preferential treatment from the company's product and engineering divisions, which devise special features for those with large followings. Verified users are often celebrities, politicians, and well-known names in the worlds of media, sports, and business. They get superior iconography, and can have their voices amplified as a result.

Of course, with Twitter, an expanded spotlight usually means increased abuse, especially for women and minorities. One of the service’s core verified-only features, first introduced three years ago, is a filter system to show only interactions from other verified users. It keeps away the unwanted ramblings and harassment of random, often anonymous strangers, but it erases the voice of anyone not deemed worthy enough for a checkmark.

Twitter is blunting its effectiveness from the onset

This routine appears ready to change with the verification application process. Yet Twitter itself is blunting its effectiveness from the onset. The company says its criteria for who gets verified are staying the same. In vague terms, that means "an account may be verified if it is determined to be of public interest." That might as well say, "If we feel like it," for all it’s worth to everyday Twitter users. Twitter did not respond to multiple requests for comment regarding how its verification application process could be used to curb harassment.

One of the worst errors Twitter made was making a few anti-abuse features only available to verified users; that seems to be changing. — Anil Dash (@anildash) July 19, 2016

If Twitter is serious about fixing its issues, it could change its verification criteria to include anyone willing to submit a government-issued ID and maintain a real name, as New York Magazine's Madison Malone Kircher points out. The company can’t possibly ban every single account reported for abusive behavior, and even its most vile offenders deserving of shutdowns can always simply create new accounts. The service's block system is also easily circumvented, either by creating another handle or by logging out to see the tweets of those who have you blocked.

When comedian and Ghostbusters star Leslie Jones began openly complaining about racist and misogynistic abuse on Tuesday, it took the situation going nuclear to gain the notice of Twitter CEO Jack Dorsey, who personally messaged her. A day later, one of the central instigators in the conflict, Breitbart columnist and conservative pundit Milo Yiannopoulos, was permanently banned. These are not sustainable responses to harassment — Yiannopoulos’ ban was thought to be long overdue — and they are far from reasonable safeguards. They’re stopgaps. Twitter has said as much with similar responses in the past, with former CEO Dick Costolo telling his employees last year "we suck at dealing with abuse."

The problem is not Yiannopoulos, but his followers

Because the problem is not just Yiannopoulos, but the hundreds of thousands of followers at his beck and call and all the other anonymous accounts who use Twitter as a platform for abuse and bullying. There is little to no friction stopping people from organizing harassment campaigns and carrying them out on Twitter, with hashtags or at the direction of influential users like Yiannopoulos. These terror campaigns are no secret, just as it’s no surprise how ill-equipped Twitter is at handling them.

Of course, spreading the blue checkmark would mean altering the core DNA of Twitter, a space that has sanctioned anonymity since its founding. There are many valid reasons to either not use your real name or photo on social media, or to feel the need to use an altogether imaginary identity. Parody accounts come to mind, as do people and organizations who operate a Twitter account as a means of political expression and activism.

Identity policies also provide one of the starker contrasts between Twitter and Facebook. While controversial for its failures to take into account ethnic minority names and the LGBTQ community, Facebook’s strict adherence to real names seems to make it easier for that social network to police its worst trolls. And because it is not inherently public, but mostly a destination to converse and share with family and friends, Facebook has grown into a relatively civil social space. Twitter has no such aspirations as it stands today. Moving toward such an identity framework would mean rethinking how we all use Twitter and converse with others in an open forum.

The conflict is central both to Twitter's future and the nature of discourse on the platform

So Twitter appears stuck between implementing some safeguards that could change the free form nature of its service and continuing to passively enable the harassment of its users. It’s clear the company gets immense value out of verified accounts, especially from celebrities like Jones with large followings. But the balancing act of operating a platform of free expression (including the nasty borderline hate speech of anonymous users) and maintaining sensible and effective anti-harassment policies has always been difficult. The conflict is central to both Twitter’s future and the nature of discourse on the platform.

If Twitter fails to define a coherent strategy, it risks letting abuse and harassment become central tenets of its network. An even more unsettling thought is that these qualities can never be excised from Twitter. The service may always contain them because it reasonably reflects how human beings feel like conversing on the internet when there are no consequences.

As it stands today, Twitter has neither new consequences in mind nor a clear idea of what it wants to become. It’s the very definition of status quo.