Twitter has verified the accounts of some of the most publicly outspoken students from Marjory Stoneman Douglas High School in Parkland, Fla., where a shooter killed 17 people last week.

Survivors have taken to social media with vocal calls for gun control, and have been met with abuse and the spread of misinformation about them online. Twitter's move to verify the students is another sign of how large the current debate over gun regulation has become, and of how the company hopes to lend credibility to users at the center of a discussion riddled with misinformation.

Much of the discussion about what to do in the aftermath of the mass shooting has played out on Twitter and Facebook. A number of the students who have taken to the social networks to speak out against gun violence and push lawmakers to enact stricter gun regulations have been met with resistance from conservative groups trying to discredit them.

Conspiracy theories have begun circulating widely, claiming that some of the most visible student activists are not actually students but “crisis actors” paid to carry a message for liberals and other gun control advocates. (They are not.)

Twitter, for its part, says it is actively working to stop users from harassing Parkland students, and in the past 24 hours has begun adding blue verification checkmarks to some of their accounts. A company spokesperson shared the following statement with Recode:

We are actively working on reports of targeted abuse and harassment of a number of survivors of the tragic mass shooting in Parkland, Florida. Such behavior goes against everything we stand for at Twitter, and we are taking action on any content that violates our terms of service. We are also using our anti-spam and anti-abuse tools to weed out malicious automation around these individuals and the topics they are raising. We have also verified a number of survivors’ Twitter accounts.

A Twitter spokesperson clarified that while the company’s terms of service do not explicitly prohibit anyone from sharing false information (or a conspiracy theory), the company is looking closely at accounts sharing this material to ensure that they aren’t violating any of Twitter’s other policies. For example, is it being shared in an attempt to be abusive? Is it being shared by a bot?

Twitter’s decision to verify Parkland students underscores how significant this push for gun regulation has become. Twitter’s public verification program, which lets users apply for a blue verification badge, has been on hold since last fall. That means Twitter proactively verified the students’ accounts on its own.

It’s easy to see why. Some students who survived the shooting have been on TV almost daily since. One of the most outspoken students, Emma González, now has more than 255,000 followers.

The meaning of Twitter’s blue verification checkmark has evolved over the years. The company was criticized last fall for verifying members of the so-called alt-right, including the white supremacist who helped organize the Unite the Right rally in Charlottesville, Va., back in August. Twitter’s verification has historically been seen as a stamp of approval rather than a simple identity verification, which is why people got upset.

You can't stop us you never will and you never can we have the strength and grit to last far longer than these politicians will that's for damn sure #midtermsAreComing #NeverAgain — David Hogg (@davidhogg111) February 21, 2018

After the backlash, Twitter updated its verification guidelines to more clearly explain that a checkmark is intended to properly identify “accounts of public interest” and to ensure that people aren’t following imposters. It’s not necessarily a company endorsement.

If you need proof that I’ve been this annoying in the political realm since day one, ask @davidrubinoff, the man who taught @charlie_mirsky and I in third grade. — Cameron Kasky (@cameron_kasky) February 21, 2018

Twitter isn’t the only company dealing with abuse and misinformation following the Parkland shooting. YouTube has removed conspiracy theory videos that made it to the top of the site’s trending section. Facebook, too, appears to be removing user accounts that are spreading misinformation. A Facebook company spokesperson did not immediately reply to a request for comment.
