If there’s one thing Twitter is perfectly suited for, it’s the calling out of bad actors. When someone runs across a scammer, harasser, or troll, they can quickly spread the word. (Sometimes this goes too far, but that’s another entire book.)

But what can we do when bad actors co-opt Twitter’s anti-harassment tools to protect themselves? One woman found herself banned from Twitter Tuesday after calling attention to a company that appeared to be paying for positive Yelp reviews. Twitter said her tweet broke rules against posting others’ personal information.

Rochelle LaPlante, an advocate for digital workers, spotted a landscaping company seemingly trying to buy fake Yelp reviews on Amazon’s online labor platform, Mechanical Turk. The MTurk job posting outlined compensation of 10 cents per review from existing Yelp accounts, written “as if you had hired this business and were happy with their service.” That would be a violation of Yelp’s review guidelines, which instruct businesses, “Don’t ask for reviews and don’t offer to pay for them either.”

But shortly after she raised the alarm, LaPlante found her Twitter account suspended. In a short tweetstorm, she wrote that she suspected someone associated with the business had reported her to Twitter for posting private information.

Twitter’s system let her log back into her account, but only after she agreed to delete the allegedly offending tweet.

When other Twitter users—most notably tech and society commentator Zeynep Tufekci, who has nearly 200,000 followers—called attention to the ridiculous situation, Twitter Support sent LaPlante a follow-up email acknowledging the mistake. They also generously allowed that “if you would like to repost the content of the deleted tweet, we would not consider it a violation of our rules.”

And now I get this email from Twitter, saying they made a mistake… pic.twitter.com/ZvCFodKFCH — Rochelle (@Rochelle) March 1, 2016

And repost she did:

Yelp is now looking into her allegations against the landscaping company.

This particular case ended well, but it’s a symptom of a larger problem afflicting Twitter. Much has been written about tools the company could build for its users to report and prevent harassment, but there’s a related challenge that sometimes slips under the radar: The tools that already exist are easily abused, and sometimes serve the harassers rather than their victims.

As Tufekci pointed out in an incisive series of tweets, one-size-fits-all solutions to problems like doxing and sea lioning just don’t work. Without a system that takes context into account, it’s just as easy for the powerful to use the “report” button to silence critics as it is for victims to report legitimate abuse. And, as quickly as bots are advancing, taking context into account still requires the labor of qualified and conscientious humans. That’s expensive, and it doesn’t scale to something the size of Twitter.

“Not only is there no easy solution (especially from suspending users),” she wrote, “there is no easy, cheap & scalable solution. Can’t avoid judgment.”

Illustration by Max Fleishman