In the latest “keyword targeting gone awry” experiment, the BBC was able to use terms like “neo-Nazi” and “white supremacist” in a Twitter ad campaign, despite the social media platform’s policy that advertisers “may not select keywords that target sensitive categories.”

According to Twitter’s policies, those sensitive categories include genetic or biometric data, health, commission of a crime, sex life, religious affiliation or beliefs, and racial or ethnic origin, among others.

The BBC ran an ad and says it was able to target users who were interested in the words “white supremacist,” “transphobic,” and “anti-gay,” among others. It wasn’t clear whether the ads reached users who were merely interested in those terms (for research, say) or people who identified as such; the BBC noted only that “Twitter allows ads to be directed at users who have posted about or searched for specific topics.”

The ad, which cost £3.84 (about $5), was live for only a couple of hours, the BBC reports; during that time, 37 people saw it and two clicked on it. A second version of the ad was targeted at users aged 13 to 24 using “anorexia,” “anorexic,” “bulimia,” and “bulimic” as keywords. It was seen by 255 users and clicked 14 times before the BBC took it down. According to Twitter’s own tool, it had the potential to reach 20,000 people.

In an emailed statement to The Verge, Twitter suggested that the words tested by the BBC may not have been on its list of sensitive terms:

Twitter has specific policies related to keyword targeting, which exist to protect the public conversation. Preventative measures include banning certain sensitive or discriminatory terms, which we update on a continuous basis. In this instance, some of these terms were permitted for targeting purposes. This was an error. We’re very sorry this happened and as soon as we were made aware of the issue, we rectified it.

The company says it continues to enforce its ad policies, “including restricting the promotion of content in a wide range of areas, including inappropriate content targeting minors.”

Ad targeting on social media platforms has come under increased scrutiny, raising questions about its potential for discrimination. ProPublica found that it was possible to run ads on Facebook that essentially discriminated against groups protected by federal law. In 2018, The Guardian found that Facebook ads could be used to target users based on sensitive topics, a practice that violates privacy laws since implemented in Europe.