Anyone who uses Google has seen internet ads pop up that bear an uncanny similarity to something they've been searching for. But that ad-targeting technology appears to have a dark side. Google, the largest advertising platform in the world, has allowed advertisers to target racist and bigoted keywords, according to a new report from BuzzFeed News.

And Twitter appears to have a similar problem with its ad campaigns, according to a report in The Daily Beast, also published Friday. Both reports follow a ProPublica investigation that exposed the ability to target racist and anti-Semitic categories in Facebook's ad platform.

The BuzzFeed reporters discovered that Google suggests problematic keywords when certain phrases are typed in the company's ad-buying tool.

When reporters typed "White people ruin ..." the tool suggested targeting internet users searching "black people ruin everything," "blacks destroy everything" and "black people ruin neighborhoods." When they typed "Why do Jews ruin everything," Google suggested running ads next to searches for "are jews evil," "jews run the world" and "jews own everything."

After BuzzFeed News notified Google about the problematic keywords, the company removed them from its advertising tool.

"This violates our policies against derogatory speech and we have removed it," a Google spokesperson told the website.

Sridhar Ramaswamy, Google senior vice president of ads, followed up with a statement saying:

"Our goal is to prevent our keyword suggestions tool from making offensive suggestions, and to stop any offensive ads appearing. We have language that informs advertisers when their ads are offensive and therefore rejected. In this instance, ads didn't run against the vast majority of these keywords, but we didn't catch all these offensive suggestions. That's not good enough and we're not making excuses. We've already turned off these suggestions, and any ads that made it through, and will work harder to stop this from happening again."

Scott Spencer, Google's director of product management in advertising, said the company took down 1.7 billion ads in 2016 that violated its advertising policies. "If you spent one second taking down each of those bad ads, it'd take you more than 50 years to finish. But our technology is built to work much faster," Spencer wrote in a statement earlier this year.

When it comes to ads on Twitter, The Daily Beast uncovered similar pitfalls, with Twitter allowing advertisers to target millions of users who are drawn to terms like "wetback," "Nazi" and the n-word.

The report says Twitter offers a "follower look-alike" targeting feature, which allows customers to type in keywords like "Hitler" and "kike" in order to find users with similar interests. Some of those users included handles like "@AdolfHitler_" and "@SecretHitler."

Twitter did not immediately respond to The Daily Beast when asked why it allows customers to target these audiences.

Twitter has repeatedly come under fire in the past as a hotbed for online harassment and hate speech. It has rolled out a number of changes meant to reduce the problem, including suspending offenders' accounts and giving users more tools to block trolls. As a platform designed for rapid-fire commentary, its own CEO once admitted, "We suck at dealing with abuse and trolls."

The reports on Twitter and Google came just a day after ProPublica reported that Facebook allowed advertisers to target people who described themselves as "jew haters" or those who searched topics like "how to burn jews" and the history of "why jews ruin the world."

ProPublica contacted Facebook about what it found and the company removed the anti-Semitic categories. Facebook said it would "explore ways to fix the problem."

In a statement to CBS News, Facebook explained that the ad categories were created automatically based off information users fill out in their Facebook profiles.

"We don't allow hate speech on Facebook ... and we prohibit advertisers from discriminating against people based on religion and other attributes," Facebook's product management director Rob Leathern said. "We know we have more work to do, so we're also building new guardrails in our product and review processes to prevent other issues like this from happening in the future."