A beauty contest turned ugly quickly when the artificial intelligence programs acting as judges showed racial bias by picking mostly white winners. The incident is the latest A.I. venture to go wrong: last year Microsoft's artificial intelligence chatbot "Tay," designed to mimic a teenager, devolved into a Hitler apologist soon after being unleashed on Twitter.

For the contest, created by Beauty.AI, roughly 6,000 people from more than 100 countries submitted photos. Beauty.AI had five A.I. algorithms judge the entries on aesthetic qualities such as wrinkles and facial symmetry. The algorithms were trained on a database of photos to learn how to judge beauty. Despite the ostensibly race-neutral criteria, most of the 44 winners the algorithms chose were white.

After seeing the results, Beauty.AI's chief science officer, Alex Zhavoronkov, attributed the skew to a lack of people of color in the training data. "If you have not that many people of color within the dataset, then you might actually have biased results," Zhavoronkov told The Guardian. "When you're training an algorithm to recognize certain patterns…you might not have enough data, or the data might be biased." Zhavoronkov said the team would try to correct the problem before the next contest, planned for this fall.
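The dynamic Zhavoronkov describes can be illustrated with a toy sketch (entirely hypothetical; Beauty.AI has not published its models). Here a "beauty score" is simply closeness to the average of the training examples. When one group dominates the training set, the learned ideal sits near that group, so its members win even when the judging criterion itself never mentions race:

```python
import random

random.seed(0)

def make_faces(group_mean, n):
    # Each "face" is a 2-feature vector; the group mean stands in for
    # demographically correlated image statistics (a deliberate simplification).
    return [[random.gauss(m, 1.0) for m in group_mean] for _ in range(n)]

# Two hypothetical groups with different average feature values.
group_a = make_faces([0.0, 0.0], 180)  # over-represented in training data
group_b = make_faces([3.0, 3.0], 20)   # under-represented

train = group_a + group_b

# The "learned" beauty prototype: the mean of the training set.
# With imbalanced data, the prototype lands close to group A.
prototype = [sum(f[i] for f in train) / len(train) for i in range(2)]

def score(face):
    # Higher score = closer to the learned prototype.
    # The formula is race-neutral; the bias lives in the prototype.
    return -sum((face[i] - prototype[i]) ** 2 for i in range(2))

# Judge a perfectly balanced pool of new candidates.
pool = [("A", f) for f in make_faces([0.0, 0.0], 50)] + \
       [("B", f) for f in make_faces([3.0, 3.0], 50)]
winners = sorted(pool, key=lambda gf: score(gf[1]), reverse=True)[:10]
share_a = sum(1 for g, _ in winners if g == "A") / len(winners)
print(f"Group A share of winners: {share_a:.0%}")
```

Running the sketch shows group A taking nearly all the winner slots despite the candidate pool being evenly split, which mirrors the failure mode Zhavoronkov describes: the criterion is neutral, but the data defining "typical" is not.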

Google suffered a similar algorithmic failure last year when a Google Photos user reported that the app had labeled a photo of him and his friend, both African-American, as "gorillas."