Last month, Google rolled out an updated version of its Photos app, divorced from Google+ and bolstered with a few new features, most notably the ability to automatically tag photos and generate albums based on objects it identifies, such as "food" and "landscapes."

https://twitter.com/jackyalcine/status/615331869266157568/photo/1

Unfortunately, that object database not only includes wild animals but can also conflate them with humans. On Monday, an African-American man looked through his Google Photos collection and discovered an automatically generated album of himself and a black female friend labeled "gorillas."

The affected user, computer programmer Jacky Alciné, took to Twitter to post proof of the Google Photos error, which had singled out photos of the two people together, and no others, in a single mislabeled album, along with a question: "What kind of sample image data you collected that would result in this son?"

Alciné received an official response from Google's Chief Social Architect Yonatan Zunger within an hour and a half, and it didn't mince words: "Holy fuck. G+ CA here. No, this is not how you determine someone's target market. This is 100 percent not okay." Zunger requested deeper access to Alciné's account, then promised a fix that would roll out later that evening.

On Tuesday, Zunger confirmed via Twitter that the "gorilla" label had been removed from the app's database, but he admitted the team still has work to do on cases where the app fails to recognize a human face, flatly stating that "lots still [needs] to be done" in terms of facial recognition; he specifically called out "dark-skinned faces" in that assessment. He also said the app once had a major bug in which people of all races were automatically tagged as dogs.

The conversation included multiple apologies from Zunger, along with a strong "thank you!" statement from Alciné. When asked to comment, a Google spokesperson offered a statement to Ars: "We’re appalled and genuinely sorry that this happened. We are taking immediate action to prevent this type of result from appearing. There is still clearly a lot of work to do with automatic image labeling, and we’re looking at how we can prevent these types of mistakes from happening in the future."

We have reached out to Alciné with questions about his take on the story, and we will update this report if we get a response.