Implicit in any technical process or system are the biases of those writing the code that governs it. I’m not throwing shade at developers in saying that, but rather highlighting that we all carry implicit biases, whether we know it or not, and those biases get baked into the software solutions we develop and deliver. We’ve written about it before, but I think it bears repeating because there are some pretty fascinating solutions on deck aimed at combating some common but probably unrecognized variants of this. Namely, natural language processing (NLP), the interface of the future, is unnecessarily confined to binary voice characteristics. Enter the genderless voice AI ‘Q’.

Why do our vocal assistants’ voices matter?

Quoting a great paragraph from Mark Wilson in Fast Company:

“Voice assistants like Apple’s Siri and Amazon’s Alexa are women rather than men. You can change this in the settings, and choose a male speaker, of course, but the fact that the technology industry has chosen a woman to, by default, be our always-on-demand, personal assistant of choice, speaks volumes about our assumptions as a society: Women are expected to carry the psychic burden of schedules, birthdays, and phone numbers; they are the more caregiving sex, they should nurture and serve. Besides, who wants to ask a man for directions? He’ll never pull over at a gas station if he’s lost!”

The gender of our assistants matters a great deal because it sends a value signal at a societal level. It also reinforces whatever gender biases we each hold personally. Beyond that, and looking to the future, natural language processing is the interface of tomorrow. The better NLP AI gets, the more we’ll use it to perform daily functions. Pulling out a phone and typing a question into Google isn’t necessarily natural or efficient; it’s just what we’ve gotten used to for soliciting information or winning arguments with our friends. It’d be far preferable to just say, “Hey Siri, when was the Magna Carta drafted?” and have your AirPods hear you, query the web, and return an answer directly into your ears without you pulling out your phone or drafting your opposable thumbs into service.

So if voice assistants are the interfaces of tomorrow, the manner in which they’re presented does indeed matter a great deal.

Genderless voice AI — what does it sound like?

Enter ‘Q’, the genderless voice solution. According to its website, it was developed through a “close collaboration between Copenhagen Pride, Virtue, Equal AI, Koalition Interactive & thirtysoundsgood.” Here’s how the team developed a genderless-sounding voice AI, according to Fast Company:

“Creators Emil Asmussen and Ryan Sherman from Virtue Nordic sampled several real voices from non-binary people, combined them digitally, and created one master voice that cruises between 145 Hz and 175 Hz, right in a sweet spot between male- and female-normative vocal ranges. To the developers, it was important that Q wasn’t just designed as non-binary, but actually perceived by users as non-binary, too. So through development, the voice was tested on more than 4,600 people identifying as non-binary from Denmark, the U.K., and Venezuela, who rated the voice on a scale of 1 to 5, with 1 being “male” and 5 being “female.” They kept tuning the voice with more feedback until it was regularly rated as being gender-neutral.”
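The pitch targeting described above can be sketched in code: estimate a voice’s fundamental frequency and check whether it lands in the reported 145–175 Hz band between male- and female-normative ranges. This is a rough illustrative sketch using autocorrelation pitch detection; the function names and the approach are my own assumptions, not how Q was actually built:

```python
import numpy as np

# The 145-175 Hz band Q's creators reportedly targeted (per Fast Company).
NEUTRAL_BAND = (145.0, 175.0)

def estimate_f0(signal: np.ndarray, sample_rate: int) -> float:
    """Estimate fundamental frequency via autocorrelation (a crude sketch)."""
    signal = signal - signal.mean()
    corr = np.correlate(signal, signal, mode="full")
    corr = corr[len(corr) // 2:]  # keep non-negative lags only
    # Search only lags corresponding to plausible speech pitch (60-400 Hz).
    min_lag = int(sample_rate / 400)
    max_lag = int(sample_rate / 60)
    peak_lag = min_lag + int(np.argmax(corr[min_lag:max_lag]))
    return sample_rate / peak_lag

def is_gender_neutral(f0: float) -> bool:
    """True if the pitch falls inside the 145-175 Hz 'neutral' band."""
    lo, hi = NEUTRAL_BAND
    return lo <= f0 <= hi

# Usage: a synthetic 160 Hz tone should register inside the neutral band.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 160.0 * t)
f0 = estimate_f0(tone, sr)
print(round(f0), is_gender_neutral(f0))
```

Real pitch tracking on speech is messier than on a pure tone (voicing detection, octave errors), which is presumably why Q’s creators also validated perception with thousands of human raters rather than relying on frequency alone.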

Here’s Q’s introductory video:

To be fair, the companies that own the voice assistant realm aren’t necessarily sexist; they’re simply capitalists. As Leah Fessler, a former Quartz reporter…

…”has pointed out, the tech companies’ choices are driven by purely commercial motivations. “Women’s voices make more money,” she wrote in a story exploring how bots are trained to respond to sexual harassment. Indeed, research has shown that people find voices that are perceived as female “warm,” and that men and women both have a preference for women’s voices. This bias also turned up in Amazon and Microsoft’s market research, the Wall Street Journal reports.”

But many of those tests presented only gendered options to their focus groups, and the sample sizes may not have been large enough to prove the point. The hope is that, given the option and a large enough sample, enough users might choose Q as a preferable tonality, forcing Amazon, Google, and Apple to offer some version of it in the future.