Within two years there will be more voice assistants on the internet than there are people on the planet. Another, possibly more helpful, way of looking at this statistic is to say that there will still be only half a dozen assistants that matter: Apple’s Siri, Google’s Assistant, and Amazon’s Alexa in the west, along with their Chinese equivalents – but these will have billions of microphones at their disposal, listening patiently for sounds they can use. Voice is going to become the chief way that we make our wants known to computers, and when they respond, they will do so with female voices.

This detail may seem trivial, but it goes to the heart of the way in which the spread of digital technologies can amplify and extend social prejudice. The companies that program these assistants want them to be used, of course, and this requires making them appear helpful. That is especially necessary when their helpfulness is limited in the real world: although they are getting better at answering queries outside narrow and canned parameters, they could never plausibly be mistaken for a human being on the basis of their words alone.

There is solid psychological research to show that voices of lower pitch are perceived as more authoritative by both men and women, while higher-pitched voices are associated with submissive, helpful and caring characters. These impressions make the voice assistants more likely to be used and ultimately more profitable. In the late 1990s, BMW had to withdraw and reprogram the satnav system in one of its models because German drivers complained that they did not want to take instructions from a female voice. The automated call centres for brokerage firms in Japan give stock quotes in a female voice, but confirm transactions in a male one.

In these instances, the technology adapts to pre-existing stereotypes, and so helps to perpetuate them. This will become increasingly important as children learn from their own interactions with a voice assistant. But this gendering is not inevitable. In some markets, Britain among them, Google’s Assistant and Apple’s Siri offer a male voice, though Alexa is always female.

Male voices are also used by assistants in Arab countries, in many of which gender equality is not a mainstream ideal. A recent Unesco report also draws attention to the counterintuitive gendering of the field of computer technology. In Arab countries, and indeed in India, a far higher proportion of women study computer science than in the west. It is the countries where gender equality is most advanced that have the greatest imbalance of male over female students of computer science and, consequently, of software developers.

This was not always the case. Software was once a field in which women worked on equal terms with men. A woman, the computer scientist Grace Hopper, is said to have popularised the term “bug” – as necessary to computer science as zero is to mathematics. But within a generation, from about 1980, the profession became overwhelmingly male. These were also the years in which programming became associated with the acquisition of huge fortunes. Bill Gates may not have been a better programmer than Admiral Hopper, but he certainly ended up a whole lot richer. This imbalance is wrong, and it matters. As voice assistants move into bedrooms and kitchens and follow us around on our phones, conscious effort will be needed to eliminate prejudice that harms both men and women whether they’re building the software or using it.

• This article was amended on 25 June 2019. Siri does not only have a female voice, as an earlier version said. In addition, an earlier version implied that Grace Hopper coined the term “bug” to describe a computer glitch. That overlooked Thomas Edison’s use of the term in the 1800s to describe problems in his inventions; Hopper is said to have popularised “bug” in the 1940s, after the term was used in her logbook. This has been corrected.