Earlier this month, the internet got in a froth about Apple’s decision to drop the 3.5mm analogue audio jack from the new iPhone. Users took to Twitter to vent their outrage, while tech analysts, such as Paul Erickson at IHS Technology, suggested that the removal was money-driven: “It should be noted that wireless models are the highest revenue-generating products within the headphone market,” he told the Financial Times.

Further disapproval was directed at Apple’s replacement for wired earphones, the AirPod, essentially a wireless earphone and microphone – “like a tampon without a string” according to the Guardian – while the writers of US late-night talkshow Conan created a satirical Apple ad featuring the devices plopping from users’ ears to floor and being eaten by their pet dogs.

Yet, as Chris Saad, head of product at Uber, has pointed out in a post on Medium, Apple did more than launch some earbuds: “They launched a wireless microphone as well.” By which he means the day when we converse all day long with a virtual assistant similar to the one voiced by Scarlett Johansson in Her is drawing closer. “The next great mainstream interface is not going to be augmented reality or virtual reality. It’s going to be voice. It’s voice agents,” claims Saad.

You won’t be surprised to find out that, although the AirPods grabbed some headlines, Apple isn’t the first to enter what is becoming known as the “hearables” sector. In fact, in classic Apple style, it has built on or borrowed from products that have already been announced or are already on the market from tech giants such as Samsung and Sony and smaller start-ups such as Bragi and Doppler Labs.

So why all this sudden interest in our ears? “Until recently, the ears have been ignored by the tech industry,” says Darko Dragicevic of Bragi, the German creators of the Dash, “the world’s first hearable”. “The basic advantage over a smartphone or a watch is that your eyes and hands are free to act naturally.”

Bragi has recently announced a partnership with IBM through which it hopes to deliver the massive processing power and cognitive capacity of the Watson AI system via its devices. At the moment, it is exploring how these capabilities could be employed in the workplace. For example, maintenance workers could describe an issue, and Watson would recognise the problem and talk them through the solution – without their having to refer to manuals or computers, keeping their hands free for the repair. Similarly, doctors could get help with recognising rare conditions, and their conversations with patients would be recorded and saved to the cloud for their records.

If this sounds a little pedestrian compared to the all-knowing and all-feeling Samantha, that’s because it is. Creating a general purpose AI that could respond and interact seamlessly and faultlessly with complex humans and all their foibles is, as anyone who has dealt with personal assistants such as Siri or Cortana could tell you, a number of years down the line. However, creating applications with narrower scope is something that Bragi and other companies are working on.

Another early entrant into this area is Doppler Labs, whose latest device, the Here One – “Everything else is just a headphone” – is available to pre-order now ahead of its November launch. Aside from the usual functions of earphones and the ability to interact with Siri and Google Now, the Here One will allow users to adjust sound much as you might adjust the contrast or balance and apply filters to a digital image. Doppler CEO Noah Kraft calls this “computational hearing. We’re focused on creating a system that uses machine learning to hear better than your ears”. For example, Kraft claims Here One will enable users to amplify speech and reduce unwanted noise such as that from a jet engine. “We have created a system that’s listening to your worlds for you, that’s curating in real time based on your preferences.” Kraft hopes this will encourage users to wear their in-ear devices for longer. He’s not a fan of the term “hearables”, preferring “in-ear computer” – “because a computer is something that becomes indispensable and part of your life and that is what we are trying to build”.

One potential game-changer in this field would be real-time translation services – both Doppler and Bragi are working to add Babel fish-like capabilities to their devices. Latency is the stumbling block; as Kraft points out, you don’t want your exchange to “resemble a badly dubbed foreign movie”.

Closer to home, Kraft sketches out a near-future scenario in which the device would recognise from GPS and calendar data that a user was entering a pub for a date, and automatically apply filters to amplify the human voice and discreetly send football results straight to your ear. These types of capabilities will alarm some because of concerns about data privacy, but Kraft is eager to point out that “the last thing we will allow ourselves to be is an NSA system – listening to everyone”. This is a point Dragicevic is keen to make too: “We don’t collect or store data – we don’t even know who you are.”

Only time will tell whether these devices become as ubiquitous as their creators hope. “Humanity has put up with computing being this external thing,” says Kraft. “We’ll look back and see an image of someone with their head in a piece of glass typing with two thumbs and wonder why we walked around our planet like that, so disengaged. We are saying the next great computing platform will happen in your ears.”