Humans live their lives trapped in a glass cage of perception. You can only see a limited range of visible light, you can only taste a limited range of tastes, you can only hear a limited range of sounds. Them’s the evolutionary breaks.

But machines can leapfrog over the limitations of natural selection. By creating advanced robots, humans have invented a new kind of being, one that can theoretically sense a far greater range of stimuli. That presents roboticists with some fascinating challenges, not only in creating artificial senses of touch and taste, but also in figuring out what robots should ignore in a human world.

Take sound. I’m not talking about speech—that’s easy enough for machines to recognize at this point—but the galaxy of other sounds a robot would encounter. This is the domain of a company called Audio Analytic, which has developed a system for devices like smart speakers to detect non-speech noises, like the crash of broken glass (could be good for security bots) or the whining of sirens (could be good for self-driving cars).

Identifying those sounds in the world is a tough problem, because it works fundamentally differently than speech recognition. “There's no language model driving the patterns of sound you're looking for,” says Audio Analytic CEO Chris Mitchell. “So 20 or 30 years of research that went into language modeling doesn't apply to sounds.” Convenient markers like the natural order of words or patterns of spoken sounds don’t work here, so Audio Analytic had to develop a system that breaks down sounds into building blocks, what they’re calling ideophones. This is essentially the quantification of onomatopoeia, like in the Adam West Batman series. You know, bang, kapow, etc.

Audio Analytic can then group sounds into major categories: “impulsive” sounds like glass breaking, “tonal” sounds like sirens, and “voiced” sounds like dogs barking. “You generally then describe all audio in terms of that taxonomy,” says Mitchell. “Then you can start getting into, ‘Is it mechanical, is it natural?’ And you start organizing the world in that way.” It’s a system a computer, or maybe one day a humanoid robot, could use to differentiate between certain sounds like it would with spoken language.
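To make that taxonomy concrete, here’s a minimal sketch of how a device might route a detected sound through categories like the ones Mitchell describes. The category names come from the article; the sound labels, lookup table, and function are purely illustrative, not Audio Analytic’s actual system.

```python
# Hypothetical taxonomy lookup: each detected sound maps to a
# (category, origin) pair, mirroring the article's examples.
TAXONOMY = {
    "glass_break": ("impulsive", "mechanical"),
    "siren": ("tonal", "mechanical"),
    "dog_bark": ("voiced", "natural"),
}

def classify(sound_label):
    """Return (category, origin) for a detected sound, or None if unknown."""
    return TAXONOMY.get(sound_label)

print(classify("siren"))  # ('tonal', 'mechanical')
```

A real recognizer would of course work from acoustic features rather than clean labels, but the organizing principle, sorting the world into a small tree of sound types, is the same.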

Touch is another complex sense you probably take for granted. The fifth sense isn’t just texture—it’s pressure and temperature, too. So recreating touch for robots is about combining a variety of sensors. (A company called SynTouch is already doing this, by the way.) “Getting all of that information is half the battle,” says roboticist Heather Culbertson, who studies haptics at USC. “Then you have to teach the robot what to do with that information. What does that information mean?”

It turns out your body ignores a whole lot of touch stimuli. You don’t typically feel the clothes rubbing against your body all day, and if you’re sitting comfortably, you don’t feel the pressure of sitting. Your body acclimates to avoid sensory overload.

“Robots would require a lot of computational power in order to do all of this processing, if we're talking about full-body sensing,” says Culbertson. “We would have to teach robots not only how to process the data, but how to ignore stuff that is no longer important.”
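One way to picture the “ignoring stuff” Culbertson describes is a habituation filter: only report touch readings that deviate from a slowly adapting baseline. This is a toy sketch under that assumption; the class, threshold, and adaptation rate are invented for illustration, not taken from any real tactile pipeline.

```python
class HabituationFilter:
    """Suppress touch readings that the 'body' has acclimated to."""

    def __init__(self, threshold=0.5, alpha=0.1):
        self.threshold = threshold  # minimum change worth reporting
        self.alpha = alpha          # how fast the baseline adapts
        self.baseline = None

    def update(self, reading):
        """Return the reading if it's novel, else None (habituated)."""
        if self.baseline is None:
            self.baseline = reading
            return reading  # first contact is always novel
        novel = abs(reading - self.baseline) > self.threshold
        # the baseline drifts toward the current reading (acclimation)
        self.baseline += self.alpha * (reading - self.baseline)
        return reading if novel else None

f = HabituationFilter()
f.update(1.0)          # first contact: reported
print(f.update(1.0))   # unchanged pressure: None (ignored)
print(f.update(5.0))   # sudden poke: 5.0 (reported)
```

Constant pressure, like sitting in a chair, fades out of the signal, while a sudden change still gets through, which is roughly the trade-off a full-body-sensing robot would need at scale.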

Giving robots a sense of touch will be important not just for our safety (you don’t want a surgery bot crushing your skull, for example) but for the robots themselves. “If you start to have robots in the home and you can have them around a stove, then you're going to want to have temperature sensors so you don't start melting your robot,” says Culbertson.