But there are limitations to voice-enabled interactions. They're slow, they're embarrassing when other people are around, and they require awkward trigger phrases like "Okay, Google" or "Hey, Siri."

Thankfully, though, talking into midair is no longer our only—or best—option.

The new iPhone introduced a camera that can perceive three dimensions and record a depth for every pixel, and home devices like the Nest IQ and Amazon's Echo Look now have cameras of their own. Combined with neural nets that learn and improve with more training data, these cameras can capture a point cloud or depth map of the people in a scene, their poses, and their movements. The nets can be trained to recognize specific people, classify their activities, and respond to gestures from afar. Together, neural nets and better cameras open up an entirely new space for gestural design and gesture-based interaction models.
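To make "a depth for every pixel" concrete: a depth map can be back-projected into a 3-D point cloud with the standard pinhole camera model. The sketch below is illustrative only; the function name and the intrinsics (`fx`, `fy`, `cx`, `cy`) are hypothetical stand-ins for values a real device would report.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map into a 3-D point cloud
    using the pinhole camera model (illustrative sketch)."""
    h, w = depth.shape
    # Pixel coordinate grids: u runs across columns, v down rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx   # horizontal offset scales with depth
    y = (v - cy) * z / fy   # vertical offset scales with depth
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Toy 2x2 depth map: 1 meter everywhere, one pixel with no reading.
depth = np.array([[1.0, 1.0],
                  [1.0, 0.0]])
cloud = depth_to_point_cloud(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

Once the scene is a cloud of 3-D points like this, a trained network can take over: segmenting out the people, estimating their poses, and classifying their gestures.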