In Her, Spike Jonze’s latest film, which won an Oscar for best screenplay, a man falls in love with his computer operating system. The OS, Samantha, has an incredible level of intelligence and general knowledge, is to all appearances sentient, and boasts the voice of actress Scarlett Johansson. By most measures, she seems the perfect girlfriend for the reclusive protagonist, except that she doesn’t have a body–and even that doesn’t prove insurmountable. But of all of Samantha’s characteristics, the one you’re most likely to encounter in the not-very-distant future is her ability to respond to human emotion.

The term for this technology is “affective computing,” and it involves reading, interpreting, and even simulating emotion for the purpose of interacting with, and sometimes influencing, human behavior. Essentially, affective computing uses pattern recognition algorithms to identify an individual’s emotional state from visual and audible cues. Such technology can determine whether a person is happy, sad, angry, indifferent, and so on–even if they try to disguise it. These techniques aren’t limited to faces and voices; they can also be applied to traits such as gait and posture. Because this could provide immediate, accurate information about a person’s response, it has powerful potential as a feedback mechanism.
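To make the pattern-recognition idea concrete, here is a deliberately tiny sketch of how an emotional state might be matched against learned patterns. Real affective-computing systems extract facial-action features from live video with computer-vision models and use far more sophisticated classifiers; the feature names, numbers, and centroids below are purely illustrative assumptions.

```python
# Toy affective pattern recognition: classify an emotional state from
# numeric facial-action features using nearest-centroid matching.
import math

# Hypothetical "learned" centroids: (mouth_curve, brow_raise, eye_openness)
# averaged over labeled examples of each emotion. Values are invented.
CENTROIDS = {
    "happy":       (0.8, 0.3, 0.6),
    "sad":         (-0.6, -0.2, 0.4),
    "angry":       (-0.4, -0.7, 0.7),
    "indifferent": (0.0, 0.0, 0.5),
}

def classify_emotion(features):
    """Return the emotion whose centroid is closest in Euclidean distance."""
    return min(CENTROIDS, key=lambda label: math.dist(features, CENTROIDS[label]))

print(classify_emotion((0.7, 0.25, 0.55)))  # closest to the "happy" centroid
```

A production system would replace the hand-made centroids with a model trained on thousands of labeled video frames, but the core loop is the same: measure cues, compare them to learned emotional patterns, report the best match.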

Since an enormous share of human communication is nonverbal, it makes sense that we’d want to tap into this to improve how computers work with us. For instance, in a recent North Carolina State University study, a computer program used video cameras to analyze student expressions, then flagged the students who were experiencing difficulty with the course. These technologies can draw from a wide range of visual and audible cues, many of which are undetectable by a human observer. Paired with machine learning, these programs can already routinely outperform human observers at reading them.

Affective computing’s potential uses are myriad. A system that can read and respond to human behavior in fractions of a second has obvious applications in surveillance and law enforcement, though not all of these are necessarily respectful of civil liberties. Behavioral therapy programs could benefit from its feedback. Social networks will have the ability to share still another layer of information about their users. But perhaps the fields with the most commercial potential are those of marketing and advertising.


The ability to influence public interest in and response to products, services, and candidates has long been of keen interest to marketers. Up to now, though, the tools used have been less than rigorous, making ad and marketing campaigns as much an art as a science. But what if marketing and advertising were able to interact with their customer base on a one-on-one basis, instantly self-modifying on the fly based on immediate nonverbal feedback? Combined with other technologies such as augmented reality and Big Data, this would achieve unprecedented interactivity, and marketing’s impact could go through the roof.

So how might this work? Imagine a typical shopping district in a typical city. A fashion-conscious 20-something walks along, casually window shopping. She’s wearing a pair of stylish glasses with video display capability a la Google Glass. They’re set to translucent augmented-reality mode, allowing her to see price comparisons overlaid onto the different clothing items she’s interested in. This all takes place automatically, with the processing occurring through her smartphone.

As our shopper nears one particular store, exterior shop cameras detect her approach. Software services identify her age, gender, and demographic profile. Because she subscribes to a number of coupon services, one of which the store participates in, the store can send offers directly to her. Another series of software services assesses her clothing style based on what she’s currently wearing, gauges her height, weight, and dress size, then immediately renders a 3-D avatar that looks amazingly like our shopper. The avatar is dressed and kitted out with a jacket that’s been the shop’s biggest seller this season. The rotatable image is sent to the shopper along with the price, suggested accessories, and a 20% off coupon while she’s still 30 feet from the store.
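The storefront scenario above is, at heart, a software pipeline: detect a shopper, estimate a profile, check coupon-service membership, and assemble a personalized offer. The sketch below shows that flow in miniature. Every class, field, and service name is a hypothetical stand-in; a real system would populate the profile from camera-driven vision services rather than fixed values.

```python
# Minimal sketch of the storefront offer pipeline. All names are
# hypothetical; profile data would come from vision services in practice.
from dataclasses import dataclass

@dataclass
class ShopperProfile:
    age_range: str
    gender: str
    dress_size: int
    coupon_services: set  # services the shopper has opted into

@dataclass
class Offer:
    item: str
    price: float
    discount_pct: int

    def final_price(self) -> float:
        return round(self.price * (1 - self.discount_pct / 100), 2)

STORE_COUPON_SERVICE = "StyleSaver"  # hypothetical service the shop joined

def build_offer(profile: ShopperProfile, item: str, price: float):
    """Return a discounted offer only for shoppers the store may contact."""
    if STORE_COUPON_SERVICE not in profile.coupon_services:
        return None  # no shared coupon service, no targeted offer
    return Offer(item=item, price=price, discount_pct=20)

shopper = ShopperProfile("20-29", "female", 6, {"StyleSaver", "DealFeed"})
offer = build_offer(shopper, "bestselling jacket", 120.00)
print(offer.final_price())  # 20% off $120.00
```

Note the opt-in gate: the offer is only assembled when shopper and store share a coupon service, which is where the privacy questions raised earlier would be negotiated in any real deployment.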