This is where the field of “affective computing” comes in. Mark Stephen Meadows is an AI designer, artist, and author working in the field. Currently the president of Botanic, a company that provides language interfaces for conversational avatars, he has recently been moving into the field of “redundant data processing.” These programs help a computer interpret conflicting information from multiple sensory sources—the kinds of conflicts that are relatively easy for a human to interpret but can easily confound a machine, like the raised eyebrows and sliding voice that make it clear a person is being sarcastic when he says, “Sure, sounds great.”

Meadows is excitable. Within minutes of the first time I spoke with him last summer, he seemed ready to start creating a flirting AI. He reassured me that the text part would be easy. “You would just need a large corpus of chats from somewhere where people were flirting! I could train it from there.” The greater challenge would be teaching an AI to interpret everything else. Even there, however, Meadows seemed undaunted.

“What we would do is take a mobile device,” he continued. “We have cameras and microphones looking at the user’s face, taking into account lighting, identifying the shape. Then we can ask, ‘Does this look the way most faces look when they are smiling?’” The AI could take input from the face and voice separately. Having registered each, it could cross-reference them to generate a probabilistic guess regarding the mood of the user. In order to counteract the effects of covert signaling, you would have to find a way to weigh the strength of the verbal signal (“Want to get coffee?” versus “Want to fuck?”) against previous interactions and nonverbal signals—posture, tone of voice, eye contact.
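The cross-referencing Meadows describes can be sketched in a few lines. This is a toy illustration, not anything from Botanic’s actual software: every name, channel, and weight below is an assumption, and a real system would learn these estimates from data rather than average them by hand.

```python
def fuse_mood(face_p, voice_p, posture_p, verbal_p, explicitness):
    """Cross-reference channels into one probabilistic mood guess.

    face_p, voice_p, posture_p, verbal_p: 0-1 estimates from each
    channel that the user is flirting.
    explicitness: 0-1, how unambiguous the words themselves are.
    Explicit words ("Want to fuck?") dominate the guess; ambiguous
    words ("Want to get coffee?") defer to the nonverbal channels.
    """
    nonverbal = (face_p + voice_p + posture_p) / 3.0
    return explicitness * verbal_p + (1 - explicitness) * nonverbal

# Same ambiguous words, same warm face and voice: the guess flips
# depending on how much weight the verbal channel is given.
guarded = fuse_mood(0.9, 0.8, 0.7, 0.2, explicitness=0.9)  # leans verbal
warm = fuse_mood(0.9, 0.8, 0.7, 0.2, explicitness=0.1)     # leans nonverbal
```

The point of the toy is the sarcasm problem from above: when the channels disagree, the machine has to decide which one to believe, and that decision is itself a judgment call baked in by the designer.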

Like Hanson, Bugaj, and the Wilcoxes, Meadows stressed that an AI would have to reflect the mood of its user back to him or her. “We trust people who look like us, we trust people who behave like us more quickly,” he said. “Human-avatar interaction is built on that trust, so emotional feedback is very important.” He warned that the requirement to reflect was deeply problematic. “We are instilling all of our biases in them,” he said. He told me about a demonstration of a virtual healthcare assistant built by another designer, for which the test subject was a soft-spoken African-American veteran suffering from PTSD. Meadows described a cringe-inducing exchange.

“Hi, my name is ____,” the veteran said. “Hi, ____,” the bot replied. “So how are you feeling today?” “Depressed. Things are not going well.” “Do you have friends you can talk to?” “There is nobody I have.” “Gee, that’s great.”

“It was misreading everything,” Meadows recalled. “It was a result of the robot not looking like him.” Meadows corrected himself. “Not being built to look at people like him.” The robot itself was just a graphic on a screen. “By the end, that man really despised that robot.”

The more people I spoke to, the clearer it became that Can AI flirt? was the wrong question. Instead we should ask: What will it flirt for? and Who wants that, and why? We tend to treat the subjects of robot consciousness and robot love as abstract and philosophical. But the better AI gets, the more clearly it reflects the people who create it and their priorities back at them. And what they are creating it for is not some abstract test of consciousness or love. The point of flirting AI will be to work.

The early automata were made as toys for royalty. But robots always have been workers. The Czech playwright Karel Čapek coined the word robot in 1921. In Slavic languages, robota means “compulsory labor,” and a robotnik is a serf who has to do it. In Čapek’s play R.U.R., the roboti are people who have been 3D-printed from synthetic organic matter by a mad scientist. His greedy apprentice then sets up a factory that sells the robots like appliances. What happens next is pretty straight anticapitalist allegory: A humanitarian group tries and fails to intervene; the robots rise up and kill their human overlords.

At the 1939 World’s Fair in New York, one of the most popular attractions was Elektro, a giant prototype of a robot housekeeper, made by Westinghouse. The company soon added Sparko, a robot dog. Both made the technologies the company was introducing appear nonthreatening. The reality, of course, was that automation would massively disrupt the midcentury economy. Following the robotics revolution of the 1960s, the automation of manufacturing, combined with globalization, decimated the livelihood of the American working class. The process has continued through the rest of the economy. Within a few years, “digital agents” may do the same to white-collar professionals. Report after report, by credible academics, has warned that AI will make huge sections of the American workforce obsolete over the coming decades.

Flirting might sound trivial. It is in fact a highly exacting test of intelligence.

In the 1980s, the sociologist Arlie Hochschild coined a term to describe the kinds of tasks that workers increasingly performed in an economy where manufacturing jobs had disappeared. She called this work “emotional labor.” In an industrial economy, workers sold their labor-time, or the power stored in their bodies, for wages. In an economy increasingly based on services, they sold their feelings.

In many professions, individuals are paid to express certain feelings in order to evoke appropriate responses from others. A flight attendant not only hands out drinks and blankets but greets passengers warmly, and smiles through stretches of turbulence. This is not just service with a smile: The smile is the service. Around the turn of the millennium, the political theorists Michael Hardt and Antonio Negri redefined this form of work as “immaterial” or “affective labor.” Immaterial is the broader term, encompassing all forms of work that do not produce physical goods. Affective labor is a specific form, and involves projecting certain characteristics that are praised, like “a good attitude” or “social skills.” These kinds of jobs are next to be automated.

Andrew Gersick, the ethologist who was lead author of the 2014 paper on “covert sexual signaling,” told me he suspects that many behaviors that humans evolved in the context of courtship have been repurposed for other social contexts. Service and care workers—often female, in the modern workplace—deploy courtship gestures as part of their jobs. “Take a hospice nurse who greets a patient every day by asking, ‘How’s my boyfriend this morning?’” Gersick wrote in an email. “In a case like that, all parties involved (hopefully) understand that she isn’t interested in him as a potential sexual partner, but the flirty quality of her joking has a specific value. ... Flirting with your aging, bedridden patient is a way of indicating that you’re seeing him as a vital person, that he’s still interesting, still worthy of attention. ... The behavior—even without the sexual intent behind it—elicits responses that can be desirable in contexts other than courtship.”

AI that can flirt could have a huge range of applications, in fields ranging from PR to sex work to health care to retail. By improving on AIs like Siri, Cortana, and Alexa, technology companies hope they can convince users to rely on nonhumans to perform tasks we once thought of as requiring specifically human capacities—like warmth or empathy or deference. The process of automating these “soft” skills indicates that they may have required work all along—even from the kinds of people believed to possess them “naturally,” like women.

Some aspects of our own programming that AI reflects back are troubling. For instance, what does it say about us that we fear male-gendered AI? That our virtual secretaries should be female, that anything else would seem strange? Why should AIs have gender at all? What do we make of all the sci-fi narratives in which the perfect woman is less than human?

The real myth about AI is that we should love it because it will automate drudge work and make our lives easier. The more likely scenario may be that the better chatbots and AIs become at mimicking human social interactions—like flirting—the more effectively they can lure us into donating our time and efforts to the enrichment of their owners.

Last fall, with Sophia in Hong Kong, I drove to Lincoln, Vermont, in order to visit the closest substitute: BINA48. In 2007, Martine Rothblatt, founder of Sirius Radio, commissioned Hanson to build a robot as a vessel for the personality of her wife, Bina Aspen. Rothblatt and Aspen both subscribe to a “transreligion” called Terasem, which believes that technological advances will soon enable individuals to transcend the embodied elements of the human condition, such as gender, biological reproduction, and disease. Terasem is devoted to its four core principles: “Life is purposeful; death is optional; God is technological; love is essential.” BINA48 is the prototype for a kind of avatar that Terasem followers say will soon be able to carry anyone who wishes into eternity. As futuristic as it all sounds, it reflects an ancient impulse. Rothblatt wants the same thing the speaker of Shakespeare’s sonnets wanted: To save her loved one from time.

In practice, BINA48 serves mostly to draw attention to the Terasem movement. The man Rothblatt hired to oversee BINA48, Bruce Duncan, frequently shows the robot at museums and tech conferences. It has been featured in GQ and on The Colbert Report and interviewed by Chelsea Handler. When I visited the Terasem Movement Foundation last September, it was Duncan who welcomed me into the modest, yellow farmhouse where BINA48 is kept.

A wall of framed posters and press clippings about the Terasem movement, along with baskets of Kind bars and herbal tea sachets, gave the place the clean, anonymous feel of a study-abroad office at a wealthy university. The only giveaway that I was somewhere more unusual came from the android bust posed on the desk. The face was female, brown-skinned, and middle-aged. It had visible pores and a frosty shadow on its eyelids; you could see the tiny cilia in its nostrils. There were crow’s feet at the corners of its eyes. Shut off, its head tilted forward, like someone dozing in a chair. This was BINA48.