Let’s play a game—thought experiment. Imagine it’s the near future. You’re walking along a city street crowded with storefronts. As you walk past boutiques, cafes, and the Apple Store, your visage follows you. Thanks to advances in facial recognition and other technologies, behavioral marketers have developed the capacity to take your Facebook profile, transform it into a 3-D image, and insert it into ads. That sweater you’re eyeing? In the display, the mannequin wearing it takes on your face and shape. The screen showing a car commercial depicts you behind the wheel. At a travel agency (let’s pretend they still exist—after all, this is a thought experiment!), you see yourself sunning on a beach, while the real you is bundled up against the cold. The ads might show you with an attractive stranger or a lost love (after all, Facebook knows whom you used to date). Or they could contain scenes of you and your happy family. No longer do you have to picture yourself in the ad—technology has that covered.

Although the technology in our thought experiment doesn’t yet exist, many of the necessary components already do. There is Autodesk 123D Catch, a program that uses computer vision technology to transform simple photographs into 3-D objects. Facebook has its own facial recognition tools to help users identify and tag people in photos. Video games generate avatars using sophisticated motion capture techniques.

It seems that commercial actors and advertising models are well on their way to losing their jobs as the consumer becomes the star. Online ads already allow a browsed item, like a pair of shoes, to follow you across the Internet like a “persistent salesman,” a practice called personalized retargeting. H&M uses “completely virtual models” to market its clothes. These practices can seem creepy at first, but customers generally adapt to them, and such advances are reshaping the kinds of ads and content users are open to, and even come to expect. While the highly developed personalized advertising featured in Minority Report remains science fiction, less sophisticated but still promising versions have been rolled out.

If the technology described in the thought experiment gets developed, how might it be used?

It is especially hard to predict the future of behavioral advertising, given the headline-grabbing controversies that Facebook and other outlets inching toward avatar advertisement have faced. There are serious questions about consent, privacy, and data security. Debate rages between industry advocates who want behavioral advertising to remain self-regulating and those who endorse government regulation. Even the seemingly simple solution of opting out is contentious. Some are satisfied with the Network Advertising Initiative’s tool, which allows users to opt out of behavioral advertising from “member companies” that have placed cookies on their computers, while others favor a stronger Internet equivalent of the “Do Not Call” registry: a “Do Not Track” option.

This conflict has serious economic consequences. Forbes notes that in order to prevent government regulation, “Ad technology companies will have to spend as much on privacy issues this year and next as they will on developing their new technologies and figuring out when to sell out to Google, Yahoo, or Facebook.” Lobbying efforts will intensify accordingly.

Given all this industry uncertainty, it would be foolish to attempt to predict what will happen. Nevertheless, we can consider several potential scenarios, each with its own ethical trouble spots.

For instance, consumers themselves could become split on the technology. Perhaps the most desirable consumers—affluent, young early adopters—will see the new advertising technology as cool and fun. Accordingly, a company—if permitted by law—may launch its avatar campaign by opting users into the advertising by default. But unless a massive cultural shift takes place, smart money says the privacy alarm bell will ring. Companies tend to announce data ownership and management policies in fine print, buried deep in long user agreements that hardly anyone reads. Consequently, people sometimes develop their own expectations about how their data will be used. “I didn’t imagine my image would appear here!” some users might exclaim when first encountering a targeted avatar ad at their favorite grocery store; perhaps they expected it to appear only in the clothing store, say, where they first opted in to the technology. This shock could prompt vocal concern about images being transferred to undesirable locations.

Advertisers may also have to make hard decisions about how to present these avatars. Right now, despite all the advances in computational power, it remains impossible to digitally duplicate reality. What’s a company to do when it can’t quite make images walk like us, talk like us, or gesture like us? It certainly won’t alienate customers by making them uglier … unless it wants to entice users to feel bad about themselves and invest in beauty, fitness, and sartorial products to change things. Or perhaps companies will use avatars plucked from the “uncanny valley” to entice consumers to purchase upgrades to adorn their virtual selves. If such ads become ubiquitous, the practice may feel like extortion.

When Jean-Paul Sartre famously said that “hell is other people,” he meant that life can suck when others don’t affirm our idealized self-conceptions. Advertisers know this. Translating vanity to the visual, they could go with idealized avatars: slimmer, smoother-skinned versions of their real customers. While flattery isn’t inherently a problem, too much distortion can be dangerous. If idealization sucks the viewer in too deeply, the charge of deceptive practice, already raised in the controversy over “magic mirrors,” could gain new force. Shirts simply look much better on the Brad Pitt and Angelina Jolie avatar versions of us—perhaps too good.

Even if the idealization strategy is used, customers may see through the airbrushing. Undue comparison, with all its associated psychosocial baggage, could rear its ugly head: Young women who berate themselves for being fatter than Size 0 models could fixate on having skinnier avatars. Or the esteem of skinny boys could drop in the presence of their buff doppelgängers. And, given advertising’s history of insensitivity to race, sex, and class, it would be naive to believe that avatars with lighter skin and fancier bling won’t get rolled out.

It’s possible that a dominant company could, taking a page from the Nintendo Wii, let users quickly design custom avatars that can be inserted into a range of fun and informative networking functions, but only so long as they also allow the images to be used for commercial purposes. For example, in order to send avatar-texts or avatar-e-mail, or to create avatar-status-updates, you might need to give the companies broadcasting Grandma’s favorite shows the right to bombard her with avatar ads, ones imploring her to buy little Johnny and Susie the latest overpriced toys. She could get powerful visual displays suggesting that her grandkids, who don’t make much time for her anyway, will like her even less if she doesn’t step up and make the purchases. When the practice matures, people might very well feel like social outcasts if they don’t participate.

The question thus arises of whether opting out of dominant norms will become too expensive. We already know that “the Facebook-free life has its disadvantages in an era when people announce all kinds of major life milestones on the Web.” The same may come to be said for those living off the avatar grid. To signal conformity, parents could end up bragging that little Johnny or Susie made it to the “avatar stage of development”—the point at which children can identify an avatar as a representation of themselves—before classmates.

To get a sense of how resonant these concerns are, we presented a modified version of the thought experiment to college students enrolled in a course on the philosophy of technology at the Rochester Institute of Technology. This scenario featured personalized avatars inserted into Internet ads—a possibility that future historians might call Targeted Online Advertising 2.0. To heighten the thought experiment, we visited RIT’s Motion Capture Room and demoed technology that quickly converts 2-D into 3-D images, explaining how advertisers could capitalize on similar tools. Braced for vocal concern and hot debate, we instead encountered perplexity. Most students weren’t too bothered. Frankly, they were surprised we expected trepidation.

Let’s imagine that we ran a similar thought experiment several years ago. Would students have been more agitated then? Probably. When social networking was in its infancy, thought experiments about new and transformative practices could seem startling; infringing on privacy and liberty could seem like a big deal. But now, things are different, at least for the younger, tech-savvy crowd. Generation Z can’t conceive of opting out of the software that fuels mainstream social networking and online communication. Instead of worrying about major losses in privacy and liberty, they focus on peer behavior. If everyone else who matters accepts digital trade-offs, why make life hard by being an outlier? After all, the benefits appear to outweigh the costs, and adaptation seems inevitable. Jules Polonetsky, director and co-chair of the Future of Privacy Forum, believes that “young people do care about privacy,” but “they seem to be more concerned about parents, teachers or employers peeking at their online activity than privacy intrusions by marketers.”

The students might be right. But, then again, people regularly underestimate how easily their behavior can be modified. To the delight of ad executives, adults mistakenly believe that their preferences can be shaped only by powerful but regulated techniques, like subliminal advertising. Contrary to this folk psychology, current behavioral marketing trends are promising precisely because viewers respond powerfully to “relevancy” in targeted and contextual ads. Three questions thus arise. How much more persuasive power would come from adding the thought experiment’s features? If lots, how much is too much? And should the ability to think for ourselves be compromised, how long would it take regulators to acknowledge this truth and make appropriate adjustments? Polonetsky notes, “In its proposed new children’s privacy regulation, the FTC has already moved to restrict behavioral ads targeted to kids, without their parents’ express consent.” Protecting kids is always a step in the right direction. Crafting policy that protects adults without being too paternalistic is a more complicated endeavor.

This article arises from Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.