The claim is as bold as it is creepy: "Emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness."

The data backing this claim, as you've likely heard by now, comes from an experiment conducted by Facebook on nearly 700,000 of its users without their knowledge. When the results were made public over the weekend, the understandable outcry against Facebook trying to manipulate people's emotions was swift and strong. But while Facebook's breach of informed consent seems pretty plain, what's less clear is how much "emotional contagion" the experiment actually inflicted.

That's because Facebook's experiment depends on "sentiment analysis": algorithms that analyze text in an effort to tease out the emotions behind the words. Marketers have become especially excited about sentiment analysis in recent years because social media provides so much fodder for analyzing how consumers feel about a particular product. But even today's most sophisticated tools, while fascinating and increasingly powerful, still offer only ham-fisted approximations of anyone's emotional leanings. Computers, it turns out, still have a long way to go before they can really figure out how you feel, and that means Facebook's ability to understand and influence your feelings is limited, too.

>Computers still have a long way to go before they can really figure out how you feel, and that means Facebook's ability to understand and influence your feelings is limited, too.

To figure out whether Facebook alone could sway users' emotional states, the company tweaked their News Feeds and then tracked their reactions, running more than 3 million posts through third-party software that includes a dictionary of some 4,500 words and word stems that correspond to different emotions. According to Facebook, the results showed that emotions as expressed through the site were indeed infectious: "When positive expressions were reduced, people produced fewer positive posts and more negative posts," the study says. "When negative expressions were reduced, the opposite pattern occurred."

Setting aside the nuance lost by reducing the human psyche to its barest possible binary of happy and sad, the Facebook study also relied on what sounds like a minimally viable approach to sentiment analysis. To determine whether posts were positive or negative, the study counted the "positive" and "negative" words in each post to arrive at what it called the post's "emotionality." Status updates don't appear to have been weighted for shades of mood as determined by the context in which those words appear.
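To see just how bare-bones that is, here's a minimal sketch of that kind of word counting in Python. The tiny word lists are toy stand-ins for the real dictionary of some 4,500 words and stems, and the labeling rule is an assumption for illustration; the point is only that nothing beyond individual words is ever examined.

```python
# A minimal sketch of dictionary-based word counting. The word lists are
# toy stand-ins for a real ~4,500-entry dictionary, and the labeling rule
# is an illustrative assumption, not the study's exact procedure.

POSITIVE = {"happy", "great", "love", "good"}
NEGATIVE = {"sad", "awful", "hate", "damn"}

def emotional_word_counts(post):
    """Count positive and negative dictionary hits, ignoring all context."""
    words = [w.strip(".,!?").lower() for w in post.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return pos, neg

pos, neg = emotional_word_counts("Having a great day, so happy!")
print(pos, neg)  # 2 0 -> counted as a positive post
```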

"The issue with this method is a complete inability to deal with sarcasm or words that can be used in a positive sense in specific contexts," says London-based coder Jonty Waering. For example, he explains, the phrase "damn good" would rank as negative because "damn" has a stronger negative connotation in the ranking system than "good" has positive, even though it is "obviously exceptionally positive." In response to the Facebook study, which he called "unethical," Waering wrote his own sarcastic browser extension called "A Better Place" that filters all negative tweets out of your Twitter stream. "Computers just aren't very good at subtlety," he says.

Even in the most stripped-down scenario, however, where you assume everyone is being honest and unironic, sentiment analysis still has a long way to go. For basic binary choices, the most advanced sentiment analysis techniques have been shown to be between 70 and 80 percent accurate, says Marti Hearst, a professor at the University of California, Berkeley's School of Information and one of the early pioneers of text analytics in the late 1990s. That rate sounds pretty good, but consider that 50 percent accuracy is the equivalent of flipping a coin. "That's a really simple algorithm, and that's going to have a lot of error," Hearst says of binary sentiment analyses like the kind used in the Facebook study. "But when you do a study like this with hundreds of thousands of data points, you typically say: 'The error is going to come out in the wash.'"
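Hearst's "comes out in the wash" point can be checked with a back-of-the-envelope simulation. Every number below is made up for illustration, not taken from the study: a classifier that is right only 70 percent of the time muddies each individual post, but across hundreds of thousands of posts a small real shift in the underlying mix still produces a measurable gap.

```python
# A rough simulation of the "error comes out in the wash" argument. The
# accuracy, rates, and sample size are illustrative assumptions only.

import random

random.seed(0)

def measured_positive_rate(true_rate, accuracy=0.70, n=300_000):
    """Rate at which a noisy binary classifier labels posts positive."""
    labeled = 0
    for _ in range(n):
        truly_positive = random.random() < true_rate
        correct = random.random() < accuracy
        labeled += truly_positive if correct else not truly_positive
    return labeled / n

baseline = measured_positive_rate(true_rate=0.50)
shifted = measured_positive_rate(true_rate=0.52)  # a small real effect
print(f"baseline {baseline:.4f} vs. shifted {shifted:.4f}")
# Misclassification shrinks the true 2-point gap to roughly 0.8 points
# (the gap scales by 2 * accuracy - 1), but with n this large the
# difference still stands well above the sampling noise.
```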

>For a company with such a checkered history when it comes to transparency and privacy, no one needed an algorithm to predict the public response.

Facebook tried to play down the experiment's power to truly manipulate users' feelings. "At the end of the day, the actual impact on people in the experiment was the minimal amount to statistically detect it—the result was that people produced an average of one fewer emotional word, per thousand words, over the following week," wrote Facebook data scientist Adam D. I. Kramer, the study's lead author.

But that raises the question of why Facebook risked such a foreseeable backlash by conducting the study in the first place. For a company with such a checkered history when it comes to transparency and privacy, no one needed an algorithm to predict the public response. The answer is that, as with any public company, Facebook needs to maximize profit, and to do so, it must continually improve its core product. And, as Facebook's critics are always fond of saying, that product is you—or, more precisely, Facebook's algorithmic understanding of you.

The better Facebook can train computers to "know" you, the more effective targeting it can promise advertisers. And to sell to you, Facebook doesn't have to know you perfectly. It just has to make a better guess than the competition. "The data business in general is tricky," says Jess Iandiorio, vice president at Acquia, a Boston-based marketing software maker. "You do the best you can with the insights you glean."