But if thought corrupts language, language can also corrupt thought. (George Orwell)

The Secret Experiment

It’s January 11th, 2012. By the end of that year, more than 1 billion people would be using Facebook every month. Facebook’s analytics had long predicted as much — it was only a matter of time.

Facebook’s Core Data Science team, led by Adam Kramer, had bigger fish to fry. They had been preparing to collect data for an experiment in collaboration with Cornell University. For one week, usage information would be collected from about 700,000 Facebook users, and none of them would know it was happening.

The study’s design was simple:

Users’ news feeds would be altered to display disproportionately more ‘positive’ posts (puppies, friends, and food) or more ‘negative’ posts (war, disaster, and death). Here’s what the researchers found at the end of the week:

These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks. (Kramer et al.)

In other words, when users saw more positive content, they posted more positive content. And when they saw more negative content, they posted more negative content.

When the paper was published in 2014, it sparked a massive uproar. Most saw it as an insidious breach of privacy. The fact that none of the people ‘participating’ in the study were informed of its existence was, naturally, a common point of contention. In anticipation of this response, Kramer and his team noted the following in their publication:

[The work] was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research. (Kramer et al.)

Technically, the users had agreed to those terms, in many cases years in advance. Cornell didn’t intervene, as Facebook claimed the data was for its internal use only. Cornell has since removed the page on which it confirmed that decision.

The researchers drew a second conclusion, presenting it as a direct result of their findings: the study debunks prior research showing overwhelmingly negative feelings associated with seeing your Friends’ lives being awesome (see below).

There is a valid, and in my opinion more plausible, alternative interpretation. When users were exposed to more ‘positive’ content, they likely felt pressured to match or surpass that standard. Their subsequent posts could still be classified as ‘positive’, but the act of responding, often in direct comparison to their Friends’ performance, may have left them with feelings of inadequacy, isolation, and failure. This interpretation suggests the inverse of what the researchers propose: an association between positive exposure and negative emotions.