The past few days have seen Facebook under fire over the revelation that it has been running secret “mood altering” experiments on members. In summary: their technology has allowed them to alter the content displayed to members in a way that elicits a measurable emotional response.

This is not the first time that Facebook has run into trouble for how it handles user data. In a world where corporations increasingly aim for forgiveness after the fact rather than permission, it’s important to consider these events in a historical context. For every person who is up in arms about the manipulation, there is another who is willing to let it slide as ‘the cost of doing business’ with a company like Facebook.

Comparisons have been drawn between Facebook’s experimentation and other examples of engagement engineering, such as Amazon’s product recommendations: fundamentally, both companies alter content for an internal objective. Yet Facebook’s alteration of users’ timelines to elicit a response has been met with outcry rather than the quiet acceptance afforded to Amazon’s personalised recommendations.

The recent ruckus is “a glimpse into a wide-ranging practice,” said Kate Crawford, a visiting professor at the Massachusetts Institute of Technology’s Center for Civic Media and a principal researcher at Microsoft Research.

To some, it’s about how the practices of a company deviate from their stated intent. Amazon preys on the personal data of consumers to boost sales – which is an obvious goal for their business. What does Facebook stand to gain from toying with the emotions of their members? The big question mark beside their motive makes the operation seem particularly fishy.

Continuing the Amazon comparison, which has been used as a common defence of Facebook’s actions, there’s the issue of informing participants. Amazon very clearly advertises personalised product recommendations as related to items you have viewed or purchased. Nowhere on Facebook’s timeline will you find a post tagged as “Here to cheer you up”.

The biggest issue, perhaps, is the lack of oversight. In the field of psychology there are guidelines, ethics codes, and declarations that govern practitioners to ensure a standard of care for subjects. When it comes to companies like Facebook unleashing the power of data on their audience, there is merely their own terms of service (and possible legal precedent) looking over their shoulder.

“There’s no review process, per se,” said Andrew Ledvina, a Facebook data scientist from February 2012 to July 2013.

Mark Zuckerberg, an individual, has almost undisputed governance of Facebook. At his fingertips are 1.28 billion active members around the world*, and he has the ability to segment them in countless ways (geography, interests, race, gender) and alter their mood. He’s not likely to have the mindset of a Bond villain, but that is an awful lot of power for one man to wield.

Whether or not Facebook is acting for the good of mankind is up for debate, and they’ve certainly been praised by academic institutions for the amount of research they publish in the public domain. What’s clear is the need for a larger debate about the oversight of large social networks, as their data analysis capabilities allow greater control of their audience.

(*As of June 30th, and you can be sure that reach extends to an additional 200 million active Instagram users.)


(Image Credit: Taco Ekkel)