Google’s celebrated maxim is, “Don’t be evil.” But what would happen if the data collection giant (or other companies like it) decided to be just a little bit evil? Or decided that – in the name of research and its users’ best interests, of course! – its definition of “evil” (such an ugly word) might be a bit different from everyone else’s?

Those questions recurred to me this week when word spread (mainly through social networks, of course) that Facebook, another giant data-monger, had run an experiment to see if it could manipulate its users’ emotions.

In case you have been under a rock, or otherwise without Wi-Fi, it turns out that Facebook, working with social scientists from Cornell University and the University of California, San Francisco, decided in 2012 to manipulate the site’s news feed – skewing it to show more happy content or more sad content – for nearly 700,000 users. They were trying to test the so-called “emotional contagion” effect – whether seeing your friends post happy or sad things on a social network can nudge your mood in that direction. The answer: yes, if just barely. The study’s authors found a one-tenth of 1 percent change, which doesn’t sound like much until you consider Facebook’s reach of 1.23 billion monthly users. “For example, the well-documented connection between emotions and physical well-being suggests the importance of these findings for public health,” the authors wrote in the journal Proceedings of the National Academy of Sciences, adding that in early 2013 that one-tenth of 1 percent would have meant hundreds of thousands of users.

“The goal of all of our research at Facebook is to learn how to provide a better service,” one of the study’s authors, Adam D. I. Kramer, posted on the site earlier this week. “Having written and designed this experiment myself, I can tell you that our goal was never to upset anyone.” No doubt. But consider this: Per The Atlantic’s Robinson Meyer, the study also found that tweaking the site’s algorithm to omit emotional content led users to write less – the blander your feed, the less engaged you are. That is not surprising, but it does suggest that Facebook has a vested interest in making sure its users see emotionally provocative content. What happens if the social networking service decides that the way to keep users engaged is to downplay emotionally neutral content? “So is it okay for Facebook to play mind games with us for science?” Forbes’ Kashmir Hill asked in light of the experiments. What about for profit?

Part of the reason the people at Facebook were taken by surprise that anyone was outraged by the company’s attempt to manipulate its users’ emotions may be that it conducts data experiments all the time. “Facebook’s Data Science team occasionally uses the information to highlight current events,” The Wall Street Journal reported. “Recently, it employed it to determine how many people were visiting Brazil for the World Cup.” The key difference, however, is between observation and interference, between analyzing data and seeing if you can move the numbers. Counting the people visiting Brazil is the wrong analogue – an apples-to-apples comparison would be Facebook tweaking its algorithm to see if it could encourage or discourage World Cup tourism, or whether it could manipulate betting on the tournament.

Here’s another, arguably more troubling, Facebook experiment about which you might not have heard: In 2010 the company set out to see if it could increase voter turnout in that November’s midterm elections. As Harvard’s Jonathan Zittrain wrote recently in The New Republic, tens of millions of Facebook users were shown “a graphic containing a link for looking up polling places, a button to click to announce that you had voted, and the profile photos of up to six Facebook friends who had indicated they’d already done the same.” Peer pressure has been demonstrated to be an effective tool for increasing voter turnout, and so it was here: The researchers figured that, directly or indirectly, they had moved 400,000 additional voters to the polls. Increasing voter turnout is, of course, a laudable goal, but what if Facebook decided to put its finger on the scale? Suppose someone at the company quietly decided that the country would be better off if more Democrats voted, or more Republicans, and acted accordingly?

Or take another experiment: Could a search giant like Google tip an election by tweaking how a candidate ranked in search results and what sort of results were displayed? Robert Epstein of the American Institute for Behavioral Research and Technology decided to find out, running an experiment during this year’s Indian elections. “That’s right, we deliberately manipulated the voting preferences of more than 2,000 real voters in the largest democratic election in the history of the world, easily pushing the preferences of undecided voters by more than 12 percent in any direction we chose,” he wrote in U.S. News last month, adding that, while he’s not suggesting that Google manipulates elections, if he ran the company, “I would have a crack team of my Mountain View prodigies studying and manipulating elections worldwide 24/7. Not doing so would be counter to the quest for profit.”

“Don’t be evil” is a sound maxim for a corporation to live by – but who gets to decide what is evil? What happens if, Hobby Lobby-like, a Google or a Facebook or some other company develops a conscience and religious beliefs, and acts on them for its users’ own good? Would that be evil, strictly speaking? What if the users held different beliefs?