Facebook has been experimenting on us. A new paper in the Proceedings of the National Academy of Sciences reveals that Facebook intentionally manipulated the news feeds of almost 700,000 users in order to study “emotional contagion through social networks.”

The researchers, who are affiliated with Facebook, Cornell, and the University of California–San Francisco, tested whether reducing the number of positive messages people saw made those people less likely to post positive content themselves. The same went for negative messages: Would scrubbing posts with sad or angry words from someone’s Facebook feed make that person write fewer gloomy updates?

They tweaked the algorithm by which Facebook sweeps posts into members’ news feeds, using a program to analyze whether any given textual snippet contained positive or negative words. Some people were fed primarily neutral to happy information from their friends; others, primarily neutral to sad. Then everyone’s subsequent posts were evaluated for their emotional content.
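To make the mechanics concrete, here is a toy sketch of that kind of valence-based feed filtering. It is purely illustrative: the study actually classified posts using the LIWC2007 word lists and worked inside Facebook’s internal ranking system, and the word lists, omission probability, and function names below are invented for this example.

```python
# A minimal, hypothetical sketch of valence-based feed filtering.
# The real study used the LIWC2007 word lists and Facebook's internal
# news feed ranking system; everything below is invented for illustration.

import random

POSITIVE_WORDS = {"happy", "love", "great", "wonderful"}   # stand-in word list
NEGATIVE_WORDS = {"sad", "angry", "terrible", "awful"}     # stand-in word list

def classify(post: str) -> str:
    """Label a post positive, negative, or neutral by simple word matching."""
    words = set(post.lower().split())
    if words & POSITIVE_WORDS:
        return "positive"
    if words & NEGATIVE_WORDS:
        return "negative"
    return "neutral"

def filter_feed(posts: list[str], suppress: str, omit_prob: float = 0.5) -> list[str]:
    """Return a feed in which posts of the suppressed valence are randomly omitted."""
    return [
        post for post in posts
        if classify(post) != suppress or random.random() > omit_prob
    ]

feed = [
    "So happy about the wonderful weather!",
    "Feeling sad and awful today.",
    "Made pasta for dinner.",
]
# A user assigned to the "reduced negativity" condition sees fewer gloomy posts.
print(filter_feed(feed, suppress="negative"))
```

In the actual experiment, according to the paper, each emotional post had between a 10 percent and 90 percent chance of being omitted from a given viewing, with the rate tied to the viewer’s user ID rather than a single fixed probability.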

The upshot? Yes, verily, social networks can propagate positive and negative feelings!

The other upshot: Facebook intentionally made thousands upon thousands of people sad.

Facebook’s methodology raises serious ethical questions. The team may have bent research standards too far, possibly overstepping criteria enshrined in federal law and human rights declarations. “If you are exposing people to something that causes changes in psychological status, that’s experimentation,” says James Grimmelmann, a professor of technology and the law at the University of Maryland. “This is the kind of thing that would require informed consent.”

Ah, informed consent. Here is the only mention of “informed consent” in the paper: The research “was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research.”

That is not how most social scientists define informed consent.

Here is the relevant section of Facebook’s data use policy: “For example, in addition to helping people see and find things that you do and share, we may use the information we receive about you … for internal operations, including troubleshooting, data analysis, testing, research and service improvement.”

So there is a vague mention of “research” in the fine print that one agrees to by signing up for Facebook. As bioethicist Arthur Caplan told me, however, it is worth asking whether this lawyerly disclosure is really sufficient to warn people that “their Facebook accounts may be fair game for every social scientist on the planet.”

Any scientific investigation that receives federal funding must follow the Common Rule for human subjects, which defines informed consent as involving, among other things, “a description of any foreseeable risks or discomforts to the subject.” As Grimmelmann observes, nothing in the data use policy suggests that Facebook reserves the right to seriously bum you out by cutting all that is positive and beautiful from your news feed. Emotional manipulation is a serious matter, and the barriers to experimental approval are typically high. (Princeton psychologist Susan T. Fiske, who edited the study for PNAS, told the Atlantic that this experiment was approved by the local institutional review board. But even she admitted to serious qualms about the study.)

Facebook presumably receives no federal funding for such research, so the investigation might be exempt from the Common Rule. Putting aside the fact that obeying these regulations is common practice even for private research firms such as Gallup and Pew, the question then becomes: Did Cornell or the University of California–San Francisco help finance the study? As institutions that accept federal research funding, both fall under the law’s purview. If they didn’t chip in but their researchers participated nonetheless, it is unclear what standards the experiment would legally have to meet, according to Caplan. (I reached out to the study authors, their universities, and Facebook, and will update this story if they reply.)

Even if the study is legal, it appears to flout the ethical standards spelled out in instructions to scientists who wish to publish in PNAS. “Authors must include in the Methods section a brief statement identifying the institutional and/or licensing committee approving the experiments,” reads one requirement on the journal’s website. (The study included no such statement.) “All experiments must have been conducted according to the principles expressed in the Declaration of Helsinki,” reads another. The Helsinki standard mandates that human subjects “be adequately informed of the aims, methods, sources of funding, any possible conflicts of interest, institutional affiliations of the researcher, the anticipated benefits and potential risks of the study and the discomfort it may entail.”

Over the course of the study, it appears, the social network made some of us happier or sadder than we would otherwise have been. Now it’s made all of us more mistrustful.