Do we need institutional review boards for human subjects research conducted by big web companies?

Web companies have been doing human subjects research for a while now. Companies like Facebook and Google have employed statisticians for almost a decade (or more), and part of the culture they have introduced is the idea of randomized experiments to identify ideas that work and those that don’t. They have figured out that experimentation and statistical analysis often beat the opinion of the highest-paid person at the company when it comes to identifying features that “work”. Here “work” may mean features that cause people to read advertising, click on ads, or match up with more people.
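The workhorse behind those product decisions is the randomized experiment, or A/B test. As a rough sketch of what such an analysis might look like, here is a simple click-through comparison using a two-proportion z-test; all rates and sample sizes below are made up for illustration:

```python
import random
from math import sqrt, erf

random.seed(42)

# Simulate an A/B test: users are randomly assigned to the current
# feature (control) or a new feature (variant), and we record whether
# each user clicked. The click rates here are illustrative, not real.
n = 50_000  # users per arm
control = [random.random() < 0.030 for _ in range(n)]
variant = [random.random() < 0.033 for _ in range(n)]

p1, p2 = sum(control) / n, sum(variant) / n

# Two-proportion z-test: pooled standard error under the null
# hypothesis that both arms share the same click rate.
p_pool = (sum(control) + sum(variant)) / (2 * n)
se = sqrt(p_pool * (1 - p_pool) * (2 / n))
z = (p2 - p1) / se

# Two-sided p-value from the standard normal CDF.
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"control={p1:.4f} variant={p2:.4f} z={z:.2f} p={p_value:.4f}")
```

With tens of thousands of users per arm, even a fraction-of-a-percentage-point lift is routinely detectable, which is part of why this style of experimentation has taken over product development.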

This has created a huge amount of value and generated real interest in the statistical community. For example, today’s session on “Statistics: The Secret Weapon of Successful Web Giants” was standing room only.

Can't get into the session I wanted to attend, "Statistics: The Secret Weapon of Successful Web Giants" #JSM2014 pic.twitter.com/y6KNnPfDe2 — Hilary Parker (@hspter) August 5, 2014

But at the same time, these experiments have raised some issues. Recently scientists from Cornell and Facebook published a study in which they experimented with the news feeds of users. This turned into a PR problem for Facebook and Cornell because people were pretty upset that they were being experimented on and weren’t being told about it. This has led defenders of the study to say: (a) Facebook is doing the experiments anyway, they just published it this time; (b) in this case very little harm was done; (c) most experiments done by Facebook are designed to increase profitability, and at least this experiment had a more public-good-focused approach; and (d) there was a small effect size, so what’s the big deal?
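On point (d), a small effect size is not the same as an unimportant one: at web scale, even tiny effects are detected with near certainty. A back-of-the-envelope sketch, assuming a two-sample z-test approximation and made-up sample sizes:

```python
from math import sqrt, erf

def two_sided_p(d, n_per_group):
    """Approximate two-sided p-value for a standardized mean
    difference (Cohen's d) between two equal-sized groups,
    using the normal approximation z = d * sqrt(n / 2)."""
    z = d * sqrt(n_per_group / 2)
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# A tiny effect (d = 0.02) is invisible in a small study...
print(two_sided_p(0.02, n_per_group=100))      # nowhere near significant

# ...but overwhelmingly "significant" with web-scale samples.
print(two_sided_p(0.02, n_per_group=350_000))  # vanishingly small p
```

The point is that statistical significance at this scale says little about practical or ethical importance; the oversight question does not go away just because the measured effect was small.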

OK Cupid then published a very timely blog post with the title “We experiment on human beings!”, probably at least in part to take advantage of the press around the Facebook experiment. This post was received with less vitriol than the Facebook study, but it really drove home the point that large web companies perform as much human subjects research as most universities, with little or no oversight.

This situation mirrors the way academic research used to work. Scientists used their common sense and their scientific sense to decide which experiments to run. Most of the time this worked fine, but then things like the Tuskegee Syphilis Study happened. These really unethical experiments led to the National Research Act of 1974, which codified rules about [institutional review boards](http://en.wikipedia.org/wiki/Institutional_review_board) (IRBs) to oversee research conducted on human subjects and to guarantee their protection. IRBs are designed to consider the ethical issues involved in performing research on humans, balancing the protection of subjects’ rights with advancing science.

Facebook, OK Cupid, and other companies are not subject to IRB approval, yet they are performing more and more human subjects experiments. Obviously the studies described in the Facebook paper and the OK Cupid post pale in comparison to the Tuskegee study. I also know scientists at these companies and know they are ethical and really trying to do the right thing. But it raises interesting questions about oversight. Given the emotional, professional, and economic value that these websites control for individuals around the globe, it may be time to consider the equivalent of “institutional review boards” for human subjects research conducted by companies.

Companies that test drugs on humans, such as Merck, are subject to careful oversight and regulation to prevent potential harm to patients during the discovery process. This is obviously not the optimal solution for speed, which is understandably a major advantage and goal of tech companies. But there are issues that deserve serious consideration. For example, I think it is nowhere near sufficient to claim that by signing the terms of service people have given informed consent to be part of an experiment. That being said, they could just stop using Facebook if they don’t like being experimented on.

Our reliance on these tools for all aspects of our lives means that it isn’t easy to just tell people, “Well, if you don’t like being experimented on, don’t use that tool.” You would have to give up at minimum Google, Gmail, Facebook, Twitter, and Instagram to avoid being experimented on. But you’d also have to give up smaller sites like OK Cupid, because almost all web companies are recognizing the importance of statistics. One good place to start might be to consider new and flexible forms of consent that make it possible to opt in and out of studies in an informed way, but with enough speed and flexibility not to slow down innovation at tech companies.
