While ResearchKit will undoubtedly make good use of the sensors on the iPhone, combining the phone’s data with full genome sequencing and cheap, regular blood tests will give a much fuller picture of exactly what is happening inside our bodies, at a fraction of the cost of current technology.

Here come the matter compilers

To revolutionize drug discovery, you don’t just need to democratize information about our bodies; you need to democratize the manufacture of chemicals and medical equipment. This too is on the way. At the University of Glasgow, a team is working on a 3D chemical printer that uses chemical inks to assemble complex organic molecules. Their explicit goal is to “app” chemistry and allow individuals to print their own medicines.

Lee Cronin: Making matter come alive, from TED.com.

Another group at the University of Central Lancashire is also working on a chemical printer. They note that this manufacturing technique will enable advanced customization of dosages, even allowing patients to change their dose daily. The team expects hospitals and pharmaceutical companies to adopt chemical printing within 5 years, and the public to do so within a decade.

In addition to chemical printing, we are not far from a world in which it is common to print medical devices. A new company called Voxel8, co-founded by Harvard materials researcher Jennifer Lewis, has a printer and a set of inks that can build a whole quadcopter, using conductive materials for the electronics. Earlier, Lewis developed a method of printing lithium-ion batteries. Voxel8 claims that soon, their printers will be able to create hearing aids and other health wearables.

Medical equipment has long been overpriced due to its niche market status and the fact that third-party insurance companies foot the bill. 3D printing will ensure that much of the equipment necessary for drug research will be available to anyone for roughly the cost of materials.

Big data makes the scientific method obsolete

What about clinical trials? One of the most expensive parts of drug discovery is late-stage human trials. Researchers must find a sample of patients suffering from the relevant disease, randomly divide them into a treatment and control group, and administer the trial over an extended period of time.

No one is likely to shell out the millions of dollars necessary to run a clinical trial without a profit motive. Fortunately, we are getting better at inferring causation from strictly observational data. With new computing techniques and the explosion of data available to the subjects themselves, such costly trials will become increasingly unnecessary.

Randomized trials are used because otherwise the factors that lead a subject into the treatment group rather than the control group might also correlate with the outcome, confounding the comparison. If statisticians can build an accurate model of each subject’s propensity to self-select into the treatment group, they can compare treatment and control subjects of equal propensity. Such propensity score matching allows causal inference from observational data, reducing the need for costly randomized trials.

Of course, this technique only works if the propensity model is accurate and not missing any significant unobserved variables. But with the flood of cheap medical data that is likely to soon arrive, and with enough participation, it should be possible to create good propensity score models.
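To make the idea concrete, here is a minimal sketch of propensity score matching in Python. Everything in it is invented for illustration: a single health covariate x drives both a subject’s chance of self-selecting into treatment and the outcome, so the naive treated-versus-untreated comparison is biased upward, while matching treated subjects to controls on an estimated propensity score recovers something close to the true effect.

```python
import bisect
import math
import random

random.seed(42)

# Hypothetical setup: one covariate x drives BOTH self-selection into
# treatment AND the outcome, confounding a naive comparison.
n = 2000
TRUE_EFFECT = 1.0  # the causal effect we hope to recover

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mean(v):
    return sum(v) / len(v)

x = [random.random() for _ in range(n)]
treat = [1 if random.random() < sigmoid(3.0 * (xi - 0.5)) else 0 for xi in x]
y = [2.0 * xi + TRUE_EFFECT * ti + random.gauss(0, 0.5)
     for xi, ti in zip(x, treat)]

# Naive comparison: biased upward, because treated subjects tend to have
# higher x, which independently raises the outcome.
naive = (mean([yi for yi, ti in zip(y, treat) if ti])
         - mean([yi for yi, ti in zip(y, treat) if not ti]))

# Estimate each subject's propensity score with a tiny logistic
# regression fit by gradient descent.
w, b = 0.0, 0.0
for _ in range(600):
    gw = gb = 0.0
    for xi, ti in zip(x, treat):
        err = sigmoid(w * xi + b) - ti
        gw += err * xi
        gb += err
    w -= gw / n
    b -= gb / n
prop = [sigmoid(w * xi + b) for xi in x]

# Match each treated subject to the control subject with the nearest
# propensity score, then average the outcome differences.
controls = sorted((prop[i], y[i]) for i in range(n) if not treat[i])

def nearest_control_outcome(p):
    j = bisect.bisect_left(controls, (p,))
    cands = [controls[k] for k in (j - 1, j) if 0 <= k < len(controls)]
    return min(cands, key=lambda c: abs(c[0] - p))[1]

diffs = [y[i] - nearest_control_outcome(prop[i]) for i in range(n) if treat[i]]
matched = mean(diffs)

print(f"naive estimate:   {naive:.2f}")
print(f"matched estimate: {matched:.2f}  (true effect {TRUE_EFFECT})")
```

Note that even a roughly calibrated propensity model suffices here: matching only needs the estimated scores to order subjects correctly, which is exactly why the accuracy caveat above matters more when important confounders are unobserved entirely.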

An even more promising approach is to use deep learning, a buzzword describing recent significant advances in neural networks. In the last few years, computing has finally gotten cheap enough to train multi-layer (“deep”) neural nets on vast amounts of data, with promising results — for example, transcribing all the house numbers in France in under an hour, or understanding what is happening in a photo and writing a caption. Major tech companies are now racing to hire the top researchers in the field, with Google snagging Geoffrey Hinton and Facebook hiring Yann LeCun. Much of the magic coming out of Silicon Valley these days — from Siri and Google Now to the Facebook News Feed — runs on deep neural networks.

Deep learning has also been used for drug discovery. In 2012, a team led by Hinton won a contest to design software to identify drug candidates. They did so despite the fact that nobody on the team had a background in biochemistry and the dataset was not well tailored to deep learning. No wonder Chris Anderson has called big data “the end of theory.”
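As a toy illustration of why the extra layers matter for this kind of screening, here is a sketch in pure Python. The “molecules” and the activity rule are entirely made up: each compound is an 8-bit fingerprint, and it is labeled active exactly when one of two particular substructure bits is present but not both, an XOR rule that no single-layer (linear) model can represent but that a network with one hidden layer learns readily.

```python
import math
import random

random.seed(0)

# Hypothetical screening task: an 8-bit "fingerprint" per molecule,
# labeled active when bit 0 XOR bit 1 holds (nonlinear on purpose).
def make_data(n):
    data = []
    for _ in range(n):
        bits = [random.randint(0, 1) for _ in range(8)]
        data.append((bits, bits[0] ^ bits[1]))
    return data

train, test = make_data(200), make_data(100)

H = 16  # hidden units
W1 = [[random.gauss(0, 0.5) for _ in range(8)] for _ in range(H)]
b1 = [0.0] * H
W2 = [random.gauss(0, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    z = sum(w * hi for w, hi in zip(W2, h)) + b2
    return h, 1.0 / (1.0 + math.exp(-z))

# Stochastic gradient descent on cross-entropy loss.
lr = 0.3
for epoch in range(250):
    for x, t in train:
        h, p = forward(x)
        d_out = p - t  # dLoss/dz for sigmoid + cross-entropy
        for j in range(H):
            d_h = d_out * W2[j] * (1 - h[j] ** 2)  # backprop through tanh
            W2[j] -= lr * d_out * h[j]
            for i in range(8):
                W1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
        b2 -= lr * d_out

acc = sum((forward(x)[1] > 0.5) == (t == 1) for x, t in test) / len(test)
print(f"test accuracy: {acc:.2f}")
```

Real drug-candidate models like Hinton’s used far richer molecular descriptors and much deeper networks, but the principle is the same: the hidden layer discovers interaction features that no one had to hand-engineer, which is how a team without biochemists could compete.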

Putting it all together

If millions of participants voluntarily sync their copious medical data to the Internet, along with information about the drugs they are making and taking, costly randomized trials could become much less necessary. A medical version of Google Now or Watson or Siri could crunch the data and determine which molecules work on which subjects for which diseases. If any of a user’s levels were abnormal, the user would receive suggestions for drugs, some well understood and some not, that might address the problem. The user would choose what to take, print it at home, and continue to report back.

An additional benefit of this model is that it would not differentiate between the clinical trial stage and general usage. Often drugs must be pulled from the market once it is discovered that they have severe side effects that were not found in clinical trials. A crowdsourced model of drug discovery would continue to monitor drugs at all stages of development, and would more quickly root out drugs with negative side effects in the post-clinical trial phase. Better yet, the system could predict which individuals would suffer the side effects, and which would be immune, allowing more drugs to stay on the market.

The problem with Wikipedia, it has been said, is that it only works in practice. In theory, it can never work. Similarly, perhaps a crowdsourced model of drug discovery could counterintuitively end the stagnation in patent-fueled medical research.

The status quo, often defended by the right, is big drug companies incentivized by patent revenues. The leftist alternative to the status quo has always been to have the government take over the research labs, to provide the public good of drug research through direct taxpayer funding. Perhaps the real answer to Eroom’s Law is to free the individual to produce public goods directly, as is the case with Wikipedia.

This solution would obviously benefit from some changes in the law. First, we could really use permissionless innovation with respect to chemicals. Without relitigating familiar debates over recreational drugs, we need to allow people to experiment with medical drugs without restriction. If people knowingly consent to being test subjects in a crowdsourced medical experiment, the law ought to respect their autonomy and their decision. We must move past a world where regulators “approve” certain drugs.

Second, we need permissionless innovation with respect to medical data. Well-meaning legislation like HIPAA, meant to protect medical privacy, raises costly hurdles to the sharing of medical information. In an era of big data, however, sharing is synonymous with progress. We need to make it easier to legally share information about our bodies with whoever we choose.

But even without changes in the law, I wonder whether the era of crowdsourced medical research can realistically be forestalled. People around the world are communicating as never before, and it is hard to see how the agents of Eroom’s Law could stop them from sharing information about their bodies and about promising molecules with one another.

Think different, indeed.