Christmas came early for the pharmaceutical industry this year. Last week, the Senate followed the House in passing the 21st Century Cures Act. Though this bill has been lauded by liberals for providing much-needed funds for medical research, its real impact will be elsewhere. Whereas drug approval traditionally required the demonstration of real clinical benefit in a randomized clinical trial, under the Act drug firms will increasingly be able to rely on flimsier forms of evidence for approval of their therapies (incremental steps in this direction, it is worth noting, have already occurred). The Act, by reconfiguring the drug regulatory process, lowers the standards for drug approval—a blessing for drug makers, but an ill omen for public health.
In the Senate, a grand total of five senators—including Bernie Sanders and Elizabeth Warren—voted against the Act. The media, meanwhile, have for the most part done a poor job dissecting its actual contents. As a result, few now realize how detrimental the Act is likely to be for drug safety, or appreciate the mix of conservative ideology and pharmaceutical industry greed underlying the longstanding campaign that brought it to fruition.

The thinking behind the 21st Century Cures Act—and likeminded proposals—goes something like this: In the twenty-first century, the pharmaceutical industry—driven by the profit motive—continues to do a fine job innovating new therapies. Far too often, however, it is held back by risk-averse, slow-moving FDA bureaucrats with outdated standards for approval. “Modernize” the FDA—release the cures! Yet if the law did nothing other than weaken FDA standards, it might not have passed: Liberals understandably embraced the Act’s new NIH funding, its mental health provisions, and its support for state anti-opioid programs. For Democrats, it also represented the sort of bipartisan “victory” that shows that all is not gridlock in Washington, after all.

Yet this thinking is flawed on multiple levels. “We need to remember,” as former editor-in-chief of the New England Journal of Medicine Marcia Angell wrote in her 2004 pharmaceutical exposé, The Truth About the Drug Companies, “that much of what we think we know about the pharmaceutical industry is mythology spun by the industry’s immense public relations apparatus.” First among these myths is the notion that the status quo of private sector drug research and development is the best of all worlds. On the contrary, as Angell put it, “me-too” drugs—lucrative, duplicative agents that do not improve on existing therapies—are in fact the “main business of the pharmaceutical industry.” We can’t rely on the profit motive to bring forth new cures when it’s just as easy for companies to make big profits by redesigning or tweaking drugs that already exist.

Second, the notion of a slow-moving, risk-averse FDA is wrong: If anything, the agency’s drug review process is sometimes too hasty, while its standards of evidence for approval are frequently too lax. Consider, for instance, two recent studies of new cancer drugs. The first—published a year ago in JAMA Internal Medicine by Chul Kim and Vinay Prasad—looked at cancer drugs approved by the FDA on the basis of “surrogate endpoints” between 2008 and 2012. “Endpoints” is a term for outcomes: Hard clinical endpoints refer to outcomes such as survival, where the benefit to the patient is unambiguous. Surrogate endpoints, by contrast, refer to metrics like the change in the size of a tumor on a CT scan. Though a shrinking tumor sounds like a good outcome, it is only meaningful if it translates into an improvement a patient actually experiences, like a longer life or a better life. Often, however, that’s not the case: New therapies can change numbers without improving our actual health. This is what Kim and Prasad found: Of the 36 drugs approved on the basis of surrogate endpoints, at least half had no demonstrated benefit.