Cary Gross is a professor of medicine and cancer researcher at Yale University School of Medicine.

When my 80-year-old father was diagnosed with Hodgkin’s disease, he was so weak that he could no longer walk, and his oncologist worried that chemotherapy might do more harm than good. But there was a new drug available, a “targeted therapy” that uses antibodies to destroy cancer cells while apparently leaving the rest of the patient’s cells alone.

My father understood that signing on for a treatment that hadn’t been available for very long was risky. “How do we know if the new drug is really better than the chemotherapy?” he asked me one day on the phone. “Isn’t this the kind of research that you do?”

Indeed it is. I’m a cancer outcomes researcher. I study whether new cancer treatments that succeed in initial small studies actually help people once adopted into routine clinical practice. With no large, reliable studies of this particular treatment to guide his decision, I told him to go with his gut.

Looking back, I wonder whether my father would have chosen the new therapy if he had known more about the possible side effects. Initially, the tumors shrank and he regained some strength, even allowing him to walk across the room on his own. But after a few months, he noted some mild pain in his feet. Soon, severe pain shot through his legs. His doctor explained that this “nerve pain” was a side effect from the therapy, preventing him from walking and once again making him bed-bound. This time, it wasn’t the disease that was debilitating him; it was the treatment.

Like many people in his shoes, my father opted to try the new drug because he thought it might help. It was expensive, but his insurance would cover it, and the high price seemed to suggest it was special. It was also better than doing nothing.

The Food and Drug Administration had approved the treatment based on a single small study of about 100 patients, one-third of whom achieved complete remission. Side effects appeared rare, but the average age of patients in the study was 31. This is typical of cancer-treatment studies, which most often test new drugs in younger, healthier people rather than older people with multiple medical conditions. New cancer treatments often represent important scientific advances, but their actual impact on patients is almost never a slam dunk. In the absence of data, complex new treatments become shiny black boxes, with seemingly no trade-offs to consider. The default for many patients looking for treatment is "yes," cost be damned.

Unfortunately, our government’s commitment to evaluating new drugs is about to take a step backward. At a recent meeting with pharmaceutical company executives, President Trump announced he would be cutting regulations “at a level nobody’s ever seen before.” His candidates for FDA commissioner share the view that new drugs need to reach the market more quickly and with fewer required studies.

The 21st Century Cures Act has already created a pathway for companies to obtain FDA approvals with less rigorous evidence. At the same time, major funders of research on the safety and effectiveness of drugs, such as the Agency for Healthcare Research and Quality and the Patient-Centered Outcomes Research Institute, face uncertain futures in the current Congress.

To be sure, patients deserve prompt access to effective, cutting-edge treatments. But what if a treatment turns out to do more harm than good? This is why we need to double down on our efforts to evaluate new treatments in the real world. We need more evidence, not less.

The FDA shouldn’t shy away from requiring thorough evaluation of new drugs. The same level of enthusiasm and funding that goes into developing new treatments should be invested in testing whether they are safe and effective in patients beyond the initial small trials. Undertested drugs with unclear safety and efficacy should not be given to broad swaths of the population.

During one of the last conversations I had with my father before he died, he asked, “Shouldn’t you be studying this in people like me?” A rhetorical question of sorts, but I still answered: “Yes, Dad, we should.”