Until only a few generations ago, the prevailing conception of illness was that the sick were contaminated by some toxin or contagion or an excess of one humor or another. That understanding of illness contained within it the idea that these conditions could be improved by opening a vein and letting the sickness run out: bloodletting, the practice was called.

Once the toxins were gone, the patient immediately felt different, and often better. As anyone who has given blood can tell you, losing a pint or two can make you feel transported, transformed. It was intuitively satisfying to doctors that the procedure left the patient feeling drained -- physically, emotionally and into the sink.

It is understood now that bloodletting only hastened the death of the ill. (George Washington had almost five pints of blood drained from him in the two days prior to his death; he had been suffering from a sore throat.) We know that bloodletting is unhelpful because a Parisian doctor named Pierre Louis did an experiment in 1836 that is now recognized as one of the first clinical trials. He treated people with pneumonia with either early, aggressive bloodletting or less aggressive measures; at the end of the experiment, Dr. Louis counted the bodies. They were stacked higher over by the bloodletting sink.

No sooner had the message about the dangers of draining blood out of patients been conveyed across the medical community -- and that took the rest of the 19th century -- than doctors developed a new passion for pouring it back into them. After cross-typing was developed and blood could be transfused safely, doctors quickly decided that very ill patients would do better with as normal a level of hemoglobin as could be maintained. It made sense, and blood transfusions became a routine part of critical-care medicine.

Then, just three years ago, the results of a large study called Transfusion Requirements in Critical Care were published in The New England Journal of Medicine. Those results shook the community of intensive-care physicians worldwide: except in people with unstable angina or acute myocardial infarction, routine transfusion of critically ill patients with moderately or mildly low hemoglobin levels does not decrease their mortality rate -- and in some subgroups, it actually increases it. Nobody has a convincing explanation for why this is, but it is the case.

The essential tenet of evidence-based medicine is that patients, working with their physicians and armed with medical data, are better equipped to make decisions that work for them than doctors of the Marcus Welby model are, because patients understand their own expectations better than their physicians can. Authority devolves from expertise to the data and thus, ultimately, to the patient. In an E.B.M. world, the physician makes diagnoses, serves as a conduit of the medical data and is responsible for framing those data and putting them into context, but the responsibility for the decision becomes the patient's. Patients have always had the final say about whether to accept the recommendations of their physicians, but without the actual data in front of them, the decision has simply been whether to trust the wisdom of the physician. E.B.M. tries to move that judgment to the steadier ground of data.

The point isn't that some medical treatments don't work as well as they are thought to, or even that in treating patients, doctors sometimes hurt them -- this has always been true. The point is that the conclusions doctors reach from clinical experience and day-to-day observation of patients are often unreliable. The vast majority of medical therapies, it is now clear, have never been evaluated by systematic study and are used simply because doctors have always believed that they work.