CLINICAL TRIALS – Human clinical trials are an important last hurdle in the development of new drugs and therapies. Today, The Conversation takes a closer look at this vital scientific endeavour with three articles that look at different aspects of the process.

When we see our doctor, we often walk away with a prescription. We suppose the doctor knows what she is doing when she writes it, so we take our medicine.

But how does she know? What grounds the scientific reasoning for the prescription that she doubtless can offer? Biological science has never been an entirely reliable guide to clinical care because people are complicated and different to the lab organisms from which biology draws its knowledge.

The surest science on which prescribing can be based is the controlled clinical trial, in which a defined intervention is either given or not given to a group of similar patients with a carefully defined condition and the outcomes compared.

Doctors read the results of these trials in medical journals, and they are also cited in drug marketing. We, and they, typically assume these published trial results are both truthful and representative of the drug’s effects.

Sadly, we are wrong. Their truthfulness is often debatable (depending on definitions of “truth”, naturally), even if every published sentence considered separately contains no falsehood.

Still worse, it is a well-established fact that the clinical trial literature represents new drugs in a falsely favourable light – especially trials sponsored by the drugs’ manufacturers. In effect, the scientific publications are advertising, as surely as the glossy ads on the pages of the same medical journals.

How did we get here, and what exactly is the problem with this situation?

The early years

At the end of the 19th century, the practice of medicine had very little to do with science. Still, science was affecting the way doctors thought; in its theoretical basis, the ancient Western art of healing was undergoing a revolution. Laboratory experimentation with animals had recently revealed the chemical and physical functions of the organs, and many diseases had been traced to microbes that could be studied outside the body.

In new secular hospitals, recently placed under the authority of medical schools, it became possible to study large numbers of patients with similar conditions. These investigations produced much more reliable descriptions of diseases, and their natural course. They also enabled accurate assessment of the impact of treatments on death rates.

Large prospective studies, some of the earliest clinical trials, showed that the West’s traditional therapies, such as bleeding and purging as well as drugs, did more harm than good. Many physicians devoted themselves to research in the hope that science could also find a way forward.

Eventually, late in the century, breakthroughs were achieved, especially against germs. In the 1860s and 1870s, Joseph Lister and other surgeons found that keeping wounds sterile greatly improved surgical recovery rates; in the 1880s, Louis Pasteur developed a vaccine that cured rabies; in the 1890s, Paul Ehrlich and Emil von Behring developed a serum that cured the common, deadly diphtheria infection.

These treatments were first tested on animals, and then tried on patients in a condition poor enough that, based on historical observation of similar cases, survival was doubtful. While such tests were not supported by careful statistical analysis, the benefits of the treatments were dramatic enough to convince even sceptics of their value. Their triumphant passage from lab bench to bedside solidified the status of science as the key to medical progress.

Enter the pharmaceutical industry

The scientific trend in medicine at first had limited impact on the pharmaceutical industry, whose leading products at the turn of the century – and indeed, three decades later – were brand-named tonic concoctions of old and mostly (apart from the ubiquitous cocaine and opium) useless herbs, often concealed by secret formulas.

During the first half of the century, many countries introduced laws requiring the truthful listing of active ingredients and eventually, in a few, regulations requiring toxicity testing with animals. But no country required clinical trials – that is, systematic testing in people. So even the best manufacturers continued to market new drugs on the basis of testimonials – enthusiastic case reports from friendly doctors given free “trial” samples.

Still, a few firms showed that cooperating with the reforms could be profitable, by sourcing new drugs from the latest lab research, testing them under controlled conditions, and turning them into products acclaimed by scientific medicine. Key examples included the anti-syphilis drug Salvarsan, marketed by Hoechst in 1910, and the insulin preparation discovered at the University of Toronto and manufactured by Lilly from 1922.

To encourage the scientific drug development trend, reforming medical professors and journal editors in the United States organised an expert panel to review scientific evidence behind safety and efficacy claims in drug advertising. Without the evidence to win the panel’s seal of approval, drugs could not be advertised in the major journals.

By the 1930s, the medical elite’s efforts to reshape the thinking and values of practitioners along more scientific lines had affected the pharmaceuticals market. Since doctors wanted to prescribe modern treatments developed through laboratory science, and were impressed with clinical trials showing drugs to be effective by careful comparison with another treatment (or placebo), drug firms wanted to offer such products.

As a result, major drug companies began routinely sponsoring rigorous clinical trials of their new products. In general, they relied on medical academics and other reputable physician-investigators to win the scientific credibility that they sought. These clinical investigators typically recruited their own patients, both in private practice and in the hospital services they often oversaw, just as they did when conducting clinical research on their own account.

Among both the clinical trials organised by drug firms and those by researchers seeking only knowledge, it was uncommon to inform patients fully before enrolling them in a clinical trial. As a result, many patients were not offered the choice of standard therapies with known efficacy. This occurred despite the undisputed acceptance of the principle, enunciated in the Nazi doctors’ trials at Nuremberg, that people should not unwillingly be subjected to experiments with a significant chance of leaving them worse off.

Western doctors evidently felt secure in their humanitarian motives, even though they typically were paid for their participation in commercially sponsored trials. It was also typical for their “unsuccessful” clinical trials to remain unpublished at the sponsor’s preference, depriving medicine of the opportunity to learn from its mistakes. At the time, this too was not recognised as an ethical problem.

Rise of Big Pharma

Although no country had laws requiring clinical testing to prove a drug effective before it could be marketed, by the 1950s it had become almost a commercial necessity to claim a new drug’s clinical superiority when introducing it. The 1950s also marked the high point of the drug industry’s chemical ingenuity, with hundreds of new compounds entering the market each year.

As a result, there was an explosion of clinical testing – but not much of it met the new gold standard of quality, the double-blind randomised controlled trial (RCT), featuring multiple checks to ensure that those conducting a study cannot unwittingly influence the outcome.

By the 1960s, this wild west situation gave rise to scandalous abuses. Clinical testing was important both to advertising claims and to medical careers, but no impartial body monitored either trial rigour or patient treatment (including the American journal editors, whose system lapsed in the 1950s). In some cases it emerged that claimed animal safety testing or clinical efficacy trials had in fact not been conducted, and the results had been fabricated.

In other cases, patients with curable illnesses were left untreated, without their knowledge, so that they could serve more effectively as the control group. In a few, patients were deliberately given illnesses and then denied effective treatment – if any existed.

Stronger regulatory laws were enacted, requiring that new drugs show at least equal efficacy and safety in humans compared with existing therapies, and forcing companies to make patient data in sponsored trials available to regulators. Human subject protection regulations were also established, most based on the principle of autonomy – that patients must make free and informed decisions about participating in experiments.

Conflict of interest began to achieve some recognition as an ethical problem too, although for a long time disclosure of financial links between trial sponsors and physician investigators was restricted to private documents like grant applications and internal ethics applications.

Modern rigour

The 1970s were the age of rigour in clinical trials, with both the treatment of human subjects and, especially, the design of trials more tightly circumscribed.

But it was not to last. In the 1980s, patient activists – first those with cancer, and soon after those with AIDS – became convinced by enthusiastic researchers that cures were available but withheld by overly stringent testing requirements. They developed the political muscle to “access” unproven new drugs, on the common (but false) presumption that new drugs are much better than old ones.

Drug companies quickly recognised activists as allies in their efforts to get their products on the market as quickly as possible. Responding to the pressure, governments began establishing trial registries where companies and other trial organisers could announce clinical trials and recruit suitable patients as subjects. These were not compulsory, but drug firms found them useful.

However the defenders of scientific rigour in clinical testing found an opportunity of their own in trial registration: with most trials publicly registered, it became easier to identify those whose findings were never published (those unfavourable to a new product, for example). And it even became conceivable to require trial results to be posted in the same registries.

Leading research journals once again banded together to pressure the industry, refusing to publish clinical trials that were not registered at initiation, and requiring trial reports to list trial sponsors and payments to authors.

Ethical arguments were made that when patients agree to act as experimental subjects to advance medicine, those using them to conduct a trial are obliged to make all results available to medical science. Recent lawsuits have included the release of previously secret, unsuccessful trials as part of their settlements. Drug companies have protested every step of the way against publishing what they call commercial secrets.

We are now moving very close to government-mandated, universal release of all clinical trial data. What the next stage will be in the century-long battle over the clinical trial – as an instrument to constrain commerce, or as a commercial tool in itself – we shall no doubt soon learn. Whatever it is, we would be well advised to remember the ancient Greeks, who used the same word (pharmakon) to denote both poison and drug. For any drug strong enough to heal can also maim when used unwisely.

Click the links below for other articles in the package:

Abandoning clinical trial safeguards won’t boost local industry

Care and consent: the fraught ethics of international clinical trials

And from our archives:

Clinical trials are useful – here’s how we can ensure they stay so

What Australia should do to ensure research integrity

Register all trials, report all results – it’s long overdue

Remove industry bias from clinical trials before it’s too late