The US Food and Drug Administration (FDA) is responsible for approving drugs for medical uses, and the agency has developed a set of expectations for using results from randomized clinical studies to determine (with varying degrees of success) whether a drug is safe and effective. But advances in materials science and miniaturization have led to an explosion in the use of medical implants, which do everything from replacing balky knees to restarting arrhythmic hearts. Two new evaluations of the clinical studies used during the implant approval process suggest that the approval process for implants isn't nearly as rigorous as it might be.

The significance of the FDA's approval is made clear by the authors of one of the evaluations, which was published yesterday by the Journal of the American Medical Association. As is the case with drugs, many physicians view the FDA's acceptance as an indication that a device is safe and effective, as do many insurance companies. The makers of the implants, for their part, often view approval as a sign that it's safe to begin a direct-to-consumer advertising campaign for their product.

And, to a certain degree, the implant makers are right. Last year, the Supreme Court ruled that FDA approval of a medical device prevents consumers from suing based on claims that the device is poorly designed or unsafe. In short, FDA approval confers an important validation on an implant.

Despite the centrality of its approval, the agency has only been in the implant approval business since the 1970s. The rate of implant development has also increased dramatically during the last few decades, during which time the agency has revamped its drug approval process and dealt with some high-profile cases of political and industry interference. Thus, an evaluation of its procedures for implant approval would seem timely, which may explain why two such evaluations are being released at the same time.

The JAMA study is being joined by one sponsored by the FDA itself, which will appear in the American Journal of Therapeutics. Both look at clinical studies that accompanied the Premarket Approval (PMA) submissions from various implant makers between 2001 and 2008, focusing on the devices that can easily be considered the most critical: cardiac implants such as stents, defibrillators, ventricular assist devices, etc. The two cover a variety of similar measures of the scientific rigor of these studies, but come to some significantly different conclusions about what those measures mean.

So, for example, the JAMA study highlights how over half of the PMAs were supported by only a single clinical study. Only a quarter of these were randomized, and less than 15 percent were blinded, meaning those taking part in the study didn't know who was receiving the new technology. Only about half included a control population. For drug approval, a randomized, double-blind trial has become the gold standard, so it might be a bit surprising that so few of these fit that description. However, the authors of the JAMA study recognize that blinding people to a physical implant can be difficult and, for some of the devices studied, completely impossible.

The FDA study considers this to be such a significant issue that it doesn't even bother to try to evaluate whether a study was blinded. Its authors also see little problem with devices being supported by only a single study, as they point out that implants are redesigned at a high rate (they cite a typical lifetime of less than 18 months on the market), and many of the PMAs were simply for a revision to a previously approved device. Instead, the authors combined a variety of factors—well-defined end points, sufficient subject population, etc.—into an overall quality score.

Another area of disagreement is the use of proxies for health outcomes. So, for example, a stent's effectiveness might be measured in terms of blood flow, which serves as a proxy for improved cardiac function. Again, the JAMA study considers this a problem, while the FDA-backed one does not. The former also highlights the use of training periods, in which the medical staff learns procedures for use of a device, as a potential source of bias, with trials moving to the experimental period only when things are going well.

Despite all these differences, there is significant overlap between the conclusions. The FDA study suggests that about 18 percent of the studies lacked a high-quality assessment of implant effectiveness, and about 40 percent lacked one for safety. All told, nearly half of the studies fell short on one or the other. The JAMA study also found end-point evaluation lacking, although its figures weren't directly comparable.

Both studies highlighted problems with tracking the patients enrolled in these studies, and a lack of detailed data on sex and ethnicity among the patients. The trials are supposed to include a population that's reflective of the US, and the lack of data makes it impossible to tell whether that's the case. In fact, the JAMA paper's authors found it impossible to tell whether a number of the studies even included any patients within the US (many clinical trials are now taking place overseas, which may make finding a population that mirrors the US even more difficult).

The authors of the FDA-backed study also point out that many of the trials completely lacked data on relevant risks associated with cardiac disease, such as hypertension, smoking, and diabetes, which makes the data difficult to interpret.

A number of reports, such as one in The New York Times, suggest that the FDA may promote its own evaluation at the expense of the independent one. The substantial agreement between the two reports, however, suggests that there's a real problem with obtaining clinical data of sufficient quality for a thorough evaluation of cardiac medical implants, and quite probably medical implants in general. If the FDA intends to improve the situation, the common conclusions of the two reports would give that effort added weight.

Even the differences between the two reports seem informative. If the relevant experts can't agree on whether the standards that apply to clinical trials of drugs can or should be applied consistently to medical implants, then that would seem to signal that it's time to develop standards that are appropriate for trials of this class of medical device.

American Journal of Therapeutics, 2009. Publication in progress.

JAMA, 2009. Vol 302, No. 24. DOI unavailable.