When it comes to scientific research, it's important to view it with a critical eye. My Weightology Weekly subscribers know that I offer a critical analysis of every study that I cover each week. Scientific research is never perfect, and every study has limitations. Viewing research with a critical eye not only helps you determine how the results of a study might apply to you, but also helps you assess where a study's results fit into the grand scheme of things and even whether it can be trusted at all. Scientific criticism is a valuable tool in the advancement of research, but it can also be invalid, misguided, based on erroneous information or misunderstandings, and/or subject to the personal biases of the individual making the criticisms. Unfortunately, Dr. Ralph Carpinelli's recent critique of my meta-analysis on training volume and strength falls into the latter category.

Some Background on Carpinelli

Dr. Carpinelli is a professor at Adelphi University in New York who has written many review papers that are highly critical of popular concepts in the field of resistance training, as well as of scientific organizations like the American College of Sports Medicine and the National Strength and Conditioning Association. In fact, Dr. Carpinelli has made something of a career out of writing critical reviews. It appears that he has not published any original research since 2003, and has not been published in a PubMed-indexed journal since 2004. The vast majority of his papers have been qualitative, non-systematic reviews. Dr. Carpinelli's views can be considered similar to those of High Intensity Training (H.I.T.) advocates; these views include but are not limited to:

Single sets of resistance exercise to failure result in similar strength and muscle mass gains as multiple sets

Periodization does not result in greater strength gains compared to non-periodized training

Motor unit recruitment and strength gains are similar whether you use heavy weights or light weights, as long as you train to momentary muscular failure

The problem with qualitative, non-systematic reviews is that they are subject to the biases of the author, much more so than quantitative reviews. In fact, I wrote my own critical analysis of Carpinelli's reviews for Alan Aragon's Research Review as well as for my own Weightology Weekly subscribers. I found that Carpinelli was guilty of the same things he accused other researchers of, including leaving out research that did not conform to his views, misinterpreting and/or mis-referencing studies, and making claims that lacked scientific support. Unfortunately, Dr. Carpinelli commits many of the same errors in his recent review of my meta-analysis.

My Meta-Analysis

In 1998, Carpinelli published a review paper noting that the vast majority of studies comparing single sets to multiple sets found no significant differences in strength or muscle mass gains between the two protocols. Since that time, at least 10 studies have emerged showing superior strength gains with multiple sets, although Carpinelli does not seem to acknowledge these studies. Another issue that Carpinelli has never addressed is that of statistical power. In scientific research, the number of subjects you have determines how small an effect you can detect when comparing two groups. The more variable the effect is from one person to the next, the more subjects you need to detect a significant difference between groups. When it comes to strength training, people vary dramatically in how they respond to a training program. One person may have dramatic strength gains, while another may have minimal gains. Many strength training studies have very small subject numbers, so many studies comparing single sets to multiple sets will fail to detect significant differences simply due to inadequate sample sizes. In statistics, the failure to observe a significant difference when one really exists is known as a type II error, or false negative. In over a decade of critical reviews, Carpinelli has never brought up the issue of sample size or false negatives in resistance training studies.
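To see how easily small studies miss a real effect, here is a quick simulation. The numbers are hypothetical, not taken from any actual study: a true extra gain of 5 kg for one group, with a 15 kg standard deviation in individual responses, tested with a simple two-sample z-test.

```python
# Simulation: why small samples produce false negatives (type II errors).
# All numbers here are made up for illustration: true extra gain of 5 kg
# for multiple sets, with a 15 kg standard deviation in individual responses.
import random
import math

random.seed(42)

def detects_difference(n, true_diff=5.0, sd=15.0):
    """One simulated study: two-sample z-test on group mean strength gains."""
    single = [random.gauss(20.0, sd) for _ in range(n)]              # single-set group
    multi = [random.gauss(20.0 + true_diff, sd) for _ in range(n)]   # multiple-set group
    diff = sum(multi) / n - sum(single) / n
    se = sd * math.sqrt(2.0 / n)      # standard error of the difference in means
    return abs(diff / se) > 1.96      # two-sided test at alpha = 0.05

def power(n, trials=2000):
    """Fraction of simulated studies that detect the (real) difference."""
    return sum(detects_difference(n) for _ in range(trials)) / trials

print(f"n = 10 per group: power ~ {power(10):.2f}")
print(f"n = 100 per group: power ~ {power(100):.2f}")
```

In this simulation, studies with 10 subjects per group detect the real difference only around 10% of the time; it takes on the order of 100 subjects per group before the effect is found more often than not. A pile of individually "non-significant" small studies can therefore hide a genuine effect.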

This is where a meta-analysis comes in. A meta-analysis is a "study of studies", where you aggregate the results of numerous studies to get an idea of the overall trend among a large body of research. Meta-analyses, when properly done, can give you a general sense of direction across a large body of studies with small sample sizes. In 2002 and 2003, Dr. Matthew Rhea published meta-analyses looking at the effects of the number of sets on strength, and found multiple sets to be associated with superior strength gains. A separate group of authors published another meta-analysis which also showed multiple sets to be superior. In response, Carpinelli published a highly critical review of these three papers. I agreed with about 85% of the criticisms Carpinelli made of these papers; these meta-analyses suffered from a large number of shortcomings that made their conclusions questionable. However, Carpinelli never made any effort to improve upon the shortcomings of these papers and provide his own work. In fact, for all of his critical reviews, Carpinelli has not offered any original work to support his own views. Richard Berger, responding to a Carpinelli critique of Berger's work from the 1960's, stated:

I would suggest to Dr Carpinelli that he conduct research of his own in the hope of gaining support for his position. If his zealousness, which is commendable, were redirected to research rather than to critiquing old studies, his academic contributions would be more fruitful.

Since I agreed with many of the criticisms that Dr. Carpinelli made of the Rhea and Wolfe meta-analyses, I set out to improve upon the limitations of those papers and perform my own meta-analysis. Having published a meta-analysis in the past, and having learned from a biostatistician who was an expert in meta-analyses, I had the requisite background to do such a paper. I set out to improve upon the problems with the previous meta-analyses; some of these improvements included:

Using strict, predefined inclusion and exclusion criteria. The papers by Rhea and Wolfe had very loosely defined criteria.

Including only studies where the only difference between groups was the number of sets; all other variables had to be identical between the groups. Rhea and Wolfe included studies where groups differed on more characteristics than just the number of sets, which can introduce confounding variables.

Addressing the problem of outlier studies by doing a sensitivity analysis, where studies are removed one at a time to determine their influence on the final outcomes.

Using a statistical model suited to the unique problem of analyzing a group of studies. You have to simultaneously model the variation between treatment groups within each study (i.e., between the single- and multiple-set groups) and the variation between different studies (i.e., study A will have different results from study B). You also need to account for the fact that you can't account for everything: there are many reasons why study A might differ from study B, and you can't capture them all, so you use a statistical model (the technical term is a random effects model) that deals with that unexplained variation. In fact, the model I used makes it harder to find significant effects than the typical statistics you see in most scientific papers.
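To make these last two points concrete, here is a minimal sketch of one standard random-effects approach (the DerSimonian-Laird method), together with a leave-one-out loop of the kind a sensitivity analysis uses. To be clear, this illustrates the general technique only; it is not the exact model from my paper, and the effect sizes and variances are invented:

```python
# Sketch of random-effects pooling (DerSimonian-Laird) plus a leave-one-out
# sensitivity check. The (effect size, within-study variance) pairs below are
# made up for illustration; they are not data from the meta-analysis.
import math

studies = [(0.80, 0.04), (0.05, 0.10), (0.70, 0.05), (0.10, 0.08), (0.60, 0.06)]

def random_effects_pool(studies):
    """Pool (effect, variance) pairs, allowing for between-study variation."""
    w = [1.0 / v for _, v in studies]                  # fixed-effect weights
    fixed = sum(wi * d for wi, (d, _) in zip(w, studies)) / sum(w)
    # Cochran's Q: heterogeneity beyond what sampling error alone explains
    q = sum(wi * (d - fixed) ** 2 for wi, (d, _) in zip(w, studies))
    df = len(studies) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                      # between-study variance
    # Random-effects weights fold tau^2 into every study's variance, which
    # widens the pooled confidence interval -- a more conservative test.
    w_re = [1.0 / (v + tau2) for _, v in studies]
    pooled = sum(wi * d for wi, (d, _) in zip(w_re, studies)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2

pooled, se, tau2 = random_effects_pool(studies)
print(f"pooled effect = {pooled:.2f} +/- {1.96 * se:.2f} (tau^2 = {tau2:.3f})")

# Leave-one-out sensitivity analysis: re-pool with each study removed to see
# whether any single study drives the overall result.
for i in range(len(studies)):
    subset = studies[:i] + studies[i + 1:]
    p, _, _ = random_effects_pool(subset)
    print(f"without study {i + 1}: pooled effect = {p:.2f}")
```

Note how the between-study variance (tau^2) makes the pooled standard error larger than a simple fixed-effect pool would give, which is exactly why this kind of model makes it harder, not easier, to declare an effect significant.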

My analysis was eventually published in the Journal of Strength and Conditioning Research and, like Rhea and Wolfe, found multiple sets to be associated with superior strength gains compared to single sets. As you might expect, Dr. Carpinelli has now followed up with another critical review, a critical review that fails to H.I.T. its target on many levels. This review was published in an obscure Romanian sports science journal called Medicina Sportiva, a journal that is not indexed by PubMed. I had originally intended to respond directly to the journal, but I got no response when I emailed the editor requesting to write a reply. It is unfortunate that a journal allows publication of such a poor critique, yet does not allow the author of the original analysis to respond. Thus, I am writing my response here on my website for all to see. Because of the length of my response, I am going to present it in sections. I warn you right now that it is going to be somewhat academic and technical at times, so just look for the main points if you can't stomach that type of writing. I will also write a final post that summarizes all of the main points in an easy-to-understand manner.

OK, let's get to it. Click here to read part 2...