Nothing can cure significance testing. Except a bullet to the p-value.

(That sound you heard was from readers pretending to swoon.)

The paper is out, official, and free: “Manipulating the Alpha Level Cannot Cure Significance Testing”. I am one (and a minor one) of the fifty-eight (count ’em!) authors.

We argue that making accept/reject decisions on scientific hypotheses, including a recent call for changing the canonical alpha level from p = 0.05 to p = 0.005, is deleterious to the discovery of new findings and the progress of science. Given that both blanket and variable alpha levels are problematic, it is sensible to dispense with significance testing altogether. There are alternatives that address study design and sample size much more directly than significance testing does; but no statistical tool should be taken as the new magic method giving clear-cut mechanical answers. Inference should not be based on single studies at all, but on cumulative evidence from multiple independent studies. When evaluating the strength of the evidence, we should consider, for example, auxiliary assumptions, the strength of the experimental design, and implications for applications. To boil all this down to a binary decision based on a p-value threshold of 0.05, 0.01, 0.005, or anything else, is not acceptable.
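To see just how arbitrary the threshold is, here is a minimal sketch (my own toy numbers, not from the paper) of a two-sided z-test whose verdict flips between the canonical alpha = 0.05 and the proposed alpha = 0.005 — same data, same evidence, opposite "decision":

```python
import math

def z_test_p_value(mean_diff, sd, n):
    """Two-sided p-value for a one-sample z-test of the hypothesis mean = 0.

    Uses the normal CDF via math.erf; mean_diff, sd, n are made-up inputs
    for illustration only.
    """
    z = mean_diff / (sd / math.sqrt(n))
    # P(|Z| > |z|) for a standard normal Z
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

p = z_test_p_value(mean_diff=0.5, sd=2.0, n=100)  # z = 2.5
print(round(p, 4))   # about 0.0124
print(p < 0.05)      # True: "significant" under the old threshold
print(p < 0.005)     # False: "not significant" under the proposed one
```

The data do not change between the last two lines; only the dichotomizing rule does. That is the whole complaint in miniature.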

My friends, this is peer-reviewed, therefore according to everything we hear from our betters, you have no choice but to believe each and every word. Criticizing the work makes you a science denier. You will also be reported to the relevant authorities for your attitude if you dare cast any doubt.

I mean it. Peer review is everything, a guarantor of truth. Is it not?

Or do we allow the possibility of error? And, if we do, if we are allowed to question this article, are we not allowed to question every article? That sounds mighty close to Science heresy, so we’ll leave off and concentrate on the paper.

Now I am with my co-authors a lot of the way. Except, as regular readers know, I would impose my belief that null hypothesis significance testing be banished forevermore. Just as the “There is some good in p-values if properly used” folks would impose their belief that there is some good in p-values. Which there is not.

Another matter is “effect size”, which almost always means a statement about a point estimate of a parameter inside an ad hoc model. These are not plain-English effect sizes, which imply causality: how much effect x has on y. Statistical models can’t tell you that. They can, when used in a predictive sense, say how much the uncertainty of y changes when x does. So “effect size” is, or should be, thought of in an entirely probabilistic way.
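A small sketch of that predictive reading (my own illustration; the model, coefficients, and threshold c are all made up). The parameter b1 is what gets reported as the “effect size”; the probabilistic statement is instead how much the chance of y exceeding some decision-relevant level c moves when x does:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def prob_y_exceeds(c, x, b0=1.0, b1=0.8, sigma=2.0):
    """Predictive P(y > c | x) under an assumed model y ~ N(b0 + b1*x, sigma).

    b0, b1, sigma are hypothetical fitted values, for illustration only.
    """
    return 1 - normal_cdf((c - (b0 + b1 * x)) / sigma)

# The point-estimate "effect size" is b1 = 0.8. The predictive statement:
p0 = prob_y_exceeds(c=2.0, x=0)  # P(y > 2 | x = 0), about 0.31
p1 = prob_y_exceeds(c=2.0, x=1)  # P(y > 2 | x = 1), about 0.46
print(p0, p1, p1 - p0)
```

The difference p1 − p0 is a statement about uncertainty in the observable y, not a causal claim about what x does to y — which is the distinction the paragraph above is drawing.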

The conclusion we can all agree with:

It seems appropriate to conclude with the basic issue that has been with us from the beginning. Should p-values and p-value thresholds, or any other statistical tool, be used as the main criterion for making publication decisions, or decisions on accepting or rejecting hypotheses? The mere fact that researchers are concerned with replication, however it is conceptualized, indicates an appreciation that single studies are rarely definitive and rarely justify a final decision. When evaluating the strength of the evidence, sophisticated researchers consider, in an admittedly subjective way, theoretical considerations such as scope, explanatory breadth, and predictive power; the worth of the auxiliary assumptions connecting nonobservational terms in theories to observational terms in empirical hypotheses; the strength of the experimental design; and implications for applications. To boil all this down to a binary decision based on a p-value threshold of 0.05, 0.01, 0.005, or anything else, is not acceptable.

Bonus: Disguising p-values as “magnitude-based inference” won’t help, either, as this amusing story details. Gist: some guys tout massaged p-values as innovation, are exposed as silly frauds, and cry victim, a cry which convinces some.

Moral: The best probability is probability, and not some ad hoc conflation of probability with decision, which is what all “hypothesis tests” are.
