Abstract

Permutation tests are often presented in a rather casual manner, in both introductory and advanced statistics textbooks. The appeal of the cleverness of the procedure seems to replace the need for a rigorous argument that it produces valid hypothesis tests. The consequence of this educational failing has been a widespread belief in a “permutation principle”, which is supposed invariably to give tests that are valid by construction, under an absolute minimum of statistical assumptions. Several lines of argument are presented here to show that the permutation principle itself can be invalid, concentrating on the Fisher-Pitman permutation test for two means. A simple counterfactual example illustrates the general problem, and a slightly more elaborate counterfactual argument is used to explain why the main mathematical proof of the validity of permutation tests is mistaken. Two modifications of the permutation test are suggested, and a very modest simulation indicates that they are valid. Where simulation software is readily available, the validity of a specific permutation test can be investigated easily, requiring only a minimal understanding of statistical technicalities.

1. Introduction

Permutation tests are frequently recommended on two grounds. First, they require fewer assumptions than corresponding model-based tests, and second, their validity (as statistical hypothesis tests) is guaranteed by construction. The purpose of this note is to indicate a way in which the first property undermines the second. The setting is the usual one in which the Mann-Whitney test is asserted to be appropriate, but the discussion goes beyond this to the assertion of a more general permutation principle (Lehmann [1]).

Users of statistical methods appear to be of two minds about permutation tests. On one hand, since the “randomization” test in the context of a randomized clinical trial is an example of a permutation test, much of the argument in favor of randomization as an experimental principle has been that there is a guaranteed correct statistical test. This argument has had an enormous impact on the design of biomedical studies, with virtually all researchers agreeing that randomization is necessary for a study to be valid. If a randomization test is invalid, however, then the technical argument for randomization becomes rather thin. On the other hand, formal permutation tests occur fairly rarely in statistical practice. In the two-sample situation discussed here, the dominant statistical method is the 𝑡-test, and although randomization arguments have been used to justify it, it can also be recommended without mentioning randomization. Perhaps the only other analyses used with some frequency are the nonparametric alternatives to the 𝑡-test, and these are often justified by appealing to a “permutation principle”, giving the impression that these alternatives minimize untestable assumptions. In fact, the often-used Wilcoxon and Mann-Whitney tests do not follow from a permutation principle, but even if they did, I will argue here that this principle is invalid.

First, I will present an exaggerated example that indicates in a practical sense how the logic of the general permutation test can break down. Secondly, using the notion of counterfactuals, I will show in a slightly more theoretical way what the source of the problem is. Thirdly, I will suggest two potential alternatives to the permutation test, and argue in their favor on the basis of a small simulation. Finally, I will indicate how Fisher’s exact test fails to follow counterfactual principles, based on an example.

2. Regression Example

The two-sample location problem can be expressed as a model for individual pairs (𝑦, 𝑥) of the form 𝑦 = 𝜃𝑥 + 𝑒, where 𝑥 takes the values 1 and −1 (indicating two groups), 𝑒 is a chance perturbation, having mean 0 and being uncorrelated with 𝑥, and 𝜃 is the parameter of interest. In this paper, I will only consider the case in which half of the observations have 𝑥 = 1, so that the total sample size 𝑛 is even. Given data of this form, the conventional estimate of 𝜃 is

𝜃̂ = ∑𝑥𝑦/𝑛. (2.1)

The null hypothesis is that 𝜃 = 0, and the permutation test of this hypothesis is based on the putative null distribution of the above estimate, obtained in the following way. Let 𝑥̃ denote a random permutation of the 𝑥-values. For simulation purposes, (𝑦, 𝑥) is replaced by (𝑦, 𝑥̃): a large number of random permutations 𝑥̃ are generated, with 𝑦 fixed, and for each one the corresponding estimate of 𝜃 is computed. Finally, this large sample of estimates is taken to estimate the null hypothesis distribution of 𝜃̂.
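As a concrete sketch, the procedure just described can be written in a few lines of Python with NumPy. (This is my own illustrative code, not the paper's; the function name, seed, and permutation count are arbitrary choices.)

```python
import numpy as np

rng = np.random.default_rng(0)

def conventional_permutation_test(y, x, n_perm=1000):
    """Conventional (two-sided) permutation test of theta = 0 for the
    model y = theta*x + e, with x in {-1, +1} and theta_hat = sum(x*y)/n."""
    n = len(y)
    theta_hat = np.dot(x, y) / n
    # Hold y fixed, permute the group labels x, and recompute the estimate.
    perm = np.array([np.dot(rng.permutation(x), y) / n for _ in range(n_perm)])
    # p-value: proportion of permuted estimates at least as extreme as observed.
    p = float(np.mean(np.abs(perm) >= abs(theta_hat)))
    return theta_hat, p

# Example: data with a clear effect should give a small p-value.
n = 100
x = np.array([1, -1] * (n // 2))
y = 1.0 * x + rng.standard_normal(n)
theta_hat, p = conventional_permutation_test(y, x)
```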

A difficulty with this approach is illustrated in Figures 1 and 2. Figure 1(a) shows a unit Normal distribution, intended as the distribution of 𝑦 (or 𝑒) under the null hypothesis that 𝜃 = 0. Half of the values from this distribution are regarded as cases with 𝑥 = −1, the remainder as cases with 𝑥 = 1. Figure 1(b) shows the simulated permutation null distribution of the estimate of 𝜃. Figure 2(a) shows another distribution for 𝑦 (or 𝑒), again under the null hypothesis. Figure 2(b) shows the simulated permutation null distribution of the estimate of 𝜃 in this case. Both of these cases are correct, in the sense that if the distribution of 𝑒 is given on the left, then the null distribution of 𝜃̂ is validly approximated on the right.



(a)

(b)

(a)

(b)



(a)

(b)

(a)

(b)

Now let us look at the situation differently. Suppose that Figure 1(a) shows the distribution of 𝑒, but that 𝜃 is large enough to produce the distribution of 𝑦 seen in Figure 2(a). The distribution of the parameter estimate under the null hypothesis is still shown in Figure 1(b). But the permutation null distribution that will be computed from data coming from the distribution in Figure 2(a) appears in Figure 2(b). This is an example of the following: in a situation where, as a matter of fact, the null hypothesis is false, the permutation estimate of the null distribution can also be false.
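This phenomenon can be checked directly by simulation. The following NumPy sketch (my own; the sample size, seed, and effect size 𝜃 = 2 are illustrative) compares the spread of the permutation distribution computed from the observed 𝑦 with the spread that the counterfactual null data 𝑒 would have produced:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = np.array([1, -1] * (n // 2))
e = rng.standard_normal(n)          # null perturbations, as in Figure 1(a)

def perm_sd(values, n_perm=2000):
    # SD of the permutation distribution of sum(x*values)/n
    stats = [np.dot(rng.permutation(x), values) / n for _ in range(n_perm)]
    return float(np.std(stats))

sd_null = perm_sd(e)                # what the counterfactual null data would give
sd_obs = perm_sd(2.0 * x + e)       # what the observed data give when theta = 2
print(sd_null, sd_obs)              # sd_obs is markedly larger
```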

3. Counterfactuals

One way of understanding this situation is based on the method of counterfactuals (Rubin [2]). In the two-sample circumstance, we would write the outcomes as 𝑦(𝜃, 𝜓), considered as a stochastic process indexed by the parameter of interest (𝜃) and the nuisance parameter (𝜓). Again, in our situation this would be 𝑦(𝜃, 𝜓) = 𝜃𝑥 + 𝑦(0, 𝜓), where now 𝑦(0, 𝜓) plays the role of 𝑒 above. In the exposition by Lehmann [1], the parameter 𝜓 would represent a probability density, and 𝑦(0, 𝜓) would have density 𝜓. The value 𝑦(0, 𝜓) is generated by some chance mechanism, and then 𝑦(𝜃, 𝜓) is generated by the model equation above. The values we observe are 𝑦(𝜃, 𝜓), but for values of 𝜃 and 𝜓 that we do not know. Counterfactuality comes in because 𝑦(0, 𝜓) is the value we would have observed if the null hypothesis had been true. As is usually the case in counterfactual arguments, we can only actually observe 𝑦(0, 𝜓) under special circumstances (when the null hypothesis is true), so that in general it is only a hypothetical quantity.

Here is the argument of Lehmann [1]. The set of values {𝑦(0, 𝜓)} (equivalent to the order statistic) is complete and sufficient for the model defined by 𝜃 = 0 and 𝜓 = any density. It follows that the null conditional distribution of the sample 𝑦(0, 𝜓), given the order statistic {𝑦(0, 𝜓)}, is based on the permutation distribution, irrespective of the value of 𝜓. This is the basis for the permutation test. In practice, however, it is not {𝑦(0, 𝜓)} that is observed, but {𝑦(𝜃, 𝜓)}. The conventional permutation distribution is based on the observed order statistic, but this is not the order statistic that would have been observed if the null hypothesis had been true. This is the core of the counterfactual objection to the permutation test: it conditions on the observed order statistic {𝑦(𝜃, 𝜓)} instead of conditioning on the null order statistic {𝑦(0, 𝜓)}, which Lehmann used for his proof. Because it is conditioning on the latter that justifies the test, the argument for conditioning on the former is not obviously correct.

In Lehmann’s theorem where the permutation test is derived, he implicitly considers only observations of the form 𝑦(0, 𝜓), and shows that the (unbiased) rejection region 𝑅 must satisfy 𝑃[𝑅 ∣ {𝑦(0, 𝜓)}] = 𝛼, the level of the test. In his examples, he implies that 𝑅 can be found by solving 𝑃[𝑅 ∣ {𝑦(𝜃, 𝜓)}] = 𝛼. But since his theorem implicitly assumed the null hypothesis, he did not show that 𝑅 must satisfy this latter equation, and as we saw in the example above, it need not be satisfied.

There is yet another way of seeing the problem, which connects it more meaningfully to the example. The observed order statistic {𝑦(𝜃, 𝜓)} has the same distribution as an order statistic {𝑦(0, 𝜓₀)}, where 𝜓₀ is a distribution that depends on 𝜃 and 𝜓. The permutation distribution based on {𝑦(0, 𝜓₀)} is indeed a possible null distribution, but it is not the null distribution that would have been obtained if 𝜃 had been 0 and 𝜓 had been left unchanged. Thus, it is precisely the fact that the permutation argument permits different values of (𝜃, 𝜓) to produce the same order statistic that creates the problem.

Finally, there is a way of seeing how the sufficiency argument actually misleads the inference. The argument of Lehmann [1] is that conditional on the order statistic, the distribution of the sample does not depend on the nuisance parameter 𝜓 , in the submodel where 𝜃 = 0 . This is perhaps naturally interpreted to mean that the nuisance parameter has been eliminated from the problem. But this is not the correct interpretation of Lehmann’s equations. What they really say is that the only influence that the nuisance parameter 𝜓 has on the distribution of the sample is carried by the order statistic. The order statistic completely mediates the effect of 𝜓 on the distribution of the sample. Thus, both 𝜃 and 𝜓 jointly influence the permutation distribution of the sample, but the permutation test does not separate their individual influences.

4. Better Estimates of the Null Distribution

One obvious strategy to repair the permutation test is to replace the order statistic by {𝑦(𝜃, 𝜓) − 𝜃̂𝑥}. This is an attempt to estimate the true null order statistic {𝑦(0, 𝜓)}. The potential objection is that the variability of this estimate might be less than the variability of the true null order statistic, producing an invalid test. The permutation test based on this replacement is called the adjusted permutation test.
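A minimal sketch of this adjusted test, under the same illustrative Python conventions as before (the function name and seed are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def adjusted_permutation_test(y, x, n_perm=1000):
    """Adjusted permutation test: permute against the estimated null
    order statistic y - theta_hat*x instead of the observed y."""
    n = len(y)
    theta_hat = np.dot(x, y) / n
    y0 = y - theta_hat * x          # estimate of the counterfactual y(0, psi)
    perm = np.array([np.dot(rng.permutation(x), y0) / n for _ in range(n_perm)])
    return float(np.mean(np.abs(perm) >= abs(theta_hat)))

n = 100
x = np.array([1, -1] * (n // 2))
e = rng.standard_normal(n)
p_null = adjusted_permutation_test(0.0 * x + e, x)   # null hypothesis true
p_alt = adjusted_permutation_test(1.0 * x + e, x)    # null hypothesis false
```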

Another approach is to use a simulation based on a random permutation 𝑥̃, but conditioned so that it is orthogonal to 𝑥, that is, ∑𝑥̃𝑥 = 0. The corresponding simulated estimates are then of the form

∑𝑥̃𝑦(𝜃, 𝜓)/𝑛 = 𝜃∑𝑥̃𝑥/𝑛 + ∑𝑥̃𝑦(0, 𝜓)/𝑛 = ∑𝑥̃𝑦(0, 𝜓)/𝑛. (4.1)

The simulated distribution of the estimate is thus based on order statistics that would have counterfactually been seen under the null hypothesis, but with a restriction on the permutations. It is not clear that this procedure is fully justified, because of the orthogonality restriction. Here we call this the orthogonal permutation test.
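One way to generate such an orthogonal permutation, sketched in Python: independently rearrange the labels within the 𝑥 = 1 group and within the 𝑥 = −1 group, assigning exactly half +1s in each, which forces ∑𝑥̃𝑥 = 0. (This split-within-groups construction is my own device for the sketch; it requires 𝑛 divisible by 4.)

```python
import numpy as np

rng = np.random.default_rng(0)

def orthogonal_permutation(x):
    """A random rearrangement x_tilde of the +/-1 labels in x satisfying
    sum(x_tilde * x) == 0; needs len(x) divisible by 4."""
    x = np.asarray(x)
    m = len(x) // 2
    assert len(x) % 4 == 0, "orthogonal permutations need n divisible by 4"
    xt = np.empty_like(x)
    half = [1] * (m // 2) + [-1] * (m // 2)
    for group in (1, -1):
        # exactly half +1s among the positions where x == group
        idx = np.flatnonzero(x == group)
        xt[idx] = rng.permutation(half)
    return xt

x = np.array([1, -1] * 4)          # n = 8
xt = orthogonal_permutation(x)     # balanced and orthogonal to x
```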

In the simulation, we call the usual permutation test the conventional permutation test. Because everything is based on a simulation, we know the true null order statistic, and the permutation test based on it is called the null permutation test. This gives us altogether four permutation tests, of which only the null permutation test is (within simulation variation) known to be correct.

5. A Simulation

To compare these four tests, I performed a small simulation in the above regression situation. The simulation used sample size 16, since the orthogonal permutation test is only possible for 𝑥̃ taking values −1 and 1 if 𝑛 is divisible by 4. The distribution of 𝑒 was Normal(0, 1), the values of 𝜃 ran from 0 to 1.3 in steps of 0.1, the number of simulated tests for each value of the parameter was 1000, and the number of permutations in each test was 100. The simulation was carried out in Stata (version 9), using a permutation routine written specifically for this research.
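Although the original simulation was run in Stata, its core can be re-sketched in Python/NumPy. The sketch below (my own code, with fewer replications than the paper, so the figures are rough) compares the conventional permutation standard deviation with the one based on the counterfactual null data; the null column should stay near 0.25 while the conventional column grows with 𝜃, mirroring Table 2:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_rep, n_perm = 16, 200, 100
x = np.array([1, -1] * (n // 2))

def perm_sd(values):
    # SD of the permutation distribution of sum(x*values)/n
    stats = [np.dot(rng.permutation(x), values) / n for _ in range(n_perm)]
    return np.std(stats)

results = {}
for theta in (0.0, 0.6, 1.3):
    conv, null = [], []
    for _ in range(n_rep):
        e = rng.standard_normal(n)
        conv.append(perm_sd(theta * x + e))  # conventional: permute the observed y
        null.append(perm_sd(e))              # null: permute the counterfactual y(0)
    results[theta] = (np.mean(conv), np.mean(null))
    print(f"theta={theta}: conventional SDE {results[theta][0]:.3f}, "
          f"null SDE {results[theta][1]:.3f}")
```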

The results of the simulation are shown in Tables 1–4. Table 1 shows the average estimate (each based on 1000 replications), verifying that all estimates of the null distribution have the correct mean of 0. For the null case this can be shown theoretically, and so the values here serve as a positive control on the simulations. The standard deviations of the estimates (SDE) are shown in Table 2. Again the null test values are constant, as they should be within the simulation variation. The orthogonal test has essentially the same SDEs as the null test, and the adjusted test is only slightly higher. In contrast, the conventional test estimates an SDE that grows with the size of the parameter, another result that can be verified theoretically (see the Appendix).

 𝜃      Null       Conventional   Adjusted    Orthogonal
 0.0   −0.00027    −0.00027       −0.00021    −0.00003
 0.1   −0.00007    −0.00006        0.00002     0.00010
 0.2   −0.00005    −0.00017        0.00005    −0.00049
 0.3    0.00022     0.00022        0.00026    −0.00024
 0.4    0.00021     0.00027        0.00021     0.00033
 0.5    0.00051     0.00047        0.00048    −0.00009
 0.6   −0.00015    −0.00005       −0.00005    −0.00038
 0.7    0.00050     0.00053        0.00046     0.00018
 0.8   −0.00025    −0.00035       −0.00033     0.00007
 0.9   −0.00066    −0.00070       −0.00060    −0.00038
 1.0   −0.00007    −0.00030       −0.00005     0.00004
 1.1   −0.00015     0.00032       −0.00007     0.00006
 1.2    0.00013    −0.00012        0.00012    −0.00009
 1.3    0.00014    −0.00035        0.00011    −0.00019

Table 1: Means of the sampling distributions of estimates of the null mean, using four procedures: (1) null, the usual method when the null is true; (2) conventional, the usual method when the null is false; (3) adjusted, subtracting off the estimate; (4) orthogonal, using only orthogonal permutations.



 𝜃      Null    Conventional   Adjusted   Orthogonal
 0.0    0.246   0.246          0.237      0.246
 0.1    0.246   0.247          0.237      0.246
 0.2    0.246   0.251          0.237      0.246
 0.3    0.246   0.257          0.237      0.246
 0.4    0.246   0.265          0.237      0.246
 0.5    0.246   0.276          0.237      0.246
 0.6    0.246   0.288          0.237      0.246
 0.7    0.246   0.302          0.237      0.246
 0.8    0.247   0.318          0.238      0.246
 0.9    0.246   0.335          0.238      0.246
 1.0    0.246   0.353          0.237      0.246
 1.1    0.246   0.372          0.237      0.246
 1.2    0.247   0.392          0.238      0.246
 1.3    0.246   0.412          0.237      0.246

Table 2: Standard deviations of the estimators in Table 1.



 𝜃      Null    Conventional   Adjusted   Orthogonal
 0.0    0.053   0.053          0.067      0.061
 0.1    0.112   0.100          0.132      0.123
 0.2    0.212   0.186          0.221      0.212
 0.3    0.337   0.283          0.340      0.316
 0.4    0.487   0.426          0.487      0.470
 0.5    0.643   0.574          0.645      0.619
 0.6    0.759   0.726          0.768      0.756
 0.7    0.857   0.835          0.876      0.862
 0.8    0.924   0.913          0.940      0.929
 0.9    0.963   0.965          0.978      0.974
 1.0    0.984   0.986          0.991      0.991
 1.1    0.996   0.995          0.996      0.996
 1.2    0.997   0.998          0.999      0.999
 1.3    0.999   0.999          1.000      0.999

Table 3: Powers of the tests based on the estimators in Table 1.



 𝜃      Null    Adjusted   Orthogonal
 0.1    0.992   0.921      0.989
 0.2    0.964   0.895      0.961
 0.3    0.919   0.853      0.915
 0.4    0.861   0.801      0.859
 0.5    0.798   0.741      0.793
 0.6    0.730   0.678      0.730
 0.7    0.664   0.617      0.661
 0.8    0.601   0.558      0.597
 0.9    0.541   0.503      0.538
 1.0    0.487   0.452      0.485
 1.1    0.437   0.406      0.436
 1.2    0.396   0.368      0.394
 1.3    0.358   0.332      0.356

Table 4: Efficiency of the estimators in Table 1.



The probability of rejecting the null hypothesis is shown in Table 3. Again the results of the orthogonal test are essentially the same as those of the correct null test, with the adjusted test losing only a small amount of power. Although the conventional test is worse than both the adjusted and orthogonal tests, the difference is rather small. Note that the adjusted and orthogonal tests appear to have levels slightly above the nominal level, suggesting that some adjustment may be needed. (In the simulations a value equal to the observed was included in the rejection region, and the number of permutations per test was small, both of which might account for some of the excess level, but more research is warranted.) Table 4 shows the efficiency of the conventional test relative to the others, in terms of sample size. This is an estimation-based rather than a test-based comparison. Clearly the conventional test fares poorly relative to the other tests, and the deficit grows with the size of the effect parameter.

6. Fisher’s Exact Test Example

A similar problem affects the exact test for 2 × 2 tables. When the margins of a 2 × 2 table are indeed fixed by the design of the experiment, the permutation distribution may well make sense. To the contrary, the exact test has been advocated as a general testing procedure that is valid even when the margins are not fixed. The counterfactual approach can be portrayed in these cases by defining indicators as follows:

b_r(u_r, β_r, β_c, β_rc) = ind(u_r < β_r),
b_c(u_c, β_r, β_c, β_rc) = ind(u_c < β_c),
b_rc(u_rc, β_r, β_c, β_rc) = ind(u_rc < β_rc),

where the u's are independent uniform chance variables, and “ind” means “indicator variable of”. The row indicator is the larger of b_r and b_rc, and the column indicator is the larger of b_c and b_rc. The null hypothesis is independence of row and column, which is the same as β_rc = 0. The nuisance parameter is (β_r, β_c).
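This generating mechanism can be sketched in Python as follows (the function name is mine; row and column index 1 means the corresponding indicator is on):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_table(n, beta_r, beta_c, beta_rc):
    """Simulate a 2x2 table of counts from the indicator model:
    row = max(b_r, b_rc), col = max(b_c, b_rc)."""
    u_r, u_c, u_rc = rng.random((3, n))
    b_r, b_c, b_rc = u_r < beta_r, u_c < beta_c, u_rc < beta_rc
    row = np.maximum(b_r, b_rc)
    col = np.maximum(b_c, b_rc)
    table = np.zeros((2, 2), dtype=int)
    for r, c in zip(row, col):
        table[int(r), int(c)] += 1
    return table

t_dep = simulate_table(20, 0.6, 0.6, 0.3)   # dependent rows and columns
t_sat = simulate_table(10, 0.5, 0.5, 1.0)   # beta_rc = 1 forces every pair on
```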

Here is an illustrative example. Table 5 shows the result of a single simulation with β_r = 0.6, β_c = 0.6, and β_rc = 0.3. The one-sided 𝑃-value from Fisher’s exact test is .517 (according to Stata). If we eliminate b_rc (counterfactually imposing the null hypothesis), then the results are as in Table 6. If we take it that there were 10 observations in the lower-right cell (as in Table 5), but with the margins of Table 6, the one-sided 𝑃-value is .0124 (again from Stata). This example makes it clear that the failure of the null hypothesis affects the conditioning statistic, which in this case consists of the marginal frequencies of the table. The conditioning in Fisher’s exact test is not on the frequencies that would have been seen under the null hypothesis with β_rc = 0 but β_r and β_c unchanged, but rather under a different version of the null hypothesis in which the actual nonnull value of β_rc implicitly gives different values to β_r and β_c. From the counterfactual standpoint, Fisher’s exact test uses the wrong margins.

           Col = 0   Col = 1   Total
 Row = 0      1         4        5
 Row = 1      5        10       15
 Total        6        14       20

Table 5: Simulated data based on b_r, b_c, and b_rc, exhibiting dependence of row and column, but with 𝑃 = .517.



           Col = 0   Col = 1   Total
 Row = 0      2         5        7
 Row = 1      7         6       13
 Total        9        11       20

Table 6: Data from the simulation in Table 5, but omitting the b_rc factor, thereby correctly estimating the null margins. Ten observations in the lower-right cell would result in a 𝑃-value of .0124.
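Both quoted 𝑃-values can be reproduced from the hypergeometric distribution with standard-library Python alone. The helper below is mine; it takes the tail in the direction of the observed association, which matches the one-sided values reported above:

```python
from math import comb

def fisher_one_sided(table):
    """One-sided Fisher exact P-value for a 2x2 table [[a, b], [c, d]],
    using the hypergeometric tail in the direction of the association."""
    (a, b), (c, d) = table
    r1, c1, n = a + b, a + c, a + b + c + d
    lo, hi = max(0, r1 + c1 - n), min(r1, c1)
    # Hypergeometric pmf of the upper-left count, with all margins fixed.
    pmf = {k: comb(c1, k) * comb(n - c1, r1 - k) / comb(n, r1)
           for k in range(lo, hi + 1)}
    if a <= r1 * c1 / n:            # observed count below expectation: lower tail
        return sum(p for k, p in pmf.items() if k <= a)
    return sum(p for k, p in pmf.items() if k >= a)

p_table5 = fisher_one_sided([[1, 4], [5, 10]])   # Table 5 as observed
p_table6 = fisher_one_sided([[6, 1], [3, 10]])   # Table 6 margins, 10 in lower right
print(round(p_table5, 3), round(p_table6, 4))    # 0.517 0.0124
```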



7. Discussion

It has become increasingly common in the statistical and biomedical literature to see assertions that amount to a general “permutation principle”. The issue is whether two (or more) variables are related, and a test is performed based on an estimated null distribution, which is produced by permuting one (or more) of the variables while leaving the remainder fixed. In randomized trials, this is called the “randomization test”, and more recently “re-randomization” has also been used for it, even in situations where no original randomization has been performed. The impression is given that either no or very few assumptions are necessary for the correctness of this procedure.

For example, one now sees the justification for the Wilcoxon one-sample test as the application of this general permutation principle to the signed ranks of the observations, as if no further assumptions were required. The Wilcoxon test is, to the contrary, the consequence of a careful argument (Pratt and Gibbons [3]) that requires the symmetry of the underlying distribution, the disregard of which has been noted for some time (Kruskal [4]). It is known that the Wilcoxon test can be invalid in the presence of asymmetry; that is, the test detects the asymmetry rather than a departure from the null hypothesis, when the null hypothesis regarding the mean is true. Thus it is a specific argument based on symmetry, and not a general permutation principle, that justifies the Wilcoxon test. Similar comments apply to the Mann-Whitney test. There have been some cautionary articles about the general validity of the permutation principle (Romano [5], Hayes [6], Zimmerman [7], Lu et al. [8], Zimmerman [9], Modarres et al. [10]), but the dominant statistical thinking has been to ignore the cautions.

Based on the counterfactual argument and simulation of a regression case presented here, it seems warranted to say that permutation tests need to be revisited. Despite the fact that permutation tests are not widely used in practice, there is a very large literature on them, and several books that explain in detail how they can be used in a wide variety of situations. This literature is attractive because it seems to offer valid statistical procedures, even in complex cases, and in cases where there are technical barriers to obtaining theoretical results. Indeed, when a permutation principle does apply to a specific situation, the argument in favor of using it seems considerable, due to the reduction of untestable assumptions. But when the permutation principle fails, then there is a risk of raising false confidence in an unreliable procedure, with obvious negative consequences. The conclusion is that a substantial amount of new research is required to distinguish between valid and invalid permutation tests, and potentially also to devise modifications of the generally recommended tests that would be appropriate in practice. In any case, it should be recognized that the general assertion of a permutation principle that automatically produces valid tests appears itself to be invalid.

Appendix

The purpose of this appendix is to derive the standard deviation of the permutation estimate of a regression parameter. Let 𝑦ᵢ (𝑖 = 1, 2, …, 𝑛 = 2𝑚) be any collection of numbers, and let the vector 𝑥 be chosen at random from among the ordered lists of 𝑛 values +1 and −1, with half being +1 and half being −1. It is obvious that

E[∑ᵢ 𝑥ᵢ𝑦ᵢ] = 0, (A.1)

where the expectation is taken with respect to the distribution of the 𝑥ᵢ's. By algebra,

(∑ᵢ 𝑥ᵢ𝑦ᵢ)² = ∑ᵢ 𝑦ᵢ² + ∑_{𝑖≠𝑗} 𝑥ᵢ𝑥ⱼ𝑦ᵢ𝑦ⱼ. (A.2)

An elementary combinatorial argument gives

pr(𝑥ᵢ𝑥ⱼ = 1) = pr(𝑥ᵢ = 𝑥ⱼ) = 2 C(2𝑚−2, 𝑚−2) / C(2𝑚, 𝑚) = (𝑚−1)/(2𝑚−1), (A.3)

where C(·, ·) denotes a binomial coefficient, from which it follows immediately that

pr(𝑥ᵢ𝑥ⱼ = −1) = 𝑚/(2𝑚−1). (A.4)

Thus

E[∑_{𝑖≠𝑗} 𝑥ᵢ𝑥ⱼ𝑦ᵢ𝑦ⱼ] = ((𝑚−1)/(2𝑚−1) − 𝑚/(2𝑚−1)) ∑_{𝑖≠𝑗} 𝑦ᵢ𝑦ⱼ = −(1/(2𝑚−1)) ∑_{𝑖≠𝑗} 𝑦ᵢ𝑦ⱼ, (A.5)

which after a few additional manipulations gives

E[(∑ᵢ 𝑥ᵢ𝑦ᵢ/𝑛)²] = (1/𝑛) ∑ᵢ(𝑦ᵢ − 𝑦̄)²/(𝑛−1). (A.6)

The import of this result for permutation inference for a regression parameter is the following. If the model 𝑦 𝑖 = 𝜃 𝑥 𝑖 + 𝑒 𝑖 holds, then the last equation above shows that the variance of the permutation distribution of the regression estimator depends on 𝜃 . Consequently, for the observed values of 𝑦 𝑖 the permutation distribution of the regression estimator has the correct variance if and only if 𝜃 = 0 , that is, if and only if the null hypothesis is in fact true.
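The identity (A.6) can be verified exactly for a small 𝑛 by enumerating all balanced ±1 vectors. This is a quick numerical check of my own; the example values of 𝑦 are arbitrary:

```python
import numpy as np
from itertools import combinations

def exact_second_moment(y):
    """Exact E[(sum(x_i*y_i)/n)^2] over all balanced +/-1 vectors x."""
    y = np.asarray(y, float)
    n = len(y)
    total, count = 0.0, 0
    for pos in combinations(range(n), n // 2):   # positions of the +1s
        x = -np.ones(n)
        x[list(pos)] = 1
        total += (x @ y / n) ** 2
        count += 1
    return total / count

y = np.array([0.3, -1.2, 0.5, 2.0, -0.7, 1.1])   # n = 6, arbitrary numbers
lhs = exact_second_moment(y)
rhs = y.var(ddof=1) / len(y)     # (1/n) * sum((y - ybar)^2)/(n - 1), as in (A.6)
```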

Acknowledgment

This research was supported by Grant AT001906 from the National Institutes of Health.