It is sometimes argued that small studies provide better evidence for reported effects because they are less likely to report findings with small, trivial effect sizes (Friston, 2012). In fact, larger studies are better at protecting against inferences based on trivial effect sizes, provided researchers make use of effect sizes and confidence intervals. Poor statistical power also comes at the cost of an inflated proportion of false-positive findings, less power to “confirm” true effects, and a bias toward inflated reported effect sizes. Small studies (n = 16) lack the precision to reliably distinguish small to medium-large effect sizes (r < .50) from random noise (α = .05), whereas larger studies (n = 100) can do so with a high level of confidence (r = .50, p = .00000012). The present paper presents the arguments researchers need to refute the claim that small, low-powered studies carry a higher degree of scientific evidence than large, high-powered studies.
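The n = 16 versus n = 100 contrast can be illustrated with a short Monte Carlo sketch. The code below is an illustrative simulation, not taken from the paper: it assumes bivariate-normal data with a true correlation of r = .50, tests each simulated sample with the usual t test for a correlation coefficient at α = .05 (two-tailed critical t values for df = 14 and df = 98 are hard-coded), and estimates power as the proportion of significant replications at each sample size.

```python
import math
import random

def sample_r(n, rho, rng):
    """Draw n bivariate-normal pairs with true correlation rho
    and return the sample Pearson correlation."""
    xs, ys = [], []
    for _ in range(n):
        x = rng.gauss(0, 1)
        y = rho * x + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
        xs.append(x)
        ys.append(y)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / math.sqrt(sxx * syy)

def power(n, rho, t_crit, sims=2000, seed=1):
    """Estimate power: proportion of simulated samples whose
    correlation t statistic exceeds the critical value."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        r = sample_r(n, rho, rng)
        t = r * math.sqrt(n - 2) / math.sqrt(1 - r * r)
        if abs(t) > t_crit:
            hits += 1
    return hits / sims

# Two-tailed critical t at alpha = .05: df = 14 -> 2.145, df = 98 -> 1.984
p16 = power(16, 0.50, 2.145)
p100 = power(100, 0.50, 1.984)
print(f"estimated power, n = 16:  {p16:.2f}")
print(f"estimated power, n = 100: {p100:.2f}")
```

Under these assumptions the n = 16 design detects a true r = .50 only about half the time, while the n = 100 design detects it almost always, which is the precision gap the abstract describes.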