LET me tell you about the perfect investment offer. Each week you will receive a share recommendation from a fund manager, telling you whether the stock’s price will rise or fall over the next week. After ten weeks, if all the recommendations are proved right, then you should be more than willing to hand over your money for investment. After all, there will be just a one-in-a-thousand chance that the result is down to luck.

Alas, this is a well-known scam. The promoter sends out 100,000 e-mails, picking a stock at random. Half the recipients are told that the stock will rise; half that it will fall. After the first week, the 50,000 who received the successful recommendation will get a second e-mail; those who received the wrong call will be dropped from the list. And so on for ten weeks. At the end of the period, just by the law of averages, there should be about 98 punters convinced of the manager’s genius and ready to entrust their savings.
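Since the arithmetic is what makes the trick work, it may help to see it spelled out. A minimal sketch in Python, using only the figures given above:

```python
recipients = 100_000   # e-mails sent in week one
weeks = 10

# Each week, half the remaining list gets a wrong call and is dropped
survivors = recipients / 2**weeks
print(round(survivors))   # about 98 punters remain after ten weeks

# From any one survivor's point of view, the odds of seeing ten
# correct calls arise by luck alone:
print(1 / 2**weeks)       # roughly one in a thousand (1/1,024)
```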

As a paper published last year in the Journal of Portfolio Management argued, this is a classic example of the misuse of statistics. Conduct enough tests on a bunch of data—run through half a million genetic sequences to find a link with a disease, for example—and there will be many sequences that appear meaningful. But most will be the result of chance.
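The genetics example can be illustrated with a toy simulation. The half-million figure comes from the text; the 5% significance threshold is an assumed conventional one. Test pure noise often enough and thousands of "links" appear:

```python
import random

random.seed(0)

n_tests = 500_000   # e.g. half a million genetic sequences
alpha = 0.05        # assumed conventional significance threshold

# Under the null hypothesis a p-value is uniform on [0, 1], so each
# test of pure noise "succeeds" with probability alpha.
false_positives = sum(random.random() < alpha for _ in range(n_tests))
print(false_positives)   # on the order of 25,000 spurious "discoveries"
```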

This is a problem that has dogged scientists across many disciplines. There is a natural bias in favour of reporting statistically significant results—that a drug cures a disease, for example, or that a chemical causes cancer. Such results are more likely to be published in academic journals and to make the newspaper headlines. But when other scientists try to replicate the results, the link disappears because the initial result was a random outlier. The debunking studies, naturally, tend to be less well reported.

Faced with this problem, scientists have turned to tougher statistical tests. When searching for a subatomic particle called the Higgs boson, they decided that, to prove its existence, the results had to be five standard deviations from normal—a one-in-3.5-million chance of occurring by luck.
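The five-sigma figure can be checked from the tail of the normal distribution. A sketch using only the standard library; the one-sided tail is the convention particle physicists quote:

```python
from math import erfc, sqrt

def one_sided_tail(sigmas):
    """Probability that a standard normal variable exceeds `sigmas`."""
    return 0.5 * erfc(sigmas / sqrt(2))

p = one_sided_tail(5)
print(f"p = {p:.2e}, about 1 in {1 / p:,.0f}")   # about 1 in 3.5 million
```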

Financial research is highly prone to statistical distortion. Academics have the choice of many thousands of stocks, bonds and currencies being traded across dozens of countries, complete with decades’ worth of daily price data. They can backtest thousands of correlations to find a few that appear to offer profitable strategies.

The paper points out that most financial research applies a two-standard-deviation (or “two sigma” in the jargon) test to see if the results are statistically significant. This is not rigorous enough: at two sigma, roughly one test in 20 will look significant by chance alone, so trawling through thousands of backtests is bound to throw up spurious winners.
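The contrast with the physicists' bar is easy to quantify. A quick calculation, in which the 10,000 backtests are a hypothetical figure for illustration:

```python
from math import erfc, sqrt

# Two-sided p-values at each threshold
p_two_sigma = erfc(2 / sqrt(2))    # ~0.046, the usual hurdle in finance
p_five_sigma = erfc(5 / sqrt(2))   # ~5.7e-7, the Higgs hurdle

backtests = 10_000                 # hypothetical number of strategies tried
print(backtests * p_two_sigma)     # hundreds of pure flukes clear two sigma
print(backtests * p_five_sigma)    # essentially none clear five sigma
```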

One way round this problem is to use “out-of-sample” testing. If you have 20 years of data, then split them in half. If a strategy works in the first half of the data, see if it also does so in the second out-of-sample period. If not, it is probably a fluke.
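The procedure can be sketched in a few lines: generate 1,000 "strategies" of pure noise, cherry-pick the best performer on the first half of the data, then see how it fares on the second half. All figures here are invented for illustration:

```python
import random

random.seed(1)

def mean(xs):
    return sum(xs) / len(xs)

# 1,000 strategies of pure noise: 1,040 weeks (20 years) of returns each
strategies = [[random.gauss(0, 0.02) for _ in range(1_040)]
              for _ in range(1_000)]

# Cherry-pick the best in-sample performer (first 520 weeks)...
best = max(strategies, key=lambda s: mean(s[:520]))

# ...then check it on the out-of-sample period (last 520 weeks)
print(f"in-sample mean return:     {mean(best[:520]):+.4%}")
print(f"out-of-sample mean return: {mean(best[520:]):+.4%}")
# The in-sample "edge" is a fluke, so it should evaporate out of sample.
```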

The problem with out-of-sample testing is that researchers know what happened in the past, and may have designed their strategies accordingly: consciously avoiding bank stocks in 2007 and 2008, for example. In addition, slicing up the data means fewer observations, making it more difficult to discover relationships that are truly statistically significant.

Campbell Harvey, one of the report’s authors, says that the only true out-of-sample approach is to ignore the past and see whether the strategy works in future. But few investors or fund managers have the required patience. They want a winning strategy now, not in five years’ time.

The authors’ conclusions are stark. “Most of the empirical research in finance, whether published in academic journals or put into production as an active trading strategy by an investment manager, is likely false. This implies that half the financial products (promising outperformance) that companies are selling to clients are false.”

For the academics, the lesson is simple. Much more rigorous analysis will be needed in future to reduce the number of “false positives” in the data. As for clients of the investment industry, they need to be much more sceptical about the brilliant trading strategies that fund managers try to sell them.

All this will leave many readers wondering how to invest their savings. That’s fine. Buttonwood has an investment strategy that is sure to boost your wealth. Just send your e-mail address and a stock tip will arrive every month...

Economist.com/blogs/buttonwood

* “Evaluating Trading Strategies”, by C. Harvey and Y. Liu, Journal of Portfolio Management (2014)