By Harrison Chase

Italian soccer is infamously riddled with corruption, with multiple scandals over the past decade. Although various forms of match fixing have occurred, the kind described in Joe McGinniss’s book The Miracle of Castel Di Sangro particularly caught my eye. McGinniss recounts how, on the final weekend of the Serie B season, Castel Di Sangro, who had already secured a position above relegation but below promotion, purposely lost a game to Bari, who needed a win to ensure promotion. This incident was not an isolated case: multiple clubs were found guilty of similar fixing in 2011, and others are still under investigation. Although many other forms of match fixing and corruption have been admitted to, this particular type piqued my interest because it is remarkably similar to a famous study on sumo wrestling in Freakonomics. That study investigated whether, on the final day of a tournament, sumo wrestlers who had already secured their ranking would let their opponent win, provided that the opponent was 7-7 and needed the final win to ensure promotion.

In order to conduct a similar analysis to that in Freakonomics, I first needed to identify games in which Serie B clubs might have an incentive to throw the match. To do this, I used the historical data found here. I then wrote a program that went through the matches and determined, based on the rules governing Serie B, whether, on the day of a match, a team had assured itself of a particular position. For example, in the most recent season, the rules regarding promotion and relegation were:

- The top two teams were automatically promoted to Serie A (I further distinguished between first and second place, since teams would logically value winning the league).

- The third-place team was promoted automatically if it finished 10 points above the fourth-place team.

- If the third-place team finished less than 10 points above the fourth-place team, there was a playoff tournament for the last promotion spot among teams that finished within 14 points of the third-place team (up to six teams).

- The bottom three teams were automatically relegated.

- The fourth-to-last team was automatically relegated if it finished 5 or more points behind the fifth-to-last team.

- If the fifth-to-last team finished less than five points above the fourth-to-last team, the two played a pair of deciding matches to determine which was relegated.

These rules varied from season to season, but the point is that it is always mathematically possible to determine when a team has assured itself of a spot, whether that means being promoted, being relegated, or just finishing in the middle. I did this analysis for the past 11 seasons (I would have gone back further, but the data was getting sparse), so the period studied ran from the 2004-05 season to the 2014-15 season. Once I had identified the 141 games in which one team had secured its position and the other hadn’t, I could look at how the team that hadn’t secured its position fared. Over that period, those teams amassed a record of 86-20-35, but that wasn’t what I was interested in. Looking at pure W-L-D records wouldn’t be terribly insightful: it would fail to account for the relative strength of the teams playing, and it wouldn’t account for the possibility that teams still fighting for their spots might simply try harder than teams with nothing to play for. To address both concerns, I instead looked at the betting lines included in the data.
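The "assured position" check above can be sketched in code. The version below is a deliberate simplification of the program described: it checks only whether a team is mathematically safe from relegation by comparing its worst case against every rival's best case, and it ignores the fact that rivals play each other (so it can only say "safe" when a team truly is safe; it may miss some safe cases the real fixture-aware program would catch). The function name and data shapes are my own illustration, not the author's code.

```python
def safe_from_relegation(team, points, games_left, relegation_spots=3):
    """Sufficient (conservative) check that `team` cannot finish in the
    relegation zone, ignoring head-to-head fixtures between rivals.

    points:     dict mapping team name -> current points
    games_left: dict mapping team name -> remaining matches
    """
    # Worst case for `team`: it loses every remaining game.
    floor = points[team]
    # Best case for each rival: it wins every remaining game (3 pts each).
    ceilings = [points[t] + 3 * games_left[t] for t in points if t != team]
    # If at least `relegation_spots` rivals cannot reach our floor even in
    # their best case, the team is mathematically above the drop zone.
    cannot_catch = sum(1 for c in ceilings if c < floor)
    return cannot_catch >= relegation_spots
```

Analogous checks for promotion and playoff spots follow the same floor-versus-ceiling pattern, with the season-specific point gaps (10, 14, 5) layered on top.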

I looked at an average of the betting lines from Bet365, Blue Square, William Hill, Interwetten, Stan James and Gamebookers. I used the average of these lines not only to smooth out any individual bookmaker’s biases but also because not every oddsmaker had a line for every game. In fact, there were some games for which none of these oddsmakers had a line (the more distant seasons had less data, which is why I only went 11 years back), but 132 of the 141 games did. For those games, I calculated the expected points the lines implied for each team. I then compared the expected points of the team still fighting for position against its actual points, and found that over those 132 games such teams earned 38.6 more points than expected, or 0.29 more points per game. But was that a significant difference?
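Turning betting lines into expected points can be sketched as follows. This assumes decimal odds and removes the bookmaker's margin by simple normalization; the post doesn't say exactly how (or whether) the margin was handled, so treat that as my assumption.

```python
def expected_points(odds_win, odds_draw, odds_lose):
    """Expected league points for a team, from decimal odds on its match.

    Raw implied probabilities (1 / odds) sum to slightly more than 1
    because of the bookmaker's margin, so we normalize them first.
    """
    raw = [1 / odds_win, 1 / odds_draw, 1 / odds_lose]
    total = sum(raw)
    p_win, p_draw, _ = [p / total for p in raw]
    # A win is worth 3 points, a draw 1, a loss 0.
    return 3 * p_win + 1 * p_draw
```

For a game averaged across several bookmakers, you would average the odds (or the normalized probabilities) first and then apply the same formula.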

If the betting lines were spot on, we would expect to see a negligible difference (in fact, the average difference between expected and actual points per game for teams in Serie B over that period was -0.0047, basically zero). I considered a t-test, with the null hypothesis that the difference was zero and the alternative that it was greater than zero (since we want to see whether teams with something to play for do better), but the distribution of the differences wasn’t close to normal. The actual points a team earns in a game are discretely distributed (0, 1, or 3), while the expected points are essentially continuously distributed between zero and three, so their difference has a roughly bimodal distribution that is not symmetric around zero. For example, here is the histogram of the difference between actual and expected points for all teams in not only Serie B but also Serie A, the EPL, the Championship, La Liga, the Segunda Division, the Bundesliga and the 2. Bundesliga (the leagues I would later be looking at) over that period.

Clearly, this is not normally distributed (nor evenly distributed around 0, but that’s a topic for another day), which violates the normality assumption required for a t-test. Instead, I ran a one-sided two-sample permutation test using the above distribution as the reference distribution to test whether the team still fighting for position performed significantly better than expected (one-sided because I am only interested in an effect in that direction). This test revealed that the difference is highly significant, with a p-value of 0.00266.
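The test can be approximated with a Monte Carlo resampling sketch like the one below: draw many samples of the same size as the Serie B games from the all-leagues reference distribution and ask how often their mean is at least as large as the observed mean. I'm resampling with replacement for simplicity; the author's exact permutation scheme (with or without replacement, number of resamples) isn't specified, so those details are assumptions.

```python
import random

def resampling_pvalue(observed_diffs, reference_diffs,
                      n_resamples=10_000, seed=0):
    """One-sided Monte Carlo p-value: the fraction of same-sized random
    samples from the reference distribution whose mean is at least the
    observed mean of `observed_diffs`."""
    rng = random.Random(seed)
    n = len(observed_diffs)
    observed_mean = sum(observed_diffs) / n
    hits = 0
    for _ in range(n_resamples):
        sample = rng.choices(reference_diffs, k=n)  # draw with replacement
        if sum(sample) / n >= observed_mean:
            hits += 1
    return hits / n_resamples
```

Here `observed_diffs` would be the 132 actual-minus-expected point differences for the teams still fighting for position, and `reference_diffs` the pooled differences across all eight leagues; using the empirical reference distribution sidesteps the normality assumption entirely.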

As I see it, there are two possible explanations for this low p-value. One is that there has been significant match fixing at the end of the season in games whose result does not matter to one of the teams. The other is that the betting lines are simply off, most likely because oddsmakers have failed to account for the greater desire to win of the team that still has something to fight for. To distinguish between the two, I conducted the same analysis, over the same period, on seven other leagues: Serie A, the EPL, the Football League Championship, the Bundesliga, the 2. Bundesliga, La Liga and the Segunda Division (the top two leagues in Italy, England, Germany and Spain respectively). The table below displays the eight leagues, along with the number of games over the past eleven years that meet the requirements (and have betting line data), sorted by the p-value produced by the permutation test, from most to least significant.

As you can see, Serie B is the only league with a clearly significant difference; the betting lines for the other leagues are all pretty much on point. Furthermore, the Serie B p-value is so small that it remains significant even after lowering the significance threshold to 0.00625 (0.05 divided by the 8 leagues tested, a Bonferroni correction) to account for multiple comparisons, which is necessary because when looking at eight leagues there is a greater chance that one will show up as corrupt by random chance alone. This strongly suggests that there is significant match fixing going on at the end of the Serie B season. It isn’t breaking news, as multiple arrests have been made on this issue, but it is still cool to confirm it with statistics.
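The multiple-comparisons adjustment is mechanical enough to show directly. This is a plain Bonferroni correction; aside from Serie B's 0.00266, the p-values in the usage example are hypothetical placeholders, not the actual values from the table.

```python
def bonferroni_flags(p_values, alpha=0.05):
    """Return, for each test, whether it stays significant after
    dividing the family-wise alpha evenly across all the tests."""
    threshold = alpha / len(p_values)  # 0.05 / 8 = 0.00625 for 8 leagues
    return [p < threshold for p in p_values]
```

With eight leagues, only a p-value below 0.00625 survives, which Serie B's 0.00266 comfortably does.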

Besides confirming the presence of match fixing, we can use the data in two other ways. First, we can look at which teams are associated with the largest differences between actual and expected points. For this, I combined the data from Serie A and Serie B, because, thanks to promotion and relegation, many teams played in both leagues over these years. The table below lists the teams associated with the largest differences, among teams with more than six games in which one side had nothing to play for.

As you can see, these rankings do a fairly good job of identifying teams that were involved in match fixing, with teams like Bari and Siena being among the supposed ringleaders. Although their punishments cover all types of match fixing, not just the one I tried to measure, the method does appear fairly good at flagging teams proven to have cheated in the past. The only team on this list without a punishment is Roma, although they were also suspects in the large 2006 match fixing scandal.

Besides grouping the corruption by team, we can also group it by year. Doing so gives us this plot of the average difference between actual and expected points, in games where one team had nothing to play for, over the past eleven years:

As you can see, there is a sharp dropoff after the 2011-12 season, which happens to be the season after which the second round of punishments from the 2011-12 Italian soccer scandal was handed down. It appears that the increased scrutiny from that scandal has led to less corruption over the past few years as Italian soccer has been cleaned up (although it is not completely clean). In fact, over the past three years, the corruption in Serie B, as measured by the permutation test described earlier, is not statistically significant. Sample size may explain some of that, with only 44 games meeting the criteria, but three other leagues were more statistically corrupt over that window, including one (the Bundesliga) that came in with a p-value of 0.02.

This study attempted to find corruption in Serie B statistically, and I think it succeeded. It also correctly identified teams that had been implicated in corruption in the past, and it suggests that increased scrutiny has led to less corruption over the past few years. As a final reminder, the type of corruption I was seeking is far from the only type; it is just the easiest to identify. That is why some of the big names caught up in the 2006 Italian soccer scandal (Juventus) do not appear in this post: their (allegedly) preferred type of corruption involved bribing refs.

As for improving this study, the main thing would be more data. With data going back further than the 2004-05 season, we could get an even more concrete idea of which teams were involved in this form of corruption. It would also be interesting to repeat the analysis in other leagues, both at lower levels (Lega Pro or Serie D) and in other countries (Brazil). Ideally, this method could be used not just to find corruption retroactively, but to identify it (to some degree of certainty) as it happens and help the authorities.

The data used for this post can be found here.
