INFO

What Squiggles are • How to play through a season • Prediction • FAQ • The Model • Flagpole

What Squiggles are

Squiggles are visualizations of AFL team performances, charting attack and defence over the course of a season. Teams ⇡ high on the chart kick big scores. Teams on the ⇢ right keep their opposition to low scores. Teams in the ↗ upper-right do both. Teams in the ↙ lower-left do neither.

Squiggles are handy because:

They visualize different game styles. For example, Ross Lyon teams at their peak lurk on the right side of the chart about halfway up, since they keep their opposition to low scores without scoring highly themselves.

They provide interesting season replays, showing how teams rose or fell at different times of the year.

They minimize fixture bias by accounting for the difficulty of opponents (and venue).

They predict how the season will play out based on current likelihoods. Squiggle is roughly as accurate as tipping the favourite every game. (Which is hard to beat!) Over a season, an informed, observant human should be able to beat it, but not by much. It will beat an average human tipper. You can review its accuracy by visiting the TIPS section of any year.

How to play through a season

To watch the evolution of a past season, use the top controls:

Rewind to the start of the season

Previous Round

Next Round

Reload

You can also:

Click a team name in the legend at the top to hide/show it.

Click a team flag to remove every other team. For example, you might like to rewind, click Hawthorn's flag to remove all other teams, click Geelong's name in the legend to add it back in, then repeatedly step through the season to watch their dance of death.

To zoom in on an area, drag a box around it.

Prediction

Weekly Tips

Click TIPS at the top of the page.

If a team beats the tipped scoreline—i.e. wins by more than predicted, loses by less than predicted, or records an upset win—it will generally move in a positive direction on the chart (i.e. more up-and-right than down-and-left), while if its result is worse, it will generally move in a negative direction.

You can view tips for previous rounds via the "History" link on the Tips page.

Season Predictor

This is how the ladder will look if Squiggle has correctly rated every team and nobody gets better or worse. For the home & away season, it uses a probabilistic ladder, not a simple tally of tips. Both teams are awarded a win probability from each game, so that if Squiggle thinks Hawthorn is 68% likely to beat Collingwood, it will award the Hawks 0.68 wins and the Pies 0.32 wins, increasing both teams' tallies of "probable wins" by less than 1. This is because if a team plays 10 games with a 60% likelihood of winning each one, we should expect them to win about 6/10—not, as we would get if we tipped each game and tallied up the tips, 10/10. We know that upsets will happen; we just don't know when. A probabilistic ladder accounts for the likelihood that teams will sometimes unexpectedly win or lose.

This can look like a bug in the predictor, if you see a team tipped to win a match that doesn't seem to be credited. For example, a team might be on "15 (14.7)" wins, which means 14.7 "probable wins" rounded off to 15. (Rounding occurs so that teams can be secondarily ranked by their percentage.) And then that team is tipped to win the following week, but it remains on 15 wins, now "15 (15.3)". What has happened is that the number of probable wins hasn't risen enough to round to a higher number: the team has earned 0.6 more probable wins, but this still rounds off to 15.
The predictor is saying it's still most likely this team will be on 15 wins, after accounting for the likelihood that some of its tips will be wrong.

Finals matches are predicted using simple tips. However, this isn't a very reliable method, and it isn't Squiggle's official premiership tip. For that, please see Flagpole.

Starting the season

Team starting positions are heavily influenced by their late-season performances the previous year, and the off-season is completely ignored. There is no adjustment for recovery from injuries, players gained or lost via the draft or trade table, or anything else. For example, Collingwood started 2015 rated very low due to their injury-plagued end to 2014, while Adelaide and West Coast started in good positions after solid late-2014 performances.

Interactive Season Predictor

Drag teams around the chart and make Squiggle predict the rest of the season based on the new positions! It's the best of both worlds: your footy insight plus Squiggle's ability to sensibly model a season. Reposition teams to your heart's content, open up the Predictor and click RECALCULATE. This also provides a shareable link to the generated squiggle, so you can show off your work to other people.
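The probabilistic-ladder idea above can be sketched in a few lines. This is an illustrative toy, not Squiggle's actual code; the team names and probabilities come from the examples in the text, and the helper names are invented:

```python
# Sketch of a probabilistic ladder: each team's tally rises by its win
# probability, not by a whole win per tipped game.

def update_ladder(ladder, home, away, p_home_win):
    """Credit both teams with 'probable wins' from one game."""
    ladder[home] = ladder.get(home, 0.0) + p_home_win
    ladder[away] = ladder.get(away, 0.0) + (1 - p_home_win)

ladder = {}
update_ladder(ladder, "Hawthorn", "Collingwood", 0.68)
print(ladder)  # {'Hawthorn': 0.68, 'Collingwood': 0.32}

# Ten games at 60% each: the ladder expects about 6 wins, not 10.
print(0.6 * 10)  # 6.0

# The displayed ladder rounds probable wins to a whole number,
# so 14.7 probable wins shows as "15 (14.7)".
print(f"{round(14.7)} ({14.7})")  # 15 (14.7)
```

This is why a correctly tipped win can move a team's probable-wins tally by only a fraction, as in the "15 (14.7)" to "15 (15.3)" example.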

FAQ

What causes a team to move?

Teams move when they do better or worse than Squiggle expected. The most important factor is the final scoreline. When a team scores more than Squiggle expected, they move ⇡ up; when they score less, they move ⇣ down; when they hold their opposition to a lower score, they move ⇢ right; and when they allow their opposition to score more, they move ⇠ left.

Of course, usually two of these things happen at once, so they move on a diagonal:

↗ Scored more than predicted, held opponent to less than predicted

↘ Scored less than predicted, held opponent to less than predicted

↙ Scored less than predicted, opponent scored more than predicted

↖ Scored more than predicted, opponent scored more than predicted

How far a team moves depends on how different the result was from Squiggle's prediction. If the result was close to expectation, a team may barely budge. But an unexpected thrashing will cause a lot of movement.

Do teams get more movement against easy opposition?

No, because Squiggle expects better performances against weaker opponents, and to move to a better position, the team has to beat this expectation. For the same reason, Squiggle isn't affected by fixture bias.

Can a team lose and still move into a better squiggle position?

Yes! Squiggle believes in honourable losses and shameful victories. If a team is expected to win by 10 goals but only prevails by 5, it will slide.

What factors are considered?

A team's rating is modified after each game by looking at:

Scores, obviously.

Scoring Shots: a team is rated more highly if they record more scoring shots.

Venue: teams are expected to perform better at venues with which they are more familiar.

Team Selections: teams are expected to perform better when they select more highly-rated players.

Stage of Season: team ratings are more fluid in the early part of each season.

Past Games: each new game is combined with past results.

Can a team beat the tipped result and still fall back on the chart?

Yes! Two factors can cause unusual chart movement:

Scoring Shots: where one team is much more accurate than the other. For example, in the opening match of 2018, Richmond won 17.19 (121) to Carlton 15.5 (95). Squiggle tipped a 29-point win, so normally the Tigers would slightly regress after winning by only 26 points. However, after factoring in the scoring shot disparity, Richmond's performance was rated more highly, and the Tigers moved positively on the chart.

Team Selections: where a team has significant ins/outs. For example, in the second match of 2018, Essendon defeated Adelaide by 12 points. This was very similar to Squiggle's expectation of a 7-point Essendon win, so normally neither team would move much. However, Adelaide selected a much weaker team than their previous game (the 2017 Grand Final); without this, Squiggle would have tipped Adelaide by 9 points. As a result, Essendon received less positive movement than they would have for the same scoreline against a normal-strength Adelaide. The Crows also saw negative movement, since the result was worse than what would have been expected from them given their previous rating.

How is home ground advantage determined?

As described in the Model section, home ground advantage in Squiggle 2.0 is generated from ground familiarity: how often the teams have played at the same ground and in the same state over the preceding 4 years (including the current season).

What are Squiggle's weaknesses?

Some quirks of Squiggle, which you may decide to compensate for as an intelligent human, include:

Squiggle doesn't consider a team's level of motivation, which seems to be fairly significant. Squiggle assumes all teams are trying equally hard at all times.

Squiggle doesn't consider the impact of weather. It may reward a team for having a good defence when in reality what was making it hard to score was torrential rain. This doesn't seem to happen often enough to throw anything too far out, but does occur from time to time.

Squiggle doesn't place any special value on wins. That is, it doesn't see much difference between a 1-point victory and a 1-point loss. It may thus underestimate a team that does "just enough," or is especially good at holding on in tight contests. Likewise, it may overestimate a team that regularly gets itself into winning positions against good teams but lacks the ability to close the games out.

Squiggle gets excited about very low-scoring games. The maths mean that if a team is expected to keep its opposition to 80 points, and it actually keeps them to 40, this is considered twice as good, while keeping them to 20 points is considered four times as good, and keeping them to 10 points is eight times as good. Those are some big multipliers: to be 8 times better than a predicted 80 points in terms of Attack, a team would need to score 640 points (80 x 8), which is a lot more than the all-time AFL record. This means it's significantly easier for a team to move rapidly rightwards on a squiggle than upwards.
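The arithmetic behind those multipliers can be checked directly. This sketch only reproduces the ratio reasoning in the paragraph above, using the example's numbers; it is not Squiggle's actual defence formula:

```python
# Defensive performance treated as a ratio: expected points conceded
# divided by actual points conceded. Values are from the text's example.
expected_conceded = 80

for actual in (40, 20, 10):
    multiplier = expected_conceded / actual
    print(actual, multiplier)  # 40 -> 2.0, 20 -> 4.0, 10 -> 8.0

# To look 8x better on Attack against an expectation of 80 points,
# a team would need to kick 80 * 8 = 640 points.
print(80 * 8)  # 640
```

Because halving an opponent's score doubles the ratio but an equivalent attacking feat requires doubling your own score, rapid rightwards movement is much easier to achieve than rapid upwards movement.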

The underlying model generates some inflation over the course of a season, which causes Squiggle to rate teams about 5-10% higher by the end of the year compared to the start. This seems useful for predictive purposes, as it allows for more movement leading into finals. However, it means that a team that doesn't actually change strength at all will be shown to slightly improve its chart position over the course of the season.

Why does the model use those values?

All the numbers used by Squiggle are that way because they worked best (i.e. made the most accurate predictions) when every possible combination was tested with a simulator replaying the last few decades.

How are the year's starting values calculated?

2015 starting positions are very similar to their end-2014 positions—the only difference is that 2013 data is no longer considered, so teams are modeled from the start of 2014, each beginning on 50 ATTACK and 50 DEFENCE. This means late-season 2014 results weigh quite heavily. For example, Collingwood had an injury-plagued end to 2014, and so is rated very low. Adelaide and West Coast, by contrast, finished the year with several solid performances, and so begin the year higher than you might expect.

What's with those crazy charts for the 1900s!?

Football scores were a lot lower a century ago, especially in the very early years, when single-digit scorelines abounded. Squiggle is calibrated for modern football, and thinks a game in which one team is held to a single goal (or no goals!) signifies an unbelievably good defensive effort. This causes teams to go shooting off to the right quite often in charts from the 1890s, 1900s and 1910s. So it's not a particularly good visualization of the strength of any particular team in that era. But it is interesting in terms of how different the whole league looks: how low and flat it is compared to today.
Similarly, it can be interesting to look at where the mass of teams tends to sit in different decades; for example, how attacking the late 1980s was, with plenty of teams sitting high & centre/left compared to today.

The Model

The foundation of the Squiggle model is the OFFDEF engine, which rates teams separately in terms of attack and defence. Each team is initially assigned a starting value of 50 for each. Scores are predicted for each match using the formula:

PREDICTED SCORE = 85 * TEAM ATTACK ÷ OPPOSITION DEFENCE

For example, in a match between a team with ATTACK 56 and an opposition with DEFENCE 50, the team is predicted to score 85 * 56 ÷ 50 = 95.2, i.e. about 95 points.

Predicted scores are compared to the actual scores, and ATTACK and DEFENCE adjusted accordingly. For example, if a team scored more highly than predicted, its ATTACK score needs to be increased, since Squiggle underrated it. Likewise, the opposition's DEFENCE score should decrease, since they failed to restrict the team as well as predicted. This is done by calculating what these scores would have to have been to predict the result perfectly, then constructing a weighted average of this along with all other results.

At the start of a season, team starting points are calculated by doing the above for the previous season. For example, to calculate starting points for 2014, each team is assigned 50 ATTACK and 50 DEFENCE, then the 2013 season is played through.

The units are completely arbitrary, and entirely due to the choice of 50 as a starting value for each team's ATTACK and DEFENCE. They have no meaning except when comparing teams to each other.

Several other filters and algorithms are used to manipulate scores produced by the OFFDEF engine, including venue (for home ground advantage), round number, team selections, and scoring shots.

Home Ground Advantage

Teams are compared based on the number of times they've played at the venue and in the same state.

Tip Probability

When determining "probable wins" in the Season Predictor, an algorithm is used that reflects the actual accuracy of Squiggle tips vs real-life results.
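The OFFDEF prediction and update steps can be sketched as follows. The 85-point scaling constant and the "ratings that would have predicted the result perfectly" idea come from the description above; the 9% blend weight is borrowed from v1's ISTATE-91:12 figure for illustration, and the function names are invented:

```python
# Minimal OFFDEF sketch, not Squiggle's actual implementation.

SCALE = 85  # points scored by ATTACK 50 against DEFENCE 50

def predicted_score(attack, opp_defence):
    """PREDICTED SCORE = 85 * TEAM ATTACK / OPPOSITION DEFENCE."""
    return SCALE * attack / opp_defence

def implied_attack(actual_score, opp_defence):
    """The ATTACK rating that would have predicted actual_score exactly."""
    return actual_score * opp_defence / SCALE

def blend(old_rating, implied, weight=0.09):
    """Weighted average; under v1, each new game forms ~9% of the rating."""
    return (1 - weight) * old_rating + weight * implied

# Worked example from the text: ATTACK 56 vs DEFENCE 50.
print(predicted_score(56, 50))  # 95.2

# If the team actually scored 110, it was underrated, so its
# implied ATTACK is higher and the blended rating rises.
imp = implied_attack(110, 50)
print(round(imp, 2))            # 64.71
print(round(blend(56, imp), 2)) # 56.78
```

The same arithmetic runs in reverse for the opposition's DEFENCE rating, which falls when it concedes more than predicted.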
Three factors affect the likelihood of a tip being correct:

Margin: The greater the predicted margin, the more likely the tip is to be correct.

Round Number: Games that occur later in the season are a little more likely to be tipped correctly.

Weeks Until Game: Games that are weeks or months in the future are a little less likely to be tipped correctly.

Model Versions

Squiggle v1 used the algorithm ISTATE-91:12, in which 12 points of Home Ground Advantage is awarded to the home team in interstate games only, and each new game forms 9% of the team's new rating (with previous games forming 91%). Follow this link for Squiggles generated under the v1 algorithm.

Squiggle 2.0 made several changes in 2018:

Greater sensitivity in early rounds, to better capture the sometimes substantial form changes that occur over an off-season.

Factor in goalkicking accuracy, by discounting scores that resulted from unusually high accuracy (i.e. kicking many more goals than behinds), and padding scores that resulted from unusually low accuracy, since these tend to be non-reproducible.

Generate home ground advantage from a ground familiarity algorithm.

Squiggle4 added Ins/Out awareness in mid-2018, so it can adjust predictions based on team selection.

              2019   2018   2017   2016   2015   2014   2013   2012   2011
Squiggle v1   64.7%  68.6%  61.4%  69.1%  70.9%  72.0%  72.5%  77.8%  77.6%
Squiggle 2.0* 65.2%  68.1%  64.7%  74.9%  73.8%  72.0%  73.0%  73.4%  77.0%
Squiggle 4*   65.7%  72.5%  65.7%  73.4%  73.8%  73.4%  74.4%  73.9%  77.6%

* Squiggle 2.0 before 2018 and Squiggle4 before mid-2018 are "retro-dictions"—made after the result. They show how well the model fits historical data, rather than how its predictions performed in real time.

To compare Squiggle's performance to other computer models, see the Squiggle Models Leaderboard.