Earlier this year, I finally got around to putting out live win probabilities for in-progress games. Last year, I had managed to build the after-the-fact visualizations for games already completed, but the leap this year was being able to do it for games as they are happening. It’s not a perfect system, largely because of issues getting/parsing data from the live stat feeds, but it’s mostly working.

One thing that has occurred to me (and several other keen observers) as I’ve watched some of the live game win probability visualizations is that the estimates seem very sclerotic. A team scores a goal and the win probability doesn’t budge. A team will be up 12 goals at halftime and the system gives them a 75% chance to win. There are good reasons for a model to be conservative, but I am thinking it might be time for an experiment to make it more sensitive.

A basic review of the model’s mechanics

To review, the model is fairly conservative because of a single central design decision: a linearly-decaying team strength factor. When a game starts, rather than giving each team a 50/50 shot to win, the model uses our Elo ratings to estimate the chance that each team will win. For example…

This means that a heavy favorite like Brown will start the game with an 80%+ chance in the win probability chart because the pre-game strength of the two teams accounts for 100% of the win probability calculation.
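That pre-game number can be sketched with the standard Elo expected-score formula. This is an illustration only, assuming the conventional 400-point scale; the site's actual ratings and scale may differ, and the ratings below are made up:

```python
def elo_pregame_wp(rating_a, rating_b):
    """Standard Elo expected score for team A against team B.
    Assumes the conventional 400-point scale, which may not match
    the site's actual rating system."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# A hypothetical team rated ~250 points higher starts with roughly
# an 80% chance to win, before a whistle is blown:
print(elo_pregame_wp(1700, 1450))  # ≈ 0.81
```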

The other factor is time/score/plays. By looking at the time left in the game and the score, we can estimate, based on historical games, how likely each team is to win. There is also an adjustment for recent plays. A team down 1 goal is more likely to win the game if they just picked up a ground ball than if they just turned the ball over. But by and large, you can think of this factor as time and score.

At the very end of a game, the time/score factor makes up 100% of the calculation. When there is one minute left, we don’t really care who was favored heading into the game; we just care about time and score. But at the beginning of the game, this factor doesn’t mean much to us in terms of calculating win probability. If a heavy underdog scores the first goal one minute into the game, does that really cause us to update our expectations of the game that much? Not really, you’d still expect the heavy favorite to win.

The case for conservatism

These core mechanics aren’t going to change. I still think that decreasing the weight of the pre-game team strengths and gradually raising the weight of the time/score factor makes sense. The question is whether the weighting approach should change.

And that is where we have some room to experiment. The conservative method I’ve been using is to decay the team strength factor linearly. It is currently weighted 100% at the start, 75% after Q1, 50% after Q2…

And this means that if two teams were evenly matched heading in (i.e. 50% win probability), but one team was up 20 goals at halftime (clearly not evenly matched), they would still have only a 75% chance to win. The 50% pre-game win probability, weighted at 50%, produces 25 percentage points of WP. The time/score factor would say that they are 99% likely to win, but at halftime, it's also weighted 50%, producing roughly 50 percentage points of WP. Together, the 25 and 50 percentage points work out to the 75% WP at halftime.
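The linear blend above can be sketched in a few lines. This is my reconstruction of the mechanic as described, not the site's actual code, and the weight schedule is the illustrative one quoted in the post:

```python
def blended_wp_linear(pregame_wp, time_score_wp, frac_elapsed):
    """Blend pre-game team strength with the time/score factor.
    The strength weight decays linearly from 1.0 at the opening
    whistle to 0.0 at the final whistle."""
    strength_weight = 1.0 - frac_elapsed
    return strength_weight * pregame_wp + (1.0 - strength_weight) * time_score_wp

# Evenly matched teams (50% pre-game), one up 20 goals at halftime
# (time/score factor says 99%):
print(blended_wp_linear(0.50, 0.99, 0.5))  # ≈ 0.745, i.e. the ~75% above
```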

I think we can agree that even if two teams are evenly matched, if one is up 20 goals at halftime, they are going to win the game.

Liberalizing, to a degree

To make the model more responsive to time/score, I'm going to try swapping out the linear decay model for a log-based one.

Under the new method, at halftime, the score/time factor would go from a 50% weighting to about 85%. In our example above, the team up 20 goals at halftime would go from a 75% WP to roughly a 92% WP (50% x .15 + 99% x .85 ≈ 91.7 percentage points of WP). By the end of Q3, the time/score factor would make up about 94% of the calculation.
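One log-shaped schedule that roughly matches the weights quoted above looks like this. The exact curve and the tuning constant `a` are my guesses, not the site's actual formula; with `a = 100`, the time/score weight lands near 85% at halftime and 94% by the end of Q3:

```python
import math

def blended_wp_log(pregame_wp, time_score_wp, frac_elapsed, a=100.0):
    """Blend with a log-shaped shift toward the time/score factor.
    `a` controls how quickly the pre-game strength weight decays;
    a=100 is a guess that approximates the weights quoted in the post."""
    ts_weight = math.log1p(a * frac_elapsed) / math.log1p(a)
    return (1.0 - ts_weight) * pregame_wp + ts_weight * time_score_wp

# Same scenario: evenly matched pre-game, up 20 goals (time/score = 99%):
print(blended_wp_log(0.50, 0.99, 0.50))  # ≈ 0.92 at halftime
print(blended_wp_log(0.50, 0.99, 0.75))  # ≈ 0.96 by end of Q3
```

Compared with the linear version, the log curve front-loads the shift toward time/score, which is the whole point: big leads get recognized sooner.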

The risk is small

The risk here is that I undersell a favorite’s chance to come back in a game. We will have a more sensitive win probability, but it’s possible that we start to give underdogs too much credit for early leads. I’m ok with that though.

The flip side is that there will be fewer games where the model boringly, steadily increases from 75% at halftime to 100% at the end of the game, with no regard for what happens on the field. I'm not saying that blowouts should have some farcical win probability that makes the game look more competitive than it is; rather, a blowout should not show a 75% win probability at halftime. A blowout should be more like a 90%+ chance at half.

We shall see if it works out in practice, but in the competitive field of statistical lacrosse journalism, if you are not constantly innovating, no one will eat your lunch. I kid, I kid.