

Selection Sunday looms. With just a few games left, teams and fan bases are shifting attention toward May Madness. And in our human search for clarity amidst the general fog of daily life, we want to know one thing: is my team in?

This being a statistically oriented website, I won’t be able to give you a certain answer in most cases, but I can try to put some bounds around the range of possibilities. Because, let’s be honest, in most cases, the answer is “maybe, it depends.”

In the simplest example, a team that isn’t even sniffing the bubble can get hot, win 2 or 3 games in a conference tournament and there you go. For true bubble teams that end up losing their conference tournament, the answer can be more nuanced.

Maybe you are hoping a previous opponent keeps winning to pad your strength-of-schedule. Maybe you really need Towson to win their conference. Maybe it is just hoping a bubble rival loses a few.

Either way, this far out, there are a ton of factors that influence each team’s chances of snagging one of the coveted 17 slots. That is where a model would help. But sadly, a tournament model, I do not have…until now.

Turns out, it was not a huge lift because I have already built several of the components. So let’s dig into what pieces make up a tournament selection model.

Frankenstein’s Model

First, you need a way to play out the remainder of the season so that you can forecast what sort of resume each team will bring to the table on Selection Sunday.

Then you need a way to compare resumes, so that you can say which teams would get picked for an at-large slot.

Finally, you need a way to capture the metadata from your simulations. Without this, your model won’t spit out anything useful.

Test tube lacrosse

Obviously, we know each team’s schedule and we know what games they’ve already played. So it is fairly straightforward to play out the remaining games and see what sort of record you would expect each team to end up with.

I have started off using our Elo ratings to project a winner in each future game. This is the same method that I have always used to forecast the odds for each team in a game. The difference here is that instead of just assigning odds, I also generate a random number to decide each outcome. Since we are simulating the season, we can’t just say that an 80% favorite will win every time. We want them to win in 80% of the sims.
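That step can be sketched in a few lines. This is my own minimal version, not the model's actual code, and it assumes the standard 400-point Elo scale; `win_probability` and `simulate_game` are illustrative names:

```python
import random

def win_probability(elo_a, elo_b):
    """Expected score for team A under the standard 400-point Elo scale."""
    return 1.0 / (1.0 + 10 ** ((elo_b - elo_a) / 400.0))

def simulate_game(elo_a, elo_b, rng=random):
    """Return True if team A wins this simulated game.

    Instead of always picking the favorite, we compare the win
    probability to a uniform random draw, so an 80% favorite wins
    in roughly 80% of simulations.
    """
    return rng.random() < win_probability(elo_a, elo_b)
```

Run `simulate_game` once per remaining game, per simulation, and you get one plausible version of the rest of the season.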

In each simulation, we need to run through the rest of the regular season and then figure out seedings for the conference tournaments. This is the tedious part because it requires tournament designs and tie-breaking procedures to be built into the model. Fortunately though, all the conferences boil down to one of two main tie-breaking procedures and one of two tournament formats. Except the ACC, with their silly 5-team setup. (C’mon NC State, I have faith in you!)

Having done this, we can play out the conference tournaments and get each team’s final resume. Again, since we already have scripts to process game results and adjust Elo ratings, this piece was fairly straightforward.
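A minimal single-elimination play-out might look like the following sketch; byes, the 5-team ACC format, and the conference tie-breakers mentioned above are deliberately left out, and the function names are mine:

```python
import random

def _beats(elo_a, elo_b, rng):
    # Expected score for team A on the standard 400-point Elo scale
    p = 1.0 / (1.0 + 10 ** ((elo_b - elo_a) / 400.0))
    return rng.random() < p

def play_bracket(seeds, elo, rng=random):
    """Play out a single-elimination bracket and return the champion.

    `seeds` lists team names in bracket order and `elo` maps each
    team to its rating. Each round pairs adjacent teams and keeps
    the simulated winner until one team remains.
    """
    field = list(seeds)
    while len(field) > 1:
        field = [a if _beats(elo[a], elo[b], rng) else b
                 for a, b in zip(field[::2], field[1::2])]
    return field[0]
```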

Comparing Syracuse Orange to Cornell Big Red Apples

Once we have a list of game results for each team, the next task is to “select” 7 teams that did not win their conference tournament (I’m generally assuming that the ACC champ works like an AQ). This is easily the area with the most subjectivity involved.

For starters, the selection committee uses RPI heavily in their selection process. RPI is based 75% on the strength of your schedule: 50% comes from your opponents’ winning percentage and 25% from their opponents’ records. Only the remaining quarter of the rating is based on whether you actually won your own games.
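In code, that standard NCAA weighting is a one-liner (computing the three winning percentages from raw results is left out of this sketch):

```python
def rpi(wp, owp, oowp):
    """NCAA-style RPI: 25% own winning percentage (wp), 50% opponents'
    winning percentage (owp), 25% opponents' opponents' winning
    percentage (oowp). The last two terms are the 75% that is pure
    schedule strength."""
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp
```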

There is a criterion that uses quality wins vs bad losses to try to assign points based on your actual results, but even that system has arbitrary cut-offs where you get more points for a top-5 win than a top-6 win.

I won’t go too deep down the “criticize the criteria” rabbit-hole right now though. For the purposes of our simulation, I wanted to mimic the criteria that the selection committee uses, so it’s largely RPI driven.

To that end, I calculate the RPI for each team at the end of every simulation. I then take the 7 teams with the highest projected RPI who did not win their conference title, and those become the at-larges.
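That selection step is simple enough to sketch directly. The function and parameter names here are mine, and making the slot count a parameter keeps the sketch reusable:

```python
def select_at_larges(rpi_by_team, aq_winners, at_large_slots=7):
    """Pick the at-large field for one simulated season.

    `rpi_by_team` maps team -> end-of-simulation RPI, and `aq_winners`
    is the set of conference tournament champions (who are already in).
    The at-larges are just the highest-RPI teams left over -- a
    deliberate simplification of the committee's process.
    """
    eligible = sorted((t for t in rpi_by_team if t not in aq_winners),
                      key=rpi_by_team.get, reverse=True)
    return eligible[:at_large_slots]
```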

The interesting thing about having the model is that I can see what would happen under different selection criteria, which is fun (and probably material for a post next week).

Producing something of value

The final thing to be built is a set of collection devices that allow me to capture the individual simulations and produce something useful.

The basic example is to record, for each team, the number of times they ended up selected vs the number of total simulations. Fairly easy, and you have the most important data point: odds of making the tournament.

But to stop there would be to waste 95% of the value of doing the simulations in the first place. So to start, I have collected several other interesting nuggets, which include:

For their remaining games, what is the percentage of time that a team is selected when they win that game vs lose that game?

For all the possible patterns of W/L for the rest of the year, what percentage of the time do they get selected?

What happens if they get to their conference finals and lose?
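The bookkeeping behind those splits is just conditional counting. Here is a sketch, assuming a hypothetical per-simulation record format of my own invention (one `(selected, game_outcomes)` tuple per simulation, per team):

```python
from collections import defaultdict

def tally(sim_results):
    """Aggregate per-simulation records into conditional selection odds.

    Each entry of `sim_results` is (selected, game_outcomes), where
    `selected` is 1 or 0 and `game_outcomes` maps a remaining game to
    'W' or 'L'. Returns the overall selection odds plus the odds
    conditioned on winning or losing each individual game.
    """
    total = 0
    picked = 0
    # For each game/result pair, track [times selected, times seen]
    by_game = defaultdict(lambda: {"W": [0, 0], "L": [0, 0]})
    for selected, games in sim_results:
        total += 1
        picked += selected
        for game, result in games.items():
            cell = by_game[game][result]
            cell[0] += selected
            cell[1] += 1
    odds = picked / total
    cond = {g: {r: (c[0] / c[1] if c[1] else None) for r, c in d.items()}
            for g, d in by_game.items()}
    return odds, cond
```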

To illustrate the point, we can look at Villanova’s simulation results, since they are a team that could go in any number of directions.

As it stands, the model gives the Wildcats a 36.2% chance of being a part of the NCAA tournament. But as I mentioned, if we just stopped there, you’d say: “OK, so what?”

We can also break that down into why they get selected. Winning the Big East conference championship is the Cats’ best bet. They get in that way in 23.4% of the simulations, leaving only 12.8% of the simulations where they lose the conference but get an at-large bid.

In terms of their remaining regular season games, if they are going to drop one, they had better hope that it’s not against Georgetown. With a win in that game, their chances of getting in jump to 52.5%, versus 25.2% with a loss. Of their remaining regular season games, that 27.3% gap is the largest difference between a win and a loss.

We can also look at what their situation might be in different scenarios regarding the Big East Tournament. Those three scenarios are:

Win the title: Obviously, they are in 100% of the time

Lose in the title game: they would still get an at-large 21.6% of the time

Lose in the semis: they would still get an at-large 18.9% of the time

That tells me a few things. First of all, Villanova fans shouldn’t put a ton of stock in making the conference tournament championship game. For Villanova, it’s pretty much win the Big East or else.

The last piece of the Villanova puzzle, though, is who they lose to in the tournament. Here is the run-down of their odds of getting an at-large depending on who beats them:

Providence in the Finals: 0.0%

St. John’s in the Semis: 2.1%

St. John’s in the Finals: 7.7%

Georgetown in the Finals: 8.0%

Marquette in the Finals: 14.3%

Providence in the Semis: 18.8%

Denver in the Semis: 19.0%

Marquette in the Semis: 21.4%

Georgetown in the Semis: 27.4%

Denver in the Finals: 30.7%

Amazingly, they make it in more often when they lose in the semi-finals than when they lose in the finals. That is a great example of how the RPI-based system puts a lot of emphasis on your opponents. Playing Providence is clearly bad for their RPI, but if you lose to them in the Semis, it means you don’t have to play another low RPI team in the finals (unless it’s Denver, and then it’s a good thing).

The more I look at these numbers, the more I question whether the RPI system is the one that we want to be the source of a team’s incentives. (Not suggesting that Villanova would lose on purpose of course, just that it’s a bummer that winning a game in their tournament could actually hurt their resume.)

It’s Aliiiive!!!

So there you have it. As far as I know, the first comprehensive college lacrosse season simulator in existence. (Please correct me if you’ve built one already.)

It was a fun exercise, but it also sheds some light on the downsides of an RPI-based system. So much of the RPI calculation is based on schedule strength that it makes it difficult for non-power-conference teams to ever qualify for an at-large.

Here is an interesting nugget to leave you with: if we stick with the RPI based model (vs a Strength-of-Record based model), there are 7 teams for whom the final regular season games are 100% meaningless with respect to their odds of getting in. Under a different regime, those teams would have a lot to play for in the coming weeks. Is that really what we want?