In the past decade or so, shifts in the way front offices have approached lineups have dramatically increased the number of platoons and utility players in the MLB.

Events such as the NFBC and TGFBI have prompted a dramatic move away from daily rotisserie setups where these changes could most benefit fantasy players. Rather than seeing platoons as an opportunity to stream more small-side platoon players and get a leg up, they’ve been a pox on the playing time of lefty mashers. And if Joc Pederson isn’t going to be in the Dodgers’ lineup all week, that’s going to keep him out of plenty of weekly fantasy lineups as well.

To me, that’s a shame. As someone who doesn’t make enough money to feel OK lighting an NFBC entry on fire by taking Javier Baez seventh overall, I typically play far more leagues with daily lineups than many of the high-profile industry players. So for years, many of the rankings available to me have been unfairly biased against the part-time baseball destroyers of the world. And a lot of the discussion about where to take them in daily setups ends at “take them a round or two earlier.”

As it turns out, the problems with evaluating platoon players are just a small part of the systemic issue with playing time that current evaluation systems have. Whether built on z-scores or category-points-gained systems, almost every existing value calculator is going to be heavily and unreasonably biased toward players projected for over 150 games. And that means anyone projected below that for any reason — platoons, injury, rest or otherwise — is at risk of being undervalued.

And fixing it will require us to find a way to make games played one of the core inputs into our new formula. The results can help you get more value out of your draft, even in weekly leagues.

What do value systems get wrong?

When I broke down the problems with the ESPN Player Rater, I took issue with the pool of players it used to find “league average.” Because it used the entire MLB, minor leaguers and part-timers weighed down its averages, warped the scarcity of steals and saves, and generally made its results completely bogus. By trying to build one formula for all 5×5 leagues, ESPN failed to make one that works for any.

Most z-score systems, such as the one that powers the FanGraphs Auction Calculator, fix this by including only the players who should start based on the settings you give it. This assumes that we’ll be putting the best players into our lineups, which is admittedly a stretch. But it does a pretty good job of ranking players for most cases, especially in weekly leagues.

But where all these systems tend to fail is in terms of differences in playing time. I’ve previously discussed Yordan Alvarez and Leury Garcia both scoring an identical 4.77 on the ESPN Player Rater last year. We can make a similar comparison between Alvarez and Kole Calhoun using the FanGraphs Auction Calculator in 12-team Yahoo standard.

Yahoo Standard 5×5 12-team Auction Values, 2019 (z-scores)

And for those of you who prefer Razzball’s approach, which uses standings points gained instead of z-scores, we can create yet another near-identical comparison. Just like Leury Garcia, Kole Calhoun makes for a perfect foil because of his huge games played total:

Yahoo Standard 5×5 12-team Auction Values, 2019 (Razzball)

Yes, Razzball’s system is linear. It should be, to an extent — when you’re drafting a team, two different paths to the same endpoint should cost the same amount, so assigning a constant dollar amount for an additional home run makes sense. Both of these systems do that. But they break because playing time isn’t directly controlled, allowing it to take over.

In weekly leagues, this is easier to ignore. If seemingly half of the Dodgers lineup will platoon occasionally, the differences in playing time between Max Muncy and Jose Abreu will matter when I’m drafting someone to fill first base. But in daily leagues, I’ll usually get to fill in bench players when Muncy sits. And that doesn’t just mean that z-score systems are getting Muncy’s value wrong: Razzball’s CPG method could also be giving starters the credit for differences in category standings when in-season management may actually be the difference.

And if our competition is relying on broken stats, there’s a leg up to be gained by fixing them.

Why don’t the current fixes work?

This issue with playing time is no great secret. By avoiding it, both of these systems (plus ESPN) make it everything. But the most common fix, to just find value on a per-game basis, still misses the mark.

The Steamer600 is probably the best-known version of this: It takes Steamer’s projection for a player’s season and adjusts the playing time to exactly 600 plate appearances. The benefits of this are obvious: We get to compare how effective a player is per at-bat, but on a scale that we can understand. Just giving out per at-bat results isn’t particularly helpful. Just ask yourself, is .04 home runs per plate appearance good? How do you know? Adjusting it to a much larger scale is the only reasonable answer if you’re going that route.
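To make the rescaling concrete, here’s a minimal sketch of the Steamer600 idea in Python. The player line and stat categories are made up for illustration, not actual Steamer output.

```python
# A minimal sketch of the Steamer600 idea: rescale a player's projected
# counting stats to a fixed 600 plate appearances so per-PA skill is
# readable on a familiar scale. The projection below is hypothetical.

def rescale_to_600(projection: dict, target_pa: int = 600) -> dict:
    """Scale counting stats linearly to target_pa plate appearances."""
    factor = target_pa / projection["PA"]
    scaled = {k: round(v * factor, 1) for k, v in projection.items() if k != "AVG"}
    scaled["AVG"] = projection["AVG"]  # rate stats don't rescale
    return scaled

# Hypothetical part-time slugger: 24 HR in 400 PA.
proj = {"PA": 400, "HR": 24, "R": 55, "RBI": 60, "SB": 3, "AVG": 0.270}
print(rescale_to_600(proj))  # HR scales to 36.0 over 600 PA
```

The point is the same one the article makes: per-PA rates are unreadable on their own, but the same rate stretched over 600 PA is instantly recognizable as a 36-homer pace.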

But we don’t start players for one at-bat. We usually start them either for a game or a week at a time. Over the course of a game, players will get different numbers of at-bats depending on which team they play for, which lineup spot they occupy, where the game is being played, and several other factors. Adam Eaton is going to get more at-bats toward the top of a strong Nationals lineup than Amed Rosario will if he takes the eighth spot in the Mets lineup. And that’s why Razzball calculates dollars per game, $/G, which I strongly prefer to what the Steamer600 gives you. Its usefulness caps out at telling you whom to start on a daily basis, but that’s still applicable pretty often.

But that still doesn’t solve our earlier issue. Abreu will probably start more games than Muncy in the average season, or even in the average week. It’s easy to use dollars per game (or something similar to it) to decide whom to start against roughly similar opponents. But you have to draft players first, and per-game values certainly don’t help with that — they say nothing about how many days you’ll get out of Abreu, or anyone else.

And further, they aren’t nearly as helpful with setting weekly lineups. Four games of a $15/G player and six games of a $10/G player are not both going to be worth $60 because these dollars aren’t total production — they’re production compared to a replacement-level player for one game. A $0/G player is still producing something. A hole in your lineup two games a week is not.
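The arithmetic above can be sketched directly. The -$4 cost of an empty lineup slot is an assumed number for illustration, not a figure from the article’s calculations.

```python
# Sketch of why $/G totals aren't comparable across different games played:
# the games a part-timer sits aren't worth $0, they're a hole in your lineup.
# The -$4/G hole cost is an illustrative assumption.

HOLE_COST = -4.0  # assumed value of an empty lineup slot for one game

def weekly_value(dollars_per_game: float, games: int, slot_games: int = 6) -> float:
    """Value over a week: games played plus the cost of unfilled games."""
    return dollars_per_game * games + HOLE_COST * (slot_games - games)

print(weekly_value(15.0, 4))  # 4 games at $15/G plus 2 holes -> 52.0
print(weekly_value(10.0, 6))  # 6 games at $10/G, no holes    -> 60.0
```

Once the empty games carry a cost, the six-game $10/G player comes out ahead of the four-game $15/G player, which naive $/G totals would never show.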

Taking this comparison further actually helps to illuminate what I mean when I say that traditional systems “break” when playing time is inconsistent. By only measuring the season totals, Razzball allows 87 games of Alvarez’ $32.60/G skills to come out level with 158 games of Ahmed’s -$2.20/G skills. And the only way for that to end up true is to assume that fantasy players are never changing their lineups and to massively penalize hitters for every game they don’t play. And this, of course, makes no sense whatsoever.

It would be easy to write this off as a system breaking only at the edges. Very, very few players are called up and make a massive half-season splash, so why tear down everything for them? But that would ignore the many, many other ways situations like these happen. Half a season of playing time is a distinct possibility for Aaron Judge, Giancarlo Stanton, or Adalberto Mondesi.

And in less extreme cases, it becomes clear we are massively discounting players for missing just an additional seven games or so for a projected 10-day IL stint. In fact, when calculating values with z-scores the traditional way, a 10% drop in Christian Yelich’s projected playing time from 148 games to roughly 133 sinks his value roughly from $43 to $34 — a 21% drop! Every 1% decline in playing time pulled down projected dollars by more than 2% for him. And for players with lower projected values, those differences were even more drastic — a 1% decline in playing time dropped Manny Machado’s draft value by more than 3% and Amed Rosario’s by more than 14%.
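A toy z-score calculation shows where this leverage comes from. The league mean, standard deviation, and per-game rate below are illustrative numbers, not the actual projections behind the Yelich figures.

```python
# Sketch of how season-total z-scores punish playing time: because value is
# (total - mean) / sd, a 10% cut to games can erase far more than 10% of
# value once the total falls toward the league mean. Numbers are illustrative.

def z_value(per_game: float, games: int, league_mean: float, league_sd: float) -> float:
    """Z-score of a season total against the starter pool."""
    return (per_game * games - league_mean) / league_sd

# Hypothetical elite hitter: 0.75 R/G vs. a starter pool averaging 80 R (sd 15).
full = z_value(0.75, 148, 80.0, 15.0)  # 148 games
cut = z_value(0.75, 133, 80.0, 15.0)   # roughly 10% fewer games
print(full, cut, 1 - cut / full)       # the value drop is well over 10%
```

Because the subtraction of the league mean happens on season totals, every lost game comes straight off the top of the player’s surplus, which is why the percentage drop in value outruns the percentage drop in games.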

So, ignoring playing time can accidentally make it far more important than it should be. And removing it entirely severely limits the usefulness of whatever number we find, especially for drafts.

How do players actually help my team?

In order to come out with a solution, I’m first going to take a look at how I might actually manage a team throughout a normal week.

I normally start Kyle Schwarber in one of my outfield spots, and when he plays, he provides excellent production. On Thursday, the Cubs are off, and I leave it empty because I don’t want to burn one of my 162 allowed starts in Schwarber’s outfield spot. But on Saturday, the Cubs sit him against a lefty, and I swap Mark Canha into my lineup because I like the matchup and need to keep pace. This roughly repeats every week, I get 135 starts out of Schwarber, and Canha plays the remaining 27 games I have available for that outfield spot.

Our goal should be to add up how much better or worse than average my lineup will be every single day over the course of a season. Then, we can assign Schwarber and his bench replacement their deserved shares.

So the solution is reasonably simple:

1. Find the z-scores (or CPG) for projected per-game contributions.
2. Weight those contributions by games played.
3. Add in an expected bench contribution, also adjusted to bench games played.
4. Add a positional adjustment to push the last drafted player up to zero.

What we’ll get back is Total Daily VORP — a measure of how much a player contributes to a daily lineup, with all of the players we expect to be our “starters” having positive scores. From there, getting dollars is the same as for any other method — set aside $1 for everyone drafted, then divide up the remaining money based on value.
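As a rough sketch of that recipe: every number below is a placeholder assumption (the per-game z-scores, bench level, and adjustment), not the article’s actual parameters.

```python
# A rough sketch of the Total Daily VORP recipe: per-game z-scores weighted
# by games played, plus a bench contribution for the games the starter sits,
# plus a positional adjustment. All inputs are illustrative placeholders.

SEASON_GAMES = 162

def total_daily_vorp(per_game_z: float, games: int,
                     bench_z_per_game: float, pos_adjustment: float) -> float:
    starter_part = per_game_z * games                       # per-game z, weighted by games
    bench_part = bench_z_per_game * (SEASON_GAMES - games)  # bench fills the rest
    return starter_part + bench_part + pos_adjustment       # positional adjustment last

# A durable regular vs. a platoon bat with a stronger per-game score:
full_timer = total_daily_vorp(0.8, 155, -0.2, 0.0)  # 0.8*155 - 0.2*7  = 122.6
platoon = total_daily_vorp(1.2, 110, -0.2, 0.0)     # 1.2*110 - 0.2*52 = 121.6
```

With the bench filling in at a modest negative level, the 110-game masher lands nearly even with the 155-game regular, which is exactly the comparison season-total systems get wrong.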

Before I talk results, I’m going to add a few more technical specs here. If you don’t want to read them, skip to the next section.

There are some issues with this approach, but compared to what they replace, they are fairly minor. This method assumes that, for any drafted player, we don’t lose any games played — we assume we’re still filling our lineup out despite 106 projected games for Kyle Tucker. I don’t think this is entirely realistic, but it can be modeled in by pushing down the bench production. I went with a number about one z-score below the last starter drafted to capture both how valuable bench players are per game and the risk of taking a zero or two at the end of the year.

If there’s one piece of this project I regret, it’s not getting a more stable result for this number — when I averaged the bench hitters’ values and plugged that into my result, it changed which players made the bench. That, in turn, changed the bench value, and I found myself chasing my tail, mathematically speaking. In the end, I took an average of these bench values and stuck with it. In leagues without a cap on starts per position or with very thin benches, bench value probably needs to be lower, and finding more accurate results is a problem that I plan on tackling in the future. In-season management will likely have a huge impact on this number anyway — if you’re playing waivers well, your bench will be better and will play more, affecting bench replacement.
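The circular dependence can be written as a simple fixed-point loop. This is a sketch of the problem, not the author’s actual solution; the initial guess and convergence tolerance are arbitrary choices.

```python
# Sketch of the circularity: the bench value feeds player values, which
# change who lands on the bench, which changes the bench value. A naive
# fixed-point iteration, illustrative only.

SEASON_GAMES = 162

def total_value(pg_z: float, games: int, bench: float) -> float:
    """Season value: the starter's games plus the bench filling the rest."""
    return pg_z * games + bench * (SEASON_GAMES - games)

def solve_bench(players: list, n_start: int, n_bench: int, iters: int = 100) -> float:
    """players: list of (per_game_z, games) tuples. Iterate until the bench
    value implied by the current rankings stops moving."""
    bench = -0.5  # arbitrary initial guess
    for _ in range(iters):
        ranked = sorted(players, key=lambda p: total_value(p[0], p[1], bench),
                        reverse=True)
        pool = ranked[n_start:n_start + n_bench]  # first players off the board
        new_bench = sum(p[0] for p in pool) / n_bench
        if abs(new_bench - bench) < 1e-9:
            break
        bench = new_bench
    return bench
```

Each pass re-ranks players under the current bench assumption and recomputes the bench from whoever falls just outside the starting lineup; when the rankings stop shuffling, the value stops moving.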

I’ve also built out “traditional” z-score values to compare against, and my results vary slightly from FanGraphs’. Much of this comes from positional adjustments — I typically disagree with the normal practice of giving different adjustments to each position. I still base the positional adjustment on the last starter drafted, which is the norm. But if both Miguel Andujar and Nomar Mazara are filling utility positions, especially in two-utility leagues, do they really need to be treated differently? Just because the last utility player drafted happened to be an outfielder doesn’t make outfield more scarce.

By setting adjustments separately, we create chaos for positions with fewer eligible players. We would punish Anthony Rendon’s value with a lower adjustment just because none of the last 12 utility players are third basemen. It only matters whether the player in the last 3B slot is better than the last utility player drafted. This isn’t the case for 2B and catcher, so they get larger, separate adjustments, but 1B, SS, 3B, and OF get the same amount. I take this approach for both sets of calculations, and that’s what you see compared below — same underlying data, comparable methods, and the same total number of dollars dispersed. If you do try to compare against FanGraphs, make sure to use ATC and set your league to Yahoo standard 12-team settings for 5×5 — things should still be pretty close.
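A sketch of that grouped adjustment, with hypothetical last-starter z-scores. The groupings follow the text (catcher and 2B stay separate, 1B/3B/SS/OF share one); folding the utility slot into the shared group is my assumption.

```python
# Position groupings are an assumption based on the text: catcher and second
# base keep separate adjustments, while the positions feeding the utility
# pool share one. Last-starter z-scores below are hypothetical.

GROUPS = {
    "C": ["C"],
    "2B": ["2B"],
    "shared": ["1B", "3B", "SS", "OF", "UTIL"],
}

def adjustments(last_starter_z: dict) -> dict:
    """Each group gets one adjustment: enough to lift that group's worst
    last starter to exactly zero."""
    adj = {}
    for positions in GROUPS.values():
        group_adj = -min(last_starter_z[p] for p in positions)
        for p in positions:
            adj[p] = group_adj
    return adj

last = {"C": -2.1, "2B": -1.4, "1B": -0.9, "3B": -0.8,
        "SS": -1.0, "OF": -0.7, "UTIL": -0.6}
print(adjustments(last))  # C: 2.1, 2B: 1.4, shared positions all get 1.0
```

Under this scheme a third baseman and an outfielder with the same raw z-score end up with the same adjusted value, so no position in the shared pool gets punished for where the last utility pick happened to come from.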

Whose value changes the most?

So, without further ado, below are the top 25 value risers using TDV instead of a traditional value formula, excluding catchers. If you’re not as familiar with where auction values land on a snake draft board, I recommend checking out Alex Chamberlain’s primer on converting between the two.

Biggest Value Risers, 12-team Yahoo Standard 5×5

If it isn’t obvious, the big gainers in this list are players expected to spend time on the injured list. Judge and Stanton are risky picks, and I’m not advocating spending a fourth-round pick on either. But the first and easiest takeaway here is that we’re downgrading because of injury far too much as is, especially in leagues with more than one IL spot like Yahoo. If you know how long a player will miss and that they’ll come back full-strength, you really shouldn’t be downgrading them by much more than the percentage of the year they’re missing. That’s a lot of ifs, though.

I personally expected this list to be full of platoon options, but it seems like they’re getting drowned out by the sheer number of players mispriced for other reasons. They’re still there (Pederson and Luke Voit say hello), but the bulk of the list is old guys likely to get rest days from time to time. They tend to play first base and tend not to steal bases, which also explains why you don’t see many speed merchants here. But rest assured, Mondesi gets a bump in this system too given his projection to miss some time. So too does Yelich — more on him later.

That said, with so many players rising in value, the extra dollars have to come from somewhere.

Biggest Value Fallers, 12-team Yahoo Standard 5×5

Before you jump to a wildly incorrect conclusion because Jonathan Villar’s value is lower than you want it to be, let’s take a minute to talk about why it is that way.

There are, broadly, two types of players who get to 150 games. Most of them are going to be young players with physical tools that make them valuable both in the field and at the plate. Paul DeJong fits this category, as do Bryan Reynolds and Victor Robles. On the other end of the spectrum are players with large contracts or “best player on a bad team” types — the Padres’ front office isn’t in any hurry to get Eric Hosmer out of their lineup.

Generally, neither group is going to include players with chronic injuries. And many of the best defenders are going to have excellent speed. In other words, it’s not that this formula was designed by me to push down steals producers. It’s just that health is required both for playing every day and for stealing bases.

But, being concerned that there could actually be an effect on steals, I took a look at Lorenzo Cain. He’s the obvious playing-time outlier on this list, but his 139 projected games are about average for 12-team leagues, making him a great test case.

Lorenzo Cain Projected Categorical Values, Traditional vs TDV

I’m purposely leaving off all of the conversions that get us to dollars to focus on category outputs. If you’re not familiar with the under-the-hood z-score components, these numbers reflect how many standard deviations from average Lorenzo Cain’s outputs would be compared to players who should be starting in Yahoo 12-team. The clear takeaway is that Cain is losing his value and falling down my board because his per-game counting stat outputs, especially RBI, are untenably poor. Steals deflation isn’t the culprit. His steals value actually rises in my TDV calculations! The system just correctly penalizes players who don’t pick up counting stats, and so Cain falls.

If you’d like to draw your own conclusions or just see everyone in one place, you can compare the dollar outputs using the Tableau below. The dotted line represents equal value in both systems — players above it gain value using TDV, while those below it lose value.

What else can we learn?

Let’s start with the most interesting tidbit: How good was Yordan Alvarez last year?

Well, assuming that a “bench” player filled out your lineup for the first half of the season, here’s the comparison in Alvarez’ value for last year:

2019 Yordan Alvarez Values Earned

Alvarez’ half-season will get picked apart for all sorts of reasons, but traditional z-scores sell it catastrophically short. If you set the bench player to $0 per game instead of my default of about -$4, he jumps all the way up to $37.88! Considering he was markedly better per game than any of Mike Trout, Ronald Acuña Jr., or Yelich last year, that number seems reasonable.

Yelich in particular also deserves a closer look under this system. His perceived injury risk is scaring away a few potential buyers, so let’s see what happens to his value at different playing time percentiles.

Christian Yelich Values by Playing Time Percentile

You can read these as how much value you would recoup from Yelich if he were to get injured and play only a certain percentage of the season. And the differences are stark. Yelich will clearly provide you value when he plays, and unless you play in a league without IL spots, he’s almost guaranteed to be worth a huge amount. The TDV calculation follows more of what we should expect — great production in a limited sample should still return positive value!

The takeaway should be how wrong the traditional calculation gets it, not that Yelich could still be really, really good if he misses 15 games.

How can we use this?

It’s clear that current systems make projected playing time one of the biggest contributors to the values they spit out. But to be absolutely crystal clear: this is not an endorsement of drafting an entire team of players projected for low playing time. Just like you can’t draft an entire team of sluggers with low batting averages just because they’re the most valuable players available, you shouldn’t be drafting Nelson Cruz, Aaron Judge and Giancarlo Stanton at their calculated value or drafted position. The goal is to get a discount on a few big contributors, while still filling out your lineup as often as possible.

As a result, I am targeting high-volume players to fill my bench and guarantee these conclusions can help me. As important as it is to fill your lineup with high-productivity players, taking “zeros” should be avoided as much as possible.

Why? Because I found that the per-game value of a “zero” was about -$68/G. Yes, an 0-for-4 game at the plate is worse than nothing because it drives down averages. But there are no players getting regular playing time who are, on average, worse than a blank. This system works on the assumption that you’ll be managing your team properly and making sure your shortstops combine for 162 games.

In practice, I’m drafting players with starting jobs to fill my bench. This means taking few fliers on prospects even if some of those players, such as Kyle Tucker, register as great values by this system. I’m choosing to roster the Miguel Sanos of the world instead — my upside plays are coming earlier rather than later. But I’m not afraid to drop my bench players to take a different shot, especially early in the year — if I can catch the next Ketel Marte, I’m not going to hold onto Adam Eaton to do so.

Larger than individual drafts, the focus needs to be on how to smooth the rough edges on what I’ve built so far. More detailed work on how playing time affects the total number of games your fantasy team will play would help me set a more accurate “bench” adjustment rather than the estimate I’m using now. And I could certainly automate more of how I’m developing this to more easily move between league formats.

But personally, I’m more concerned about altering how we discuss player performance. It’s nowhere near enough to call someone a 20-20 player, and shrinking the game to two categories is a huge factor in why speed gets pushed up in industry leagues.

We’re better than that. We’re going to put in the work to build better predictive statistics like barrel rate and CSW rate, to better understand how injury recovery works, and to monitor which prospects can contribute when called up. So we deserve better descriptive tools to evaluate exactly how good we think our guys can be.

Photo by Cody Glenn & Jerome Lynch/Icon Sportswire | Adapted by Zach Ennis (@zachennis on Twitter and Instagram)