Note: Baseball Prospectus has removed the leaderboards mentioned in this article. Thank you for your interest in our work and for your patience as we attempt to resolve this issue.

Last year, the folks at MLB Advanced Media started publishing what is commonly described as “exit velocity”: the pace at which the baseball is traveling off the bat of the hitter, as measured by the new Statcast system.

As a statistic, exit velocity is attractive for several reasons. For one thing, it is new and fresh, and that’s always exciting. It also makes analysts feel like they are traveling inside the hitting process, and getting a more fundamental look at a hitter’s or pitcher’s ability to control the results of balls in play.

However, we’ve seen many people take the raw average of a player’s exit velocities and assume it to be a meaningful indication, in and of itself, of pitcher or batter productivity. This is not entirely wrong: Raw exit velocity can correlate reasonably well with a batter’s performance.

But this use of raw averages also creates some problems. First, if you use exit velocity as a proxy for player ability, then you must also accept that a player’s exit velocity is a function of his opponents, be they batters or pitchers. Put more bluntly, a player’s average exit velocity is biased by his team’s schedule.

Second, and much more importantly, we have concluded that Statcast exit velocity readings, as currently published, are themselves biased by the ballpark in which the event occurs. This goes beyond mere differences in temperature and park scoring tendencies. In fact, it appears that the same hit by the same player will have its velocity rated differently from stadium to stadium, even after you control for other confounding factors.

Third, and this admittedly is a technical point, raw averages are virtually always an inaccurate estimate of a player’s probable contribution to each play. This principle, which follows from the James-Stein estimator, underlies our shift to mixed modeling for all of our new metrics at Baseball Prospectus. The spread of each player’s most likely contributions to average exit velocity is narrower than the raw averages suggest. By using a mixed model, we shrink those raw averages toward the player’s most likely contribution, while at the same time controlling for the other factors described above.
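The shrinkage idea can be illustrated with a toy calculation. This is only a sketch of the principle, not our actual mixed model: the constant `k` below is an arbitrary stand-in for the variance ratio a real model would estimate from the data.

```python
# Sketch of shrinkage toward the grand mean. Players with few batted
# balls are pulled strongly toward the league average; players with
# many batted balls keep estimates close to their raw averages.

def shrink_averages(player_evs, k=50.0):
    """player_evs: dict mapping player -> list of exit velocities.
    k is an assumed shrinkage constant, standing in for the variance
    ratio a real mixed model would estimate."""
    all_evs = [v for evs in player_evs.values() for v in evs]
    grand_mean = sum(all_evs) / len(all_evs)
    shrunk = {}
    for player, evs in player_evs.items():
        n = len(evs)
        raw = sum(evs) / n
        weight = n / (n + k)  # more data -> less shrinkage
        shrunk[player] = grand_mean + weight * (raw - grand_mean)
    return shrunk

evs = {
    "A": [95.0] * 200,  # large sample: estimate stays near 95
    "B": [95.0] * 5,    # tiny sample: pulled hard toward the mean
    "C": [85.0] * 200,
}
estimates = shrink_averages(evs)
```

Players A and B have identical raw averages, but B’s five batted balls tell us far less than A’s two hundred, so B’s estimate lands much closer to the league mean.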

Our new Statcast leaderboards attempt to address these biases in a variety of ways. Our “adjusted exit velocity” metric uses a linear mixed model to control for opposing pitcher / batter and stadium, while also incorporating shrinkage principles to make the new averages a better fit for player performance overall. There are separate leaderboards for pitchers and batters; in both, the column reporting adjusted exit velocity is Adj_Exit_Vel. We advise checking these leaderboards before making any sweeping claims about the significance of a player’s associated exit velocities.

In terms of ballpark bias, how much of an effect can a particular park have? As it turns out, a fair amount. Here is the table of generated intercepts for each stadium and its effect on average, adjusted exit velocity during the 2015 season:

Stadium   Park Effect
ARI        1.17
ATL        0.02
BAL        0.98
BOS       -0.02
CHC        0.18
CIN       -0.96
CLE        0.32
COL       -0.37
CWS       -0.21
DET        0.85
HOU       -0.96
KC         0.46
LAA       -0.19
LAD        0.26
MIA        0.09
MIL       -0.02
MIN        0.21
NYM       -0.94
NYY       -0.14
OAK       -0.02
PHI       -0.02
PIT        0.11
SD        -0.45
SEA        0.05
SF         0.39
STL       -0.60
TB         0.10
TEX       -0.04
TOR       -0.05
WSH       -0.20

As you can see, this amounts to a difference of over 2 mph from the fastest to the slowest stadium reading for what should be essentially the same hit. This is less significant for batters, but for pitchers—who have an adjusted exit velocity range of only about 3.5 mph total—we are talking about a potentially significant impact.
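To make the adjustment concrete, here is a small sketch using the intercepts from the table above. It assumes, purely as an illustration, that the intercepts act as simple additive offsets on the reading, so subtracting a park’s intercept yields a park-neutral estimate.

```python
# Stadium intercepts from the table above, in mph.
park_effect = {
    "ARI": 1.17, "ATL": 0.02, "BAL": 0.98, "BOS": -0.02, "CHC": 0.18,
    "CIN": -0.96, "CLE": 0.32, "COL": -0.37, "CWS": -0.21, "DET": 0.85,
    "HOU": -0.96, "KC": 0.46, "LAA": -0.19, "LAD": 0.26, "MIA": 0.09,
    "MIL": -0.02, "MIN": 0.21, "NYM": -0.94, "NYY": -0.14, "OAK": -0.02,
    "PHI": -0.02, "PIT": 0.11, "SD": -0.45, "SEA": 0.05, "SF": 0.39,
    "STL": -0.60, "TB": 0.10, "TEX": -0.04, "TOR": -0.05, "WSH": -0.20,
}

# Spread between the hottest and coldest park readings:
spread = max(park_effect.values()) - min(park_effect.values())
print(round(spread, 2))  # 2.13 mph (ARI at +1.17 vs. CIN/HOU at -0.96)

# Example: a 95.0 mph reading in Arizona, netted of the park's boost:
neutral = 95.0 - park_effect["ARI"]  # about 93.83 mph
```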

The existence of this stadium bias is not really that surprising: While each team does regularly calibrate its Trackman radar systems for consistency, a number of factors conspire against them. First, ballparks have different geometries, which means the radar is not placed in exactly the same spot in every city. In other words, exit velocity is probably not measured at precisely the same “point” at each ballpark. Moreover, the equipment has inherent variability of its own, in a manner that is unique to each ballpark.[i] A radar installed in Philadelphia will not lose calibration in the same way, nor at the same time, as one installed in Los Angeles, due both to the environment and to each park’s unique installation.

How do we know these differences are a function of equipment bias, rather than just park scoring tendencies? First, any experienced eye can tell that the intercepts above do not correlate with known park factors. Cincinnati is not a place where hits go to die, and San Francisco is not a ballpark known for inflating offense. Naturally, though, we also tested this statistically. To control for stadium run-scoring, and to make sure this wasn’t just some internal BP artifact, we tested our hypothesis using the pitcher park factors calculated for the 2015 season by our friends at Baseball Reference. The result? Even controlling for temperature and each park’s inherent scoring factor, and even shrinking each stadium factor toward the grand mean, the further effect of having exit velocity measured at different stadiums was still statistically significant (p < .05). Our leaderboards therefore control for this important bias.

Finally, our leaderboards also go one step further, and translate exit velocity and launch angle into estimated runs generated/prevented by the player. These run estimates account for both adjusted exit velocity and launch angle, as it is the combination that really matters. To our knowledge, no other public leaderboard does this. We’ve also made the descriptions a bit less cryptic. On our Statcast leaderboards, these calculations are now described as follows:

· Pred_Runs: the number of runs we would expect the batter / pitcher to generate / prevent based on the adjusted exit velocity and launch angle of the ball off the bat.

· Pred_Runs_Rate: this is the column we sort by, and it tells you the average effect, per batted ball, that the player’s presence has on run-scoring, per their adjusted exit velocity and launch angles.

· Act_Runs: the raw number of runs generated while the batter / pitcher was involved so far this season.

· Act_Runs_Rate: the raw average run effect, per batted ball, while the batter / pitcher was involved so far this season.

· Act – Pred: the differential between the outcome of batted balls with the batter / pitcher involved and what our models would have predicted. The differential suggests how lucky / unlucky a player has been so far this year. A positive value means that the batter/pitcher was predicted to create/allow fewer runs than they actually did.

· BIP+HR: the number of batted balls at issue, comprising balls in play (non-HR, fair balls) and home runs.

We hope you find these new leaderboards useful, and also want to thank MLBAM for making this exit velocity data publicly available.

Bibliography

Douglas Bates, Martin Maechler, Ben Bolker, Steve Walker (2015). Fitting Linear Mixed-Effects Models Using lme4. Journal of Statistical Software, 67(1), 1-48. doi:10.18637/jss.v067.i01.

R Core Team (2016). R: A language and environment for statistical computing. Version 3.3.0. R Foundation for Statistical Computing, Vienna, Austria. URL https://www.R-project.org/.

Wood, S.N. (2006) Generalized Additive Models: An Introduction with R. Chapman and Hall/CRC.

All Statcast data courtesy of Baseball Savant, https://baseballsavant.mlb.com/

Models:

For those who enjoy the technical aspects, here are the details on our models.

Our adjusted exit velocity model is as follows:
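As a sketch of a model of this type, assuming random intercepts for batter, pitcher, and stadium (the exact terms are our reconstruction from the description in this article, not necessarily the specification as fit):

```latex
\mathrm{EV}_{i} = \mu + u_{b(i)} + u_{p(i)} + u_{s(i)} + \varepsilon_{i},
\qquad
u_{b} \sim N(0, \sigma_b^2),\;
u_{p} \sim N(0, \sigma_p^2),\;
u_{s} \sim N(0, \sigma_s^2)
```

Here $\mathrm{EV}_i$ is the measured exit velocity of batted ball $i$, and $b(i)$, $p(i)$, and $s(i)$ index its batter, pitcher, and stadium. The fitted stadium intercepts $u_s$ are the park effects tabled above, and the random-effects structure is what produces the shrinkage toward the grand mean.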

Our linear weights model incorporating measured hit speed and launch angle is as follows:
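A surface of this kind is typically fit as a smooth function of its two inputs; as a sketch, assuming a tensor-product smooth of the sort mgcv produces (again our reconstruction, not necessarily the exact specification):

```latex
E[\text{run value}_i] = f(\mathrm{EV}_i, \theta_i)
```

where $\theta_i$ is the launch angle of batted ball $i$ and $f$ is a smooth surface estimated from observed batted-ball outcomes, capturing the interaction of speed and angle that the text above describes.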

Our raw run rate model is as follows:

Finally, our expected run rate model is as follows:

All models are fit with the lme4 package in the R computing environment, except for the generalized additive model (GAM), which is fit with the mgcv package.

All variables were tested on the 2015 season. Each variable was tested with a likelihood ratio test against a reduced model, and then further checked with 10-fold cross-validation, from which we took the mean absolute error over all cross-validation runs. If the variable reduced error in at least the 10-fold cross-validation test, it was added; otherwise, it was rejected.
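The screening loop above can be sketched in a few lines. The models and data here are stand-ins (an intercept-only "reduced" model against one with a single added variable, on synthetic data), not our actual models:

```python
import random

def kfold_mae(fit, predict, X, y, k=10, seed=0):
    """Mean absolute error of a fit/predict pair over k CV folds."""
    idx = list(range(len(y)))
    random.Random(seed).shuffle(idx)
    errs = []
    for f in range(k):
        test = set(idx[f::k])
        train = [i for i in idx if i not in test]
        m = fit([X[i] for i in train], [y[i] for i in train])
        errs += [abs(predict(m, X[i]) - y[i]) for i in test]
    return sum(errs) / len(errs)

def fit_mean(X, y):
    # Reduced model: predict the training mean for everything.
    return sum(y) / len(y)

def fit_linear(X, y):
    # Full model: simple least-squares line using the candidate variable.
    xbar, ybar = sum(X) / len(X), sum(y) / len(y)
    b = sum((x - xbar) * (v - ybar) for x, v in zip(X, y))
    b /= sum((x - xbar) ** 2 for x in X)
    return (ybar - b * xbar, b)

# Synthetic data where the candidate variable genuinely matters.
rng = random.Random(1)
X = [rng.uniform(0, 10) for _ in range(200)]
y = [2.0 * x + rng.gauss(0, 1) for x in X]

mae_reduced = kfold_mae(fit_mean, lambda m, x: m, X, y)
mae_full = kfold_mae(fit_linear, lambda m, x: m[0] + m[1] * x, X, y)
keep_variable = mae_full < mae_reduced  # variable earns its place
```

In the real procedure the candidate must also survive the likelihood ratio test, but cross-validated error is the gate that decides inclusion.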



[i] Most of this can be attributed to the radar's limited ability to eliminate “clutter,” or non-baseball objects like the ground, the umpire, a passing bird, or a stray D-cell battery. Artifacts like this can slow or inhibit the system's ability to identify the moment when bat hits ball.