As the eighth annual MIT Sloan Sports Analytics Conference approaches this weekend, I find myself thinking more and more about the next frontier for quantitative analysis. Authorship certainly isn’t a problem, as there’s no lack of metric creation out in the wild. Data, once a problem outside the world of baseball, are widespread and rapidly expanding into spectra that wouldn’t have been remotely imaginable at the turn of the century. Awareness is steadily rising; the Phillies became the last Major League Baseball team to hire a stat guy, and 29 of 30 NBA teams were represented at last year’s Sloan conference. (The lone holdout, the Los Angeles Lakers, were shamed into attending this weekend’s conference.)

Understanding, though? That’s still hit or miss. There are really smart executives, coaches, and players who have either managed to neutralize the idea of analytics or flat-out rejected it. In many cases, I find the expert in question is really just misinterpreting a statistical concept or stretching it beyond its reasonable limits. In others, impossible straw men are drawn up that disqualify not only analytics from adding anything to the discussion, but also any sort of intelligent thought about how to win at your particular sport.

Which is to say that both the concept of analytics and the actual ideas behind analytics are probably being sold short by those holding out. The popular reasoning is that analytics should coexist with traditional measurements and concepts, and in many cases, that works perfectly. It’s also a catchall that doesn’t always fit. There are some situations where analytics are totally useless; I wouldn’t use a quantitative metric to figure out which left tackle I should draft, for one. There are others where analytics so thoroughly answer the question that the conventional wisdom is simply wrong.

Analytics, as seen by the uninitiated, often get summed up as alphabet-soup models that are as impossible to calculate as they are to understand. And yes, certainly, concepts like WAR and Corsi and DVOA are part of the analytics equation. But more often, analytics aren’t really all that advanced at all. It’s not about reducing sports to numbers; it’s about finding evidence. That seems obvious in 2014, but it’s not difficult to find a bevy of comments from this year, from successful people within the American sports community, that either misinterpret analytics or reject them in favor of an outdated or inaccurate worldview. Let’s run through them and see if there are any consistent mistakes being made, and what that can tell us about the steps the analytics community still has to take in communicating how these concepts work.

Ken Whisenhunt

Let’s start in Tennessee, where the always excellent Paul Kuharsky recently recapped a radio interview with new Titans coach Ken Whisenhunt. Kuharsky wondered whether Whisenhunt might be interested in or open to analytics by virtue of his civil engineering degree, but that wasn’t quite the case. Whisenhunt said he doesn’t really pay attention to analytics, “because I probably don’t understand it,” and then confirmed that with his subsequent statements.

This is the way to look at it from a perspective of play calling. I can’t tell you thousands and thousands of plays that you’ve gone in there and you’ve prepared to see a defense and you can run all the analytics that you want but there is no guarantee on third-and-1 in a critical situation in the game that they are going to play the defense they’ve shown 99 out of 100 times. It just doesn’t happen.

What Whisenhunt’s talking about here, I think, is that part of his job as a playcaller is to try to figure out what the other team is going to call and adapt accordingly. That’s game theory! It’s hard to think of a more analytics-friendly concept, and indeed, plenty of papers have been written on maximizing playcalling efficiency in football by employing game theory, including this 2009 paper by Freakonomics author Steven Levitt and Ken Kovash, the latter of whom pulled off one of the most impressive feats of this offseason by managing to remain employed by the Cleveland Browns front office. In this case, analytics — perhaps not the analytics Whisenhunt is imagining — agree with Whisenhunt’s concept wholeheartedly.
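The game-theory idea can be made concrete with a toy example. The sketch below uses entirely hypothetical yards-per-play payoffs (nothing from the Kovash-Levitt paper) and solves a two-by-two run/pass game for the offensive mix that leaves the defense with nothing to gain by cheating toward either front:

```python
# Toy sketch of play calling as a 2x2 zero-sum game.
# Payoffs are hypothetical expected yards per play for the offense:
# rows are the offense's call, columns are the defense's call.
payoffs = {
    ("run",  "run_defense"):  2.0,
    ("run",  "pass_defense"): 5.0,
    ("pass", "run_defense"):  7.0,
    ("pass", "pass_defense"): 3.0,
}

a = payoffs[("run", "run_defense")]
b = payoffs[("run", "pass_defense")]
c = payoffs[("pass", "run_defense")]
d = payoffs[("pass", "pass_defense")]

# The offense's optimal mix makes the defense indifferent between its
# two calls: p*a + (1-p)*c == p*b + (1-p)*d, solved for p.
p_run = (c - d) / ((c - d) + (b - a))

print(f"Run on {p_run:.0%} of snaps, pass on {1 - p_run:.0%}")
```

The interesting property is exactly the one Whisenhunt gestures at: the optimal offense doesn’t just play its best call against the most likely defense, it randomizes so that a tendency shown “99 out of 100 times” can’t be exploited.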

That said, I’m not sure his explanation makes a lot of sense. It might be taken to the extreme, but if you’re a playcaller and you see a team line up in a particular defensive front on third-and-1 99 times out of 100, aren’t you going to assume they will line up in that front when you suit up for the 101st time? Think about it like a punt coverage: You never see the punting team, say, line up with five guys on the line because it thinks this might be the one exception where the opposition doesn’t line up in a traditional punt-return formation.

There is always the human element in there, I think. Listen, you’re right, I’m an engineer. I understand the trends, I understand the probabilities, I understand all that. But if you get so wrapped up in analytics sometimes, you lose a feel for the game. And to me, there is an emotional side of the game and there is also a feel for the game. When you see a guy like [Frank] Wycheck make a one-handed catch in the back of the end zone with the guy draped all over him, how do you put an analytic on that?

As an aside: I always love when people use “to me” at the beginning of a sentence. It’s supposed to imply this is some closely held point that reveals something about the person talking, but it’s almost always some widely held sentiment that seems obvious. Everyone agrees there’s an emotional side of the game and a feel for the game, right?

Here, though, Whisenhunt holds analytics to an impossible, arbitrary standard. (He also uses the word in a sentence the way your mom would talk about somebody “doing a rap” or “writing a blog.”) Of course there’s no metric that implies or encapsulates Frank Wycheck’s spectacular one-handed catches in the back of the end zone. We could invent one, certainly, but I doubt that Tight End One-Handed Catches (TEOC) would catch on or be of much use.

Put Whisenhunt’s standard in a different context and you can see why it’s silly. Imagine, for a moment, he was making the same argument against the idea of reducing players to X’s and O’s and bothering to come up with a scheme or play design. There’s no play design in history that’s specifically going to call for the quarterback to throw a ball out of Wycheck’s range and have him catch it with one hand, right? You might know Wycheck is good in the red zone, or that your tight end is your safest target against soft zones from linebackers, and you might draw up a play where Wycheck is your first target, but you would never, as a playcaller or an offensive mind, draw up a specific play where Wycheck was supposed to catch the ball in the back of the end zone with one hand. That doesn’t reduce play design or offensive scheming into irrelevance.

And, likewise, you might use analytics to conclude that Wycheck has been wildly successful in the red zone during his career, or that passes to your tight end in the red zone are less likely to be intercepted than passes to any other target, and that might encourage you to throw the ball to Wycheck in the end zone. Analytics, just like play calling or proper play design, are designed to help put you in the best situation possible and make it easiest for you to succeed. They create the best process, and when the outcome turns out to be a one-handed catch, that is what’s called a bonus.

Kevin Mawae

#NFLCombine #s can't measure heart, commitment, integrity, attitude, character, determination or more; many guys w great #s never pan out — Kevin Mawae (@KevinMawae) February 23, 2014

Kevin Mawae, one of the best centers in the history of modern football, rehashes a classic argument against the combine, which yields some of the oldest analytics in the book. (Like passer rating, the metrics produced by the combine have been around for so long that the league has accepted them, even if they’re not of much use.) To some extent, I agree with Mawae: The combine is of limited utility, and has to be taken in context with a player’s college performance, his conduct and knowledge expressed during team interviews, and his medical condition. And, yes, doctors actually do measure your heart at the combine.

You hear these arguments in favor of intangibles as arguments against analytics all the time, and they don’t really fly. I don’t think anybody worth their salt who puts even a tiny bit of stock in numbers doubts that the list of qualities Mawae posted matter. A player’s constitution can help get the most out of what he has, even if he lacks the physical characteristics associated with truly great players.

To suggest those intangible attributes are what determines who plays well at the next level is incomplete and likely unfair. Just as there are players with great athletic ability who fail to apply themselves and wash out of the NFL, there are plenty of guys who give every last ounce of heart and effort they have to the NFL and fail to succeed because they lack the ability or physicality to play at the next level.

If it were really all about heart, wouldn’t the NFL consist almost entirely of college walk-ons who suited up for the love of competition? Wouldn’t Russell Wilson and Michael Jordan, athletes with incredible heart and drive, have succeeded in baseball? Wouldn’t the many ex-NFL players who have become general managers know to look past the fool’s errand of athleticism to go for a teamful of gritty, undersize tough guys? It’s an incredible coincidence, then, that the guys who have the heart, commitment, and integrity to succeed at the professional level just happen to be giants with incredible quick-twitch skills in Division I colleges.

Analytics like the ones produced by the combine probably aren’t going to quantify heart or determination. That’s fine. There’s nothing wrong with making those things part of the discussion in terms of evaluating a player. What analytics might be able to do, though, is use history to figure out the most meaningful and telling characteristics among the things you can quantify, and how those factors interact with the things that can’t be calculated. It’s all part of the puzzle.

Tony La Russa

Legendary Athletics and Cardinals manager Tony La Russa thinks newfangled metrics are keeping Jeff Bagwell out of the Hall of Fame:

Otherwise, Jack Morris would be in the Hall of Fame … the new metrics have a real important place, just don’t exaggerate them, and I think they get exaggerated at times. Like with Jack Morris, and maybe Bagwell.

What La Russa is saying, of course, is that you need to keep something like WAR or ERA+ on equal footing with RBIs or pitcher wins. Which is ridiculous. There’s no newly introduced advanced metric keeping Bagwell out of the Hall of Fame, nor is the electorate that hasn’t voted for him particularly dependent upon new advanced metrics. (Some are, of course.) The popular JAWS system developed by Jay Jaffe paints Bagwell as the sixth-best first baseman in league history and ahead of the typical Hall of Fame candidate in every way. OPS+ has him as the 36th-best hitter in baseball history, and he’s 37th in positional bWAR. The only reason he isn’t in the Hall of Fame is because voters have arbitrarily decided that anybody who hit home runs in the 1990s was on steroids.

Morris is kept out, meanwhile, because the new metrics have revealed for a decade-plus now that the arbitrary cases once made for Morris don’t really fly, and that he was just about a league-average pitcher. The “pitching to the score” argument has been refuted repeatedly, not by some advanced metric, but by simply looking back at Morris’s career and pointing out that he didn’t exhibit any ability to do so. The metrics that adjust Morris’s career performance for his run support and the context in which he played, to be clear, are miles better than the traditional methods of evaluating a player’s performance, and every front office in baseball would tell you so. The new metrics are not being improperly exaggerated here. The old ones are.

Ron Washington

Ron Washington was one of the featured characters in Moneyball, remember? So it hurts the most when he says things like this about the sabermetric opposition to the sacrifice bunt:

I think if they try to do that, they’re going to be telling me how to [bleep] manage. That’s the way I answer that [bleep] question. They can take the analytics on that and shove it up their [bleep][bleep].

Wow! One can envision Washington, abandoned by his peers, grumbling as he slowly retreats against the tide. At last, he establishes a final beachhead from which to keep the game he loves from being overtaken — overtaken by people examining history to figure out which methodologies will make it easiest to win that game. He goes on:

Mike Scioscia dropped 56 sacrifice bunts on his club, the most in the league, and he’s a genius. But Ron Washington dropped 53 and he’s bunting too much? You can take that analytics and shove it. I do it when I feel it’s necessary, not when the analytics feel it’s necessary, not when you guys feel it’s necessary, and not when somebody else feels it’s necessary. It’s when Ron Washington feels it’s necessary. Bottom line. … The percentages for me in that situation go up by [some of his lesser hitters] squaring and bunting it rather than me allowing them to swing.

I’m not sure why Washington thinks Scioscia has been deified for his usage of the sacrifice bunt. It’s certainly not my place to speak for baseball sabermetricians, but my impression is that they would frown upon Scioscia’s usage of the sacrifice bunt, too.

Jason Collette covered Washington’s comment and what sacrificing actually accomplished for the Rangers last year in a FanGraphs piece published Wednesday. The answer is, well, not much. The Rangers actually sacrificed more frequently than the Angels, 45 to 37, with 19 of those bunts coming with a runner on first and nobody out. We can figure out the run expectancy for this simple situation by — and this is going to really piss Washington off — simply going back and calculating how many runs each team scored when they had a runner on first and nobody out, and how that changed when teams had a runner on second and one out. Baseball Prospectus has a report that does just that, and it notes that sacrifice bunting reduced a team’s run expectancy for that inning from .83 runs to .64 runs in 2013. The same is true of most previous years.
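The method is simple enough to sketch. The two league-wide figures quoted above (.83 runs and .64 runs) come from Baseball Prospectus’s 2013 report; the handful of half-innings below are invented purely to show the arithmetic:

```python
# Sketch of a run-expectancy comparison computed from a toy
# play-by-play sample. The records below are invented; each one is
# (base_out_state, runs scored from that point to the end of the inning).
sample = [
    ("runner_on_1st_0_out", 1), ("runner_on_1st_0_out", 0),
    ("runner_on_1st_0_out", 2), ("runner_on_1st_0_out", 0),
    ("runner_on_2nd_1_out", 1), ("runner_on_2nd_1_out", 0),
    ("runner_on_2nd_1_out", 0), ("runner_on_2nd_1_out", 1),
]

def run_expectancy(state, records):
    """Average runs scored through the end of the inning from a state."""
    runs = [r for s, r in records if s == state]
    return sum(runs) / len(runs)

before = run_expectancy("runner_on_1st_0_out", sample)
after = run_expectancy("runner_on_2nd_1_out", sample)
print(f"Before bunt: {before:.2f} runs, after bunt: {after:.2f} runs")
```

Scaled up to every half-inning in a season, that averaging is the whole trick: no model, no projection, just counting what actually happened from each base-out state.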

When Washington talks about playing the percentages, he’s simply wrong. As Collette notes, The Book, authored by sabermetrician Tom Tango and others, goes into lengthy detail about the percentages and when it makes sense to execute a sacrifice bunt. Tango uses history — the same history Washington is attempting to make sense of and apply by way of memory — to find that sacrifice bunts were grossly overused and rarely made sense. This is not a question of analytics; it’s a question of whether one human’s brain is more effective than a computer at memorizing hundreds of thousands of outcomes across several decades, and the answer should be obvious.

Washington isn’t being old-school or traditional with his comments. He’s being obstinate and wasteful. You can understand why he would want to manage a team based upon the principles of the baseball he has seen coming up into the game, and there are ways he can make an impact on his team that can’t be measured by sabermetrics. But the sacrifice bunt is a place where there is almost no space for discussion. Washington is actively making his team worse, and even worse, he’s indignant about doing so. Can you imagine a CEO running a business this way? You can? Shit.

Throughout these arguments against analytics and quantitative analysis, we see some consistent focuses. There’s an emphasis on older methodologies, even when they’ve been surpassed by options whose superiority is easily provable. There’s the misconception that statistics need to encapsulate everything to justify their usage, a standard that doesn’t apply to any traditional method of analysis. And there’s a tendency to dismiss concepts that might be too difficult to understand as a waste of time, which is unfortunate.

Because of that, I’m really inclined to think the most important thing stat geeks can do in 2014 is not develop new statistics, but do a better job of explaining the metrics that already exist. The best organizations — some of which have employed or do employ the players and coaches I referenced above — don’t necessarily have the best methodologies or the most advanced quantitative analysis, although some do. Instead, they make the most of the metrics they do have by communicating what they do know throughout the organization and implementing it in meaningful ways. It’s the Pirates and their dramatic defensive shifts, a move that unquestionably pushed them into the playoffs a year ago. Or Sam Presti and Oklahoma City’s philosophy of constantly questioning what they think they know. As Sloan approaches its 10th birthday, plenty of owners and general managers will happily stop by and announce they’re interested in analytics. For things to keep changing and for evidence-driven analysis to improve teams’ chances of winning, though, the people talking and writing about those metrics will need to do a better job of communicating them to the nonbelievers. There’s still a lot to learn. There’s also already a lot to say.