This post by J.C. Bradbury collects some comments on the notion of sabermetric groupthink: the idea that those of a sabermetric bent, like me, tend to unjustly treat anyone who doesn't adhere strictly to sabermetric orthodoxy as a moron. I'll leave aside the question of whether a particular sabermetric orthodoxy exists. Rob Neyer addresses that question well
here. I want to make a more general point.
The overarching sabermetric philosophy is not, at its core, about numbers. The core sabermetric principle, as I understand it, is that baseball must be analyzed as a science. That's it. If your analysis of baseball is scientific, it is sabermetric. Of course, the "gotcha" here is that science is empirical. It is very, very hard to do science that spurns numbers altogether, because numbers are the language of empirical work. Thus sabermetrics tends to focus on numbers, on the quantitative over the qualitative.
Sabermetrics does not reject qualitative analysis. There is certainly a role for scouting and experience in baseball, and no sabermetrician worth his salt disputes this. However, qualitative analysis cannot be an excuse to flout the systematic application of scientific principles. It, too, must rest on a sound empirical basis, and it, too, must be vetted empirically.
How might a team do this? A good start is trying to systematically quantify how good your scouts are. Which scouts provide the best reports? How much information do these reports provide beyond what is available statistically? Can scouting data be incorporated into a useful model of player performance?
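To make that concrete, here is a minimal sketch of the kind of study a team might run. Every number and field name below is invented for illustration; the idea is simply to regress actual outcomes on a stat-based projection alone, then on the projection plus scout grades, and ask whether the grades add explanatory power:

```python
import numpy as np

# Hypothetical prospects (every number here is invented for illustration):
# stat_proj   = projection built from minor-league statistics alone
# scout_grade = the scout's overall grade on the 20-80 scale
# outcome     = the player's actual performance later (say, wOBA)
stat_proj   = np.array([0.310, 0.325, 0.298, 0.340, 0.315, 0.330, 0.305, 0.320])
scout_grade = np.array([50.0, 60.0, 45.0, 70.0, 55.0, 50.0, 40.0, 65.0])
outcome     = np.array([0.315, 0.335, 0.290, 0.355, 0.310, 0.325, 0.295, 0.340])

def r_squared(X, y):
    """Ordinary least squares fit; returns in-sample R^2."""
    X = np.column_stack([np.ones(len(y)), X])      # add an intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_stats_only  = r_squared(stat_proj, outcome)
r2_with_scouts = r_squared(np.column_stack([stat_proj, scout_grade]), outcome)

print(f"R^2, stats only:    {r2_stats_only:.3f}")
print(f"R^2, stats + scout: {r2_with_scouts:.3f}")
```

With a sample this small the in-sample R^2 means little, and a real study would use hundreds of player-seasons and out-of-sample validation. But the shape of the exercise is the point: the scouts' contribution becomes a measurable quantity rather than an article of faith.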
The point is that no matter what analysis you are undertaking, it must be systematic. You must know, in advance, how new data will inform your thinking. Too often this is not the case. Too often numbers are used ex post facto to provide faux-intellectual cover for unsystematic decisions. Too often numbers are used to confirm preexisting biases of those using them. Too often numbers are ignored when they provide evidence that runs counter to a cherished belief. Too often people use scouting and experience as escape hatches to avoid having to deal with the rigors of systematic, scientific analysis.
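What does "knowing in advance" look like in practice? One simple version is Bayesian updating: you commit to a prior, and the arithmetic then dictates exactly how far any new stretch of data moves your estimate. A toy sketch, with an invented prior and an invented stat line:

```python
# Beta-binomial updating: decide *before* the season how new data will move
# your estimate of a hitter's true on-base skill. All numbers are invented.

# Prior: roughly league-average OBP (.340), weighted like 300 plate appearances.
prior_obp, prior_weight = 0.340, 300
alpha = prior_obp * prior_weight           # prior "times on base"
beta  = (1 - prior_obp) * prior_weight     # prior "outs"

# New data arrive: 90 times on base in 200 PA (a scorching .450 clip).
times_on_base, pa = 90, 200

# The posterior mean is fully determined by the rule chosen ex ante.
posterior_obp = (alpha + times_on_base) / (alpha + beta + pa)
print(f"Updated estimate of true OBP skill: {posterior_obp:.3f}")  # 0.384
```

The specific prior is debatable; what matters is that the updating rule is fixed before the data arrive, so a hot streak can't be promoted or dismissed after the fact.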
This kind of after-the-fact reasoning is what drives sabermetricians crazy. It's not that we can't deal with scouts. It's not that we don't like baseball stories and anecdotes. It's not that we think men with experience have nothing to offer. Far from it. No, the problem is that we cannot stand the unsystematic, unscientific analysis that those in highly visible positions so often engage in. It's lazy and, worse, absolutely wrong. It must be shunned wherever it is found.
Let me close with a quotation from Malcolm Gladwell from
this interview with Bill Simmons:
That's why I'm such a fan of the "Moneyball" generation of baseball GMs: It's not so much that their analytical tools are brilliant ways of predicting baseball success (and I have my doubts, sometimes), it's simply that they have an analytical tool. And when it comes to personnel evaluation, any tool is better than no tool...
Bingo. The merits of any particular tool, whether it be batting average, on-base percentage, VORP, or scouting reports, are always up for debate. The important thing is that you have a tool and that you apply it systematically.
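Even the simplest tool can be applied systematically. A sketch, using on-base percentage and invented stat lines:

```python
# One simple tool -- on-base percentage -- applied identically to every player.
# The stat lines are invented for illustration.

def obp(h, bb, hbp, ab, sf):
    """OBP = (H + BB + HBP) / (AB + BB + HBP + SF)."""
    return (h + bb + hbp) / (ab + bb + hbp + sf)

players = {
    "Player A": dict(h=160, bb=70, hbp=5, ab=520, sf=4),
    "Player B": dict(h=175, bb=35, hbp=2, ab=560, sf=6),
}

# The rule is fixed in advance, so the ranking follows from the tool,
# not from whoever we already wanted to win the argument.
for name, line in sorted(players.items(), key=lambda kv: -obp(**kv[1])):
    print(f"{name}: {obp(**line):.3f}")
```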
**EDIT**
Here is a link to the Ken Rosenthal article that started it all. I like Ken. He does good work. Unfortunately, this article is an example of exactly what I'm talking about above. Ken throws out a bunch of numbers and throws in some other observations for good measure. And the result is... what exactly? How does he propose to use all this information to come up with a decision? Ken doesn't say.
Let me highlight this extended quote:
The first criterion for the award is "actual value of a player to his team, that is strength of offense and defense." Twenty-four of Mauer's 114 starts this season — more than one-fifth — have been at designated hitter, a position that requires no defense. Mauer also trails other candidates in the second criterion, number of games played.
When Mauer first stepped onto the field on May 1, the Twins already were 22 games into their season. Mauer obviously cannot be faulted for needing to recover from offseason kidney surgery, but two other MVP contenders — Tigers first baseman Miguel Cabrera and Jeter — have appeared in 141 and 139 games, respectively. Mauer has appeared in 120.
Am I nitpicking? Perhaps. But Mauer's absence in April, combined with his time at DH, raises the possibility another candidate may — repeat, may — be worthier. It certainly creates the opportunity for debate, which is my entire point.
Gee, if only we had a systematic way to weigh all these factors (playing time, quality of performance, positional adjustments, etc.) to come up with an answer to our question! Oh, shit, we do! We have tons of them, and they all originate in sabermetrics.
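For concreteness, here is a toy version of such a system. The weights and stat lines are invented, and real metrics like VORP or WAR are far more careful about replacement level, park effects, and defense, but the structure is the same: commit to the formula first, then let it rank the candidates.

```python
# A toy ex-ante value system: the formula is committed to first, then it
# ranks the candidates. Every weight and stat line below is invented.

# Positional adjustments, in runs per game (catchers get credit, DHs pay rent).
POS_ADJ = {"C": 0.08, "SS": 0.05, "1B": -0.06, "DH": -0.10}

candidates = [
    # (name, games played, offensive runs above average per game, position)
    ("Catcher X",       120, 0.30, "C"),
    ("Shortstop Y",     139, 0.18, "SS"),
    ("First Baseman Z", 141, 0.25, "1B"),
]

def value(games, off_per_game, pos):
    """Total value = games * (offense per game + positional adjustment)."""
    return games * (off_per_game + POS_ADJ[pos])

for name, g, off, pos in sorted(candidates, key=lambda c: -value(*c[1:])):
    print(f"{name}: {value(g, off, pos):+.1f} runs")
```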
So is there still room for debate? Of course there is! None of these systems is complete. They all have weak spots. Some are better than others. We can debate the merits of any particular system until the cows come home. The point is that you can't just throw out a bunch of disjointed pieces of information and then pull an answer out of your ass, not if you want that answer to have any claim to validity. You must establish, ex ante, how to determine who the best player is, and then you must let the results of that process, that system, provide the best answer.