One of the things that always strikes me about statistical analysis is that it usually involves talking about baseball in the realm of the infinite. While this practice is theoretically correct, the thing that makes baseball interesting is that it is played in a finite series of events. A team does not have an infinite amount of time to allow its true talent to come through. It has only 162 games.
The result is that as soon as we start playing games, teams start accumulating "error": the gap between their actual performance and their true talent or expected performance. In turn, it becomes quite easy for teams to vary wildly in how far they've strayed from that expectation.
(Keep in mind that when I talk about expected performance, the assumption is that the expectation itself is accurate. I know that won't always be the case, but bear with me.)
For example, let's say I have a coin that I will be flipping 10,000 times. We expect that this coin, if it is a fair coin, will come up heads 5000 times and tails 5000 times. Why? Because a fair coin is expected to come up heads half of the time and tails half of the time and I have 10,000 remaining flips.
So I make the first flip and it's heads. Two things happen immediately. First, and least consequential, is that I now have the world's smallest sliver of a fraction of a shadow of a glimmer of a doubt that the coin is biased towards heads. Naturally, this evidence would have a confidence level bordering on zero.
Secondly, and more importantly, my expectations have changed. Previously, I was expecting 5000 heads and 5000 tails. Now, I'm expecting 5000.5 heads and 4999.5 tails. Why? Because a fair coin is expected to come up heads half of the time and tails half of the time and I have 9,999 remaining flips.
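Here's a minimal sketch of that update, assuming nothing beyond the arithmetic above: the expected final head count is just the heads already banked plus half of whatever flips remain.

```python
# Hypothetical sketch of the expectation update described above: the
# expected final head count is the heads already observed plus half
# of the flips still to come.
TOTAL_FLIPS = 10_000

def expected_heads(heads_so_far, flips_made, total_flips=TOTAL_FLIPS):
    remaining = total_flips - flips_made
    return heads_so_far + 0.5 * remaining

print(expected_heads(0, 0))  # 5000.0 before any flips
print(expected_heads(1, 1))  # 5000.5 after the first flip comes up heads
print(expected_heads(1, 2))  # 5000.0 after a head and then a tail
```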
Remember, the coin had to come up either tails or heads, and it is by no means surprising that it came up heads. Nevertheless, our expectation now must be different. Probability is only useful in discussing unknown outcomes. It has no role in discussing known outcomes. Information is highly influential, and therefore valuable, in probability.
This happens a lot in baseball too. The Yankees have grossly underperformed this year. We would expect them, based on runs scored and runs allowed, to be ahead of the Red Sox, not behind them. However, they don't get those "flips" back. That error from their true talent level has accumulated, and now we have to adjust our prediction of their overall record downward, even though the team might not be at all worse than our original expectation. Crazy, huh? I think so.
Look at it in terms of a three-game series against Baltimore. We would expect the Yankees to win about 1.8 of the games in that series (a number derived from the patented "pulling a number out of my ass" technique). There's only one problem: you can't win 1.8 games. So after that series, the Yanks are guaranteed to have either overperformed or underperformed. If they underperform, they can't expect to recoup that loss. If they overperform, that's money in the bank.
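The same bookkeeping works for a whole-season projection. The sketch below is purely illustrative, not anything official: projected wins are the wins already banked plus an assumed win probability times the games remaining, using the made-up 0.6 (1.8 wins per 3 games) from above and an invented 50-wins-in-80-games starting point.

```python
# Hedged sketch: projecting a final record once results are "banked".
# The 0.6 win probability (1.8 wins per 3 games) is the post's made-up
# number; the 50 wins in 80 games below are purely illustrative.
SEASON_LENGTH = 162

def projected_wins(wins_so_far, games_played, win_prob, season_length=SEASON_LENGTH):
    remaining = season_length - games_played
    return wins_so_far + win_prob * remaining

# Going 1-2 in the series instead of the "expected" 1.8 wins:
before = projected_wins(wins_so_far=50, games_played=80, win_prob=0.6)  # 99.2
after = projected_wins(wins_so_far=51, games_played=83, win_prob=0.6)   # 98.4
print(before, after, before - after)  # the 0.8-win shortfall never comes back
```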
As with the coin flip, it's also true that the results of the team's games will influence our determination of their true talent. However, the sample needed before this starts having a noticeable effect is really, really large.
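One back-of-the-envelope way to see how slowly results move a "true talent" estimate is a Beta-Binomial update. That model is my choice for illustration, and the prior weight is invented; the point is just that a strong prior barely budges after a short streak.

```python
# Beta-Binomial update (one convenient model, nothing from the post),
# with a prior worth roughly a thousand flips of evidence.
PRIOR_HEADS, PRIOR_TAILS = 500, 500

def posterior_mean(new_heads, new_tails):
    # Posterior mean of the heads probability after the new observations.
    return (PRIOR_HEADS + new_heads) / (PRIOR_HEADS + PRIOR_TAILS + new_heads + new_tails)

print(posterior_mean(0, 0))   # 0.5 before any new evidence
print(posterior_mean(1, 0))   # ~0.5005 after one head
print(posterior_mean(10, 0))  # ~0.505 even after ten straight heads
```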
Anyways, the corollary to all of this is my (only?) favorite John Sterling cliché: you're never as good as you look when you're winning and you're never as bad as you look when you're losing.
So true. In fact, we've just proven it.
(The more theoretically minded will note that, other than perhaps having to adjust our "true talent," these results are still meaningless when we are making an infinite number of flips or playing an infinite number of games. No matter how many heads in a row I get, if the coin is known to be fair, the expected ratio of heads to tails after an infinite number of flips is always exactly 50/50.)
**EDIT** Fixed a really dumb typo.