Saturday, January 30, 2010

It's prediction season... I want to link back to this old post of mine addressing the right way to interpret predicted performance. I'm sure some brilliant writer out there on the Internet can express this concept more clearly than I can, but I've not seen that article yet, so you're stuck with me linking to myself.

Tuesday, January 26, 2010

What caused the "Steroid" Era?

That's the question that J.C. Bradbury asks in this post. If you've followed my thoughts on steroids here, you'll be familiar with his answer. Nonetheless, his post is worth reading as a concise, focused, and effective primer on what happened and why in the 1990s.

Saturday, January 23, 2010

Valuing wins

Baseball Prospectus' Matt Swartz posted a fantastic article this week on the valuation of player contracts in Major League Baseball:
If baseball free agents were in a typical, perfectly competitive market like those you see in the first chapter of your introductory economics textbooks, the price per win would have to be linear. Basic economic theory of perfectly competitive markets would say that anything other than the same price for all wins would create arbitrage opportunities where teams could perpetually trade their way to the top of the league.
Later on:
In the case of baseball free agents, there are two main reasons why the baseline’s assumptions don’t apply. First, these markets aren't thick enough that teams can sign and trade players so easily and quickly swap out players for others like investors can do with shares of Microsoft. There are only so many teams, and there are limits to making this kind of move in general. Second, you can't employ 60 Garret Andersons on a team and suddenly become the best team in baseball. There are only 25 roster spots, and only so many players can realistically get enough playing time to realize their true value.
This is a subject that I've given a lot of thought to lately, to the point where I may actually do some original research on the matter in the near future.

In any case, my intuition is that teams should pay disproportionately more for a seven-win player than for a five-win, three-win, or one-win player. I have a hard time seeing a team trade Alex Rodriguez for ten or twelve players barely above replacement level unless it is also saving money in the deal. I think Matt's reasoning is spot on: there are only twenty-five roster spots. You can't show up with sixty one-WAR players and expect to make the playoffs.
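The roster-spot argument can be made concrete with a toy sketch. This is purely illustrative, with made-up WAR figures; it is not anyone's actual valuation model. It just shows that once only twenty-five players can dress, surplus depth adds nothing, while a star's wins all count:

```python
# Toy illustration (made-up numbers): why wins may be worth
# disproportionately more when concentrated in one player.
ROSTER_SPOTS = 25

def team_war(players):
    """Total WAR of the best 25 players; everyone else can't play."""
    return sum(sorted(players, reverse=True)[:ROSTER_SPOTS])

# Sixty 1-WAR players: only 25 of them fit on the roster.
spread_out = [1.0] * 60
# One 7-WAR star plus 1-WAR players filling the other 24 spots.
star_laden = [7.0] + [1.0] * 24

print(team_war(spread_out))  # 25.0 -- the extra 35 players add nothing
print(team_war(star_laden))  # 31.0 -- the star's wins all count
```

With the same (hypothetical) total talent pool, the concentrated roster wins more games, which is one reason a linear price per win need not hold.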

That being said, I think there are really two questions here:
  1. What do teams pay for wins?
  2. What should teams pay for wins?
Any approach that uses existing contracts as its basis, like Matt's, is going to end up answering question number one. I'm really more interested in question number two. Unfortunately, answering question number one is much easier: we have a whole lot of data on how players performed (and how they were expected to perform) and what they were paid. The question is essentially positive. Question number two is a normative question that involves a variety of assumptions about why teams pay players. I'm not going to delve into those here.

I'm not sure if Matt's article is subscriber only or not, but if it isn't, it is worth your time.

Twenty-five days until pitchers and catchers report!

**EDIT** I had originally misspelled Matt's last name. I've fixed that.

Monday, January 18, 2010

A brief basketball note

I'm not a huge basketball fan, but I do enjoy the occasional University of Michigan game. This was the case on Sunday when I watched Michigan beat the University of Connecticut. It was a very good game and an important win for U of M.


After the game was over, University of Michigan fans rushed the court to celebrate. This is embarrassing. The University of Michigan is not some small-town university that just pulled off a once-in-a-lifetime upset. We are a large university that has no excuse for not being at least a minor contender in any major sport year in and year out.* That the basketball program has suffered as it has over the last decade is shameful.

UConn was ranked 15th. 15th! The University of Michigan itself was ranked that highly earlier this year. This is a Michigan team that is viewed as underachieving! We made the second round of the NCAA tournament last year! More was expected this year. This wasn't some scrappy underdog rising up to take down an elite, once-in-a-lifetime, superstar team. It wasn't even a program in an off year beating a hated rival. It was a non-conference home game against a good-quality opponent. We should expect to beat a team like UConn. Indeed, here is the AP's first sentence in its writeup (emphasis added):
Michigan looked like the team it expected to be while No. 15 Connecticut struggled -- again.
Rushing the court sends the message that you viewed your team as massively inferior to the team you just beat. It says that you view the victory as one of the greatest victories in school history. It is something that should be reserved for wins that define an entire program, not wins that ought to be a matter of course for a team that should be in the NCAA tournament more often than not.

I sincerely hope that Michigan fans, especially the student fans, will learn how to behave as if they root for a team that aspires to be "the leaders and best," not just a bloated, underachieving, has-been in college basketball.

* Here pause and weep for the football program.

Saturday, January 16, 2010

On making predictions

If you're up for reading an economics post from the most verbose writer on the Internet, try this post from Scott Sumner.
Yes, but if we don’t have standards, if we aren’t going to hold people to their words, then what do we really have? Suppose I said; “I predict a major 20% to 30% drop in the S&P500 within the next 4 years. And if it doesn’t happen, but happens sometime later, I should still get credit.” Would you take me seriously? People don’t seem to understand that unless a prediction is both accurate and timely, it really isn’t of much value.
I would phrase this differently. I would say that the timeliness of a prediction is part of its accuracy.

If I predicted every year from 1919 through 2004 that the Boston Red Sox would win the World Series, should anyone give me credit for a successful prediction in 2004 despite being wrong in each and every other year?

The answer, of course, is that it depends. If my predictions were based on an objective model with independently demonstrated accuracy, then, yes, I should be given credit. Of course, this is an extremely unlikely scenario, since we would probably never be able to identify a model that fails as spectacularly as this one as being accurate. Nevertheless, it's the process that's important, not the results, because the process is what we can control ex ante. If I am known to have the correct (or most correct, given available information) prediction process, then the results don't matter; I have made the best prediction I could.*

On the other hand, if my predictions are unsystematic and wildly subjective, then I should be vigorously laughed at for making such predictions. I should receive no credit for a successful prediction in 2004. As the saying goes, "Even a stopped clock tells the right time twice a day." We don't give the stopped clock any credit for this.

I think most people know this intuitively, but I think a couple things happen that cause people to take unsystematic predictions more seriously than they ought to:
  1. We have a cognitive bias that causes us to remember successful predictions more often than unsuccessful ones. Unsuccessful predictions are everywhere. Successful predictions, especially successful predictions of really spectacular events, stand out to us. We then grant undue expert status to the successful predictor, causing us to overweight his analysis in the future.
  2. We are fooled into perceiving patterns and systems where none exist, giving us the illusion that we are operating systematically. For example, there were people who argued that the Yankees would never win a World Series with Alex Rodriguez, for any number of reasons. Most, if not all, of the reasoning behind these predictions involved extrapolating from small sets of data to proclaim large, significant patterns. We see how well that worked out.
So you have to have a system, but you also have to let the system speak for itself. You can't say, "Well, I predicted the Red Sox would win sometime between 2000 and 2003, but they won in 2004... hey, I was pretty close!" If your system did not predict this, then you cannot call it a success because your system provided no useful information ex ante.
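To make the stopped-clock point concrete, here is a toy grading sketch. The data are simplified stand-ins (every non-2004 champion is lumped together as "Other", and the 1994 season with no World Series is glossed over); the point is only that the predictor must be scored over its entire record, not the one year it happened to hit:

```python
# Grade the "stopped clock" predictor over its whole record,
# not just the one season it happened to be right.
years = range(1919, 2005)  # 1919 through 2004 inclusive

# Simplified stand-in for history: "Other" for every champion
# except the 2004 Red Sox. (Ignores 1994, which had no World Series.)
actual_champ = {year: "Other" for year in years}
actual_champ[2004] = "Red Sox"

# The unsystematic predictor: the same pick, every single year.
predictions = {year: "Red Sox" for year in years}

hits = sum(predictions[y] == actual_champ[y] for y in years)
accuracy = hits / len(years)
print(f"{hits} correct out of {len(years)} ({accuracy:.1%})")
# 1 correct out of 86 (1.2%)
```

Scored this way, the 2004 "success" is exactly what a stopped clock earns, which is why the one hit deserves no credit on its own.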

It's important to keep all of this in mind when you hear experts going on and on making wild predictions, whether those predictions are about baseball or economics. We must insist that we operate within an objective, systematic framework or we will find ourselves falling victim to a whole host of epistemological charlatans and stopped clocks.

* Of course, the best way to evaluate a process is by the results it produces. A process that consistently produces poor results must eventually be rejected. The key is that any one outcome of a process is not sufficient evidence. We must have a large sample of unbiased outcomes before we can make a correct determination on the efficacy of a particular process.

**EDIT** See also: Robin Hanson. He's talking about economics, but the lessons are applicable everywhere.

Monday, January 11, 2010

Let's talk some baseball

I'm starting to get my baseball juices flowing again. So here are some notes on some stuff that happened at some point in the past month or three:
  • The Hall of Fame gained a new member: Andre Dawson. I probably would not have voted for Dawson. Indeed, I've written against his candidacy in the past. Still, it isn't an atrocious choice. Nonetheless, the most positive development of this year's voting is that Bert Blyleven is now only five votes shy of enshrinement. It's highly likely he'll make it in next year and pass the title of "Most Clearly Deserving Player Not In The Hall of Fame" to someone else.
  • Mark McGwire admitted to using steroids during his career. I wish Mark would have said this to Congress five years ago. I don't know why he felt he could not. Nonetheless, his statement is one of the better ones you'll read on the subject, keeping in mind all of the problems that plague these sorts of things. Even though it's been suspected for a long time that he used, it was good to see him come forward more or less of his own volition. See also: Rob Neyer's take on the situation.
  • The Yankees traded for Curtis Granderson. There's not much to say: in short, it was, in baseball terms, a good deal for New York, though the nature of it did prolong my apathy by a few more days. I have two essentially incompatible wishes: for the Yankees to win a gazillion World Series and for them to do it with a ton of homegrown players. And since the latter wish is dependent on the former, they might as well do their best at winning as many World Series as they can and let my feelings on the nature of their victories sort themselves out.
  • Ditto for Javier Vazquez, though I will root harder for him than Granderson, given his shoddy treatment in New York the first time around.
  • There are only thirty-seven days until Christmas in February. I cannot even begin to tell you how much I'm looking forward to watching a baseball game with no implications other than the sublime meaning in taking the extra base, turning a double play, hitting a triple into the gap, painting the corner with a wicked curveball, drinking an extra beer, putting Stadium Mustard on a grilled Kosher hot dog, and doing nothing all weekend but basking in baseball, glorious baseball.