Saturday, January 8, 2011

The Paradoxes of Rational Choice Theory

Rational Choice Theory is the current basis for most of microeconomics.  Its central feature is rationality: the idea that people prefer more to less and, when given a choice between options, will invariably pick the one that satisfies more of their preferences than any other available option.  Intuitively, the theory makes a great deal of sense.

The obvious objection is that people don't bother to calculate all the costs and benefits of every situation.  This is explained away by the "pool table idea": just as you don't need to be a physicist to play pool, yet the balls still follow the laws of physics, you don't need to be an economist to make rational choices.

The other typical objection is that some people seem to make decisions that, no matter how you cut it, go against their own best interests.  The explanation for this is that since we can't possibly know a person's individual preferences, we must assume that they are rationally pursuing them.  By this point, Rational Choice Theory becomes a tautology - after all, if people are all rationally pursuing their own preferences, preferences which can't be known, it's impossible to empirically test whether or not they are actually doing so, and Rational Choice Theory is true only because it's defined to be.

That said, there are a few paradoxes which point us towards weaknesses in Rational Choice Theory.

St. Petersburg Paradox

Imagine you are offered the following bet:  A coin will be flipped. If it is heads, you get nothing. If it is tails, you get $2 and the opportunity to flip again.  If on the second flip, you get a heads, you keep the $2. If you get tails, you get $4.  Next flip, $8, then $16, then $32, and so on.

How much would you pay to take part in this gamble?

According to Rational Choice Theory, you should pay anything less than the expected value of the gamble. After all, everyone prefers more to less, and if you kept playing the game over and over again, your average winnings would converge on the expected value.

To calculate the expected value, all you need to do is multiply the percentage chance of winning a certain amount by that amount and then sum all possibilities. The result will be your average winnings.

So, we know we have a 50% chance of getting heads on the first flip.  So that's 50% times nothing.  Easy so far.  Then there's a 50% chance of getting tails. But wait. 50% of the times you get tails, you'll go on to win again, so we can only add 50% of that initial 50% times the $2 winnings.  Easy enough, .5 x .5 x $2 = $0.50.  But then there's the next set:
.5 x .5 x .5 x $4 = .125 x $4 = $0.50
and the next:
.5 x .5 x .5 x .5 x $8 = .0625 x $8 = $0.50
and so on:
.5 x .5 x .5 x .5 x .5 x $16 = .03125 x $16 = $0.50
.5 x .5 x .5 x .5 x .5 x .5 x $32 = .015625 x $32 = $0.50
.5 x .5 x .5 x .5 x .5 x .5 x .5 x $64 = .0078125 x $64 = $0.50
and so on ad infinitum. Every term adds another $0.50, so the series diverges: the expected value is an infinite sum of $0.50s, i.e., infinite.
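For the skeptical, the arithmetic above can be checked with a short script. This is just my own sketch (the function name is mine): every term in the series contributes exactly $0.50, so the partial sums grow without bound.

```python
def partial_expected_value(n_terms):
    """Sum the first n_terms terms of the St. Petersburg series.

    Term k is the payout $2^k weighted by the probability 0.5^(k+1)
    of seeing k tails followed by a run-ending heads -- which works
    out to $0.50 every single time.
    """
    total = 0.0
    for k in range(1, n_terms + 1):
        prob = 0.5 ** (k + 1)   # k tails, then the heads that ends the run
        payout = 2 ** k         # $2, $4, $8, ...
        total += prob * payout  # always exactly $0.50
    return total

print(partial_expected_value(10))   # 5.0
print(partial_expected_value(100))  # 50.0 -- no upper limit
```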

So, the rational person would bet literally any amount of money to take part in this gamble.
But people don't bet any amount of money on this.  They bet quite a bit less.

There are several proposed explanations for this, among them utility curves, risk aversion, and people's awareness that there is only a finite amount of money in the world.

Perhaps the most convincing explanation I've read for this paradox is a paper by Benjamin Hayden on the "median heuristic." The paper sets forth the idea that people don't always rely on Rational Choice Theory (or perhaps never rely on it), and suggests that in this particular paradox, people decide by picking a bet close to the median return of the gamble.  This is very different from betting the expected value.  The expected value is roughly the mean return of the bet, and it is significantly larger than the median because of the skew created by the occasional absurdly long streak of tails.  The other notable thing about the median is that it is far easier to estimate accurately.  Finding the mean requires summing all the results and then dividing by their count; finding the median just requires taking a stab at the middle number, and with larger samples, if you're off, it's not by much.  Considering that the mind often seems to prize efficiency over accuracy and precision, this heuristic seems especially plausible.

The median heuristic predicts that people will offer somewhere around $1.70 for the gamble, which the authors found closely matches what people actually do.
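To see why the median behaves so differently from the mean here, we can simulate the gamble as described above. This is my own illustration, not the procedure from Hayden's paper: rare long streaks of tails drag the sample mean upward, while the median stays down near $0-$2.

```python
import random
import statistics

def play_once(rng):
    """Play the gamble as described: heads ends the run, each tails doubles the pot."""
    winnings = 0
    while rng.random() < 0.5:  # call this outcome "tails"
        winnings = 2 if winnings == 0 else winnings * 2
    return winnings

rng = random.Random(42)  # fixed seed so the run is repeatable
payouts = [play_once(rng) for _ in range(100_000)]

print("mean:  ", statistics.mean(payouts))
print("median:", statistics.median(payouts))
```

With 100,000 plays the sample mean typically lands well above the median (and keeps creeping up with more plays), while the median sits at $0-$2 -- much closer to what people actually offer.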

Allais Paradox

Now, having lost all of your money making bad bets on the St. Petersburg Paradox, you are offered a new game.

There are two urns, A and B.
A contains 99 black marbles and 1 red marble.
B contains 90 black marbles, 5 white marbles, and 5 red marbles.
If you pull a black marble, you get $1 million.  If you pull a white marble, you get $5 million.  If you pull a red marble, you get nothing.

You can only play once, so going for the expected value doesn't make sense in this case. It's solely a matter of personal preference. Which do you choose?

Now, onto a completely new game.
There are two new urns, C and D.
C contains 9 black marbles and 91 red marbles.
D contains 5 white marbles and 95 red marbles.
Again, if you pull a black marble, you get $1 million.  If you pull a white marble, you get $5 million.  If you pull a red marble, you get nothing.

Which do you choose?

One of my economics professors posed this question to a class I was in.  If I remember correctly, A and D was the most popular choice, followed by B and D, then B and C.  A and C was the least popular.

But Rational Choice Theory predicts that everyone will choose A and C or B and D, and that no one will choose A and D or B and C. Let me show why.

90% of A and B are identical.  In both cases, 90% of the time you get $1 million.  The only real question is: for the remaining 10%, do you want a 50% chance of $5 million or a 90% chance of $1 million?

Likewise, 90% of C and D are identical.  In both cases, 90% of the time you get nothing.  The only real question is: for the remaining 10%, do you want a 50% chance of $5 million or a 90% chance of $1 million?
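This decomposition can be verified numerically. Below is a quick sketch (the urn encoding and function name are mine): strip out the 90% chunk the paired urns share, renormalize what's left, and the leftover conditional gambles are identical across the two games.

```python
# Each urn as a mapping of payout -> probability.
urns = {
    "A": {1_000_000: 0.99, 0: 0.01},
    "B": {1_000_000: 0.90, 5_000_000: 0.05, 0: 0.05},
    "C": {1_000_000: 0.09, 0: 0.91},
    "D": {5_000_000: 0.05, 0: 0.95},
}

def leftover_gamble(urn, shared_payout, shared_prob=0.90):
    """Remove the shared (payout, probability) chunk and renormalize the rest."""
    rest = dict(urn)
    rest[shared_payout] = rest.get(shared_payout, 0) - shared_prob
    return {p: round(q / (1 - shared_prob), 10)
            for p, q in rest.items() if q > 1e-12}

# A and B share a 90% chance of $1 million; C and D share a 90% chance of $0.
print(leftover_gamble(urns["A"], 1_000_000))  # 90% chance of $1 million
print(leftover_gamble(urns["B"], 1_000_000))  # 50% chance of $5 million
print(leftover_gamble(urns["C"], 0))          # same leftover as A
print(leftover_gamble(urns["D"], 0))          # same leftover as B
```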

If you prefer a 50% chance of $5 million, you should always prefer a 50% chance of $5 million.  If you prefer a 90% chance of $1 million, you should always prefer a 90% chance of $1 million.  The fact that these preferences don't hold consistently shows that Rational Choice Theory is flawed in some way.

I haven't read an explanation for this paradox that I have found satisfactory.  One idea that I think might explain it somewhat is that people think less in terms of probabilities and more in terms of "is or is not."  Under that hypothesis, people would choose A over B, because A is nearly a sure thing while B is less sure.  Meanwhile, people would choose D over C, because while both are almost certain not to pay out, D gives a larger benefit if it actually does. For those of you who pointed out that this is really similar to rank-dependent expected utility, please hold your fire until after the end of the Ellsberg Paradox, where I readdress this issue.

I would like to emphasize that the above stated hypothesis is completely untested.

Ellsberg Paradox

Interesting side note: the Ellsberg Paradox is named after Daniel Ellsberg, who wrote about it in his economics Ph.D. dissertation.  Ellsberg is far better known, however, as the military analyst who released the Pentagon Papers in 1971.

For this example, imagine an urn.  The urn contains 90 marbles: 30 are black, and the other 60 are some mix of yellow and red, with the ratio between the two unknown.  You can choose one of two wagers: either a million dollars if a black marble is pulled, or a million dollars if a yellow marble is pulled.

Now imagine the same urn.  90 marbles, 30 black, the other 60 red and yellow. You still don't know the proportion, but it's the same as before.  You can choose one of two wagers: either a million dollars if a black or red marble is pulled, or a million dollars if a yellow or red marble is pulled.

Which did you choose?

As you can probably imagine by now, what people usually choose does not reflect what rational choice theory predicts.  Rational Choice Theory* says that if you pick black in the first example, then you must think there are fewer than 30 yellow marbles, so that in the next example you would pick black or red.  Likewise, if you picked yellow, you must think there are more than 30 yellow marbles, and so you'd pick yellow or red.

*Technically, it's Expected Utility Theory.  That said, they operate under the same basic considerations, and for the purposes of this blog post, will be assumed to be the same.  Real economists, please don't chew my head off for this.

What you would not pick is black for the first choice and yellow and red for the second choice, since that assumes there are fewer than 30 yellow marbles in the first instance but more than 30 in the second.

Here's the explanation I have: people prefer the sure choice over the unknown.  So most people will choose black in the first set, because they know the chance of getting one million dollars is exactly 1 in 3, while if they chose yellow it could be anywhere between zero and 2 in 3.

Likewise, if people choose yellow and red in the second set, then they have a 2 in 3 chance, while if they choose black and red it could be anywhere between 1 in 3 and certain.
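Those two ranges can be made concrete with a few lines of arithmetic. In this sketch (the names are mine), y is the unknown number of yellow marbles, anywhere from 0 to 60:

```python
# Win probability of each wager as a function of the unknown yellow count y.
# The urn holds 30 black marbles plus 60 split between yellow (y) and red (60 - y).
def win_probs(y):
    black, yellow, red, total = 30, y, 60 - y, 90
    return {
        "black":         black / total,           # always exactly 1/3
        "yellow":        yellow / total,          # anywhere from 0 to 2/3
        "black or red":  (black + red) / total,   # anywhere from 1/3 to 1
        "yellow or red": (yellow + red) / total,  # always exactly 2/3
    }

for wager in ("black", "yellow", "black or red", "yellow or red"):
    lo = min(win_probs(y)[wager] for y in range(61))
    hi = max(win_probs(y)[wager] for y in range(61))
    print(f"{wager:14s} ranges from {lo:.3f} to {hi:.3f}")
```

Black and yellow-or-red are the two wagers with pinned-down odds (1/3 and 2/3), which is exactly why the ambiguity-averse chooser picks that pair, even though the pair is inconsistent under Rational Choice Theory.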

The trouble, then, is why people sometimes do not choose black for the first instance and yellow and red for the second. I'm stumped. I guess there is some compromise between risk aversion and the desire for optimal payoff (what Rational Choice Theory predicts), but I can't think of anything that would accurately predict what people do.


Note: the following is information poorly understood by the author. Its validity and accuracy are questionable.
Now, in 1986, Uzi Segal, then an assistant professor at the University of Toronto and now a professor at Boston College, wrote a paper that attempts to explain the Ellsberg Paradox and, less directly, the Allais Paradox.

I have read the paper, entitled "The Ellsberg Paradox and Risk Aversion: An Anticipated Utility Approach," but I must admit that I don't yet understand the math behind it. Therefore, I apologize in advance for the inevitable mistakes that follow.  The layman's version goes something like this:

Risk aversion is the preference for a certain value, even if it is lower, over a higher but uncertain expected value.  Ambiguity aversion is the preference for known risks over unknown risks.  According to Segal, ambiguity aversion and risk aversion are essentially the same thing within the realm of Anticipated Utility Theory*.

From my understanding of the paper, the theory is not dissimilar to my poorly stated hypothesis for the Allais Paradox and the other hypothesis I gave for the Ellsberg Paradox. Simply put, people tend to regard chance less in terms of probability and more in terms of certain and uncertain.  I realize this is something of a cop-out, and sometime in the future, once I have understood the math behind Segal's paper, I intend to write another post outlining rank-dependent expected utility.  Until then, I hope this post has done an adequate job of outlining some of the sketchy parts of Rational Choice Theory.

*Anticipated Utility Theory, now known as rank-dependent expected utility, provides an explanation for why people engage in the seemingly contradictory behavior shown above in the Allais and Ellsberg Paradoxes.  Its addition to Prospect Theory resulted in Tversky and Kahneman's 1992 paper on Cumulative Prospect Theory.  The development of the theory, an important advancement in behavioral economics and a strong alternative to Rational Choice Theory, resulted in Kahneman winning the 2002 Nobel Prize.  Tversky would likely have shared it had he not died in 1996. If you ever get the chance, watch Kahneman's Nobel lecture.
