Repetition Economics: The Story of the Hunter, the Mammoth, and The Wolves

Imagine you’re an early human. When out hunting for food you spot a woolly mammoth in a vulnerable position. Your family hasn’t eaten in days. Much of the tribe is desperate for food as well. Do you attack?


The Mammoth.

What would go through an early human’s mind to make this decision?   Obviously, bringing down the beast would be a great quality of life bonus.1 The mammoth would provide an enormous amount of food for your family and tribe. Your reputation as an important member of society would increase. In terms of society at the time, you would be rich. 

However, woolly mammoths are extremely large, dangerous, and strong.  The chances of getting injured, or even killed, are not inconsequential.  If you break a bone and somehow still manage to make it back to the tribe you will become dependent on others for the next few months.  Your reputation as a contributing  member of society will fall.  Others will look less favorably upon you.  Your societal wealth, at least for the near term, would be poor. 

Of course you could end up dead as well so “poor” may not be such a bad outcome.

Repetition also plays into this decision.  Hunting is your way of life.  The decision to hunt a woolly mammoth isn’t a one-off event.  You’re going to be faced with this choice multiple times a month, multiple times a year, and thousands of times in your life.  If there is a 1% chance of being killed by the mammoth, repeatedly trying to take one down alone guarantees future death within a few years.  At a minimum you’re pretty much guaranteed to break a bone within the first year.  Clearly repetition matters to the decision.

The Wolf.

After encountering the mammoth, you come across a deer, and successfully secure a meal for the family for the day.  The deer isn’t the mammoth but still a good catch. One which will feed the tribe and increase your standing in it. However, on the way back to the tribe you come across a strange new animal.


Wolves are not indigenous to your tribe’s land.  You’ve never seen one before, and neither has anyone else.  But due to changes in neighboring regions, wolves are migrating and have just entered into your region for the first time. 

The wolf slowly approaches while you’re carrying the deer carcass.  It looks hungry, but having never seen a wolf before, you can’t really be sure.  How do you handle this situation? 

Assuming the wolf behaves like other animals you may offer it the deer — or at least part of the deer — to distract the wolf so it will leave you alone.  Alternatively, you could ignore it and hope it is not aggressive.  It’s your deer and you don’t want to lose part of your catch. 

Will you ever see this strange animal again?  Maybe. Maybe not. 

Life as an early human is hard. How do people actually make these decisions?

Prospect Theory

Human decision making is quite a complicated problem. One of the leading frameworks for how people make decisions is called Prospect Theory. It was developed by Daniel Kahneman and Amos Tversky (K&T) in the 1970’s and 80’s.

Kahneman and Tversky were not economists, but psychologists. They developed their theories by giving test subjects “games” and evaluating their answers for consistency.

Through these games, they determined people value losses and gains differently.  Losses “hurt” more than gains, and therefore, when people view situations involving risk, they are likely to make decisions that don’t conform with what expected value would predict.

As an example, they found when given the following choices:

Game 1:

  • 50% chance to gain $1000
  • 100% chance to gain $500

Most people choose the second option of the sure thing2.  The expected value of each option is identical:

  • 50% * $1000 + 50% * 0 = $500
  • 100% * $500 = $500

Yet when it comes to potential gains, people preferred the stable choice by a wide margin.
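The expected-value arithmetic can be checked in a couple of lines (a sketch in Python, using the dollar amounts from the game):

```python
# Expected value of each option in Game 1.
risky_ev = 0.50 * 1000 + 0.50 * 0   # 50% chance of $1000, otherwise nothing
sure_ev = 1.00 * 500                # guaranteed $500

print(risky_ev, sure_ev)  # both come out to 500.0
```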

Game 2:

Interestingly, they took the same game, but reduced the guaranteed $500 payoff until they found the point where people were just as likely to take the risky gamble as the guaranteed payoff. They found this point at $370.3

This type of preference is an example of risk aversion. For example, a guaranteed $450 has a lower expected value than a 50/50 chance to win $1000 ($450 versus $500). But people on average still prefer the guaranteed $450.

Game 3:

They also checked to see what win/loss odds people were willing to take on a 50/50 coin flip.  They found that in order to get someone to risk losing $100 on tails, people needed to win $200 on heads.4
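Taken at face value, the 2:1 demand means people turn down bets with a solidly positive expected value (a quick check, with the amounts from the text):

```python
# 50/50 coin flip: win $200 on heads, lose $100 on tails.
flip_ev = 0.5 * 200 + 0.5 * (-100)

print(flip_ev)  # 50.0: a clearly favorable bet, yet this is the minimum people would accept
```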

Equations

Kahneman and Tversky state this behavior is because people are risk averse in terms of negative outcomes.  They then take that idea and expand it to identify that people misjudge probabilities, especially at the extremes. Essentially they treat a 2% chance more like a 10% chance, and they treat a 98% chance like a 90% chance (example only).

They get into some heavy math as they evaluate their results of how people judge probabilities, but it is summed up as:

“π(p) + π(1 − p) < 1 (where π(p) is the probability weighting in prospect theory)”

(via Wikipedia; Baron, Jonathan (2006), Thinking and Deciding)

p is the probability of the event happening. In their view a rational decision maker would weigh the sum of p and (1 − p) as equal to the whole.  This makes sense: the probability of something happening, plus the probability of it not happening, should equal 1.

But people don’t behave this way. In actual practice, they do a very poor job evaluating those probabilities and actually weigh the full set of outcomes below the whole. This is what the “π(p)” is for: to show the probability is being transformed.

I absolutely think they are correct.   But something is being missed here.  To me, their findings exemplify the difference between the arithmetic and geometric average.

Geometric Mean Hiding in Kahneman and Tversky’s Tests

Their “tests” don’t translate easily to a series of compounding bets. There isn’t enough information to interpret how the bets would compound with repetition, but it is still possible to see the correlation.  I’m going to apply a concept Kahneman and Tversky developed called the “anchoring heuristic” to evaluate these bets from a geometric point of view.  Simply put, the anchoring heuristic means people often anchor to data they see in front of them.  In this case I believe most people when they see a 50/50 bet to win a certain amount of money, automatically assume it is a “bet” (win what you are putting up), and they “anchor” their current wealth to the amount being bet. Therefore we convert:

Game 1 Conversion :

  • 50% chance to gain $1000
  • 100% chance to gain $500

To

  • 50% chance to double your money
  • 100% chance to gain 50% of your money

This conversion shows a clear preference for the second option.  The guaranteed option is a constant 50% geometric return every time.  The first option is only a 41% geometric return (2^0.5 x 1^0.5 ≈ 1.41, i.e. 141% of starting wealth).  The second option is clearly superior if the game is repeated and compounded.
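The per-round growth factors of the converted game can be sketched as follows (assuming, as above, that the bet repeats and compounds):

```python
# Growth factor per round, anchored to current wealth.
risky_growth = (2.0 * 1.0) ** 0.5   # geometric mean of doubling or staying even
sure_growth = 1.5                   # guaranteed +50% every round

print(round(risky_growth, 3))  # 1.414, i.e. a ~41% geometric return
print(sure_growth)             # 1.5, a 50% geometric return
```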

Game 2 Conversion:

Here K&T looked to figure out where people switched their preferences from guaranteed gains to a risky bet. I find it interesting that K&T had to reduce the guaranteed payoff to $370 to entice people to switch to the risky bet.  From the framework shown above, the geometric return of the 50/50 bet to win $1000 or $0 is $414 per game.  Somewhat similar.
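The $414 figure is the geometric mean of the two anchored end states (a sketch; the $1000 wealth anchor is the assumption carried over from the Game 1 conversion):

```python
# Anchored at $1000, the 50/50 bet ends at $2000 (win) or $1000 (no win).
geo_mean_wealth = (2000 * 1000) ** 0.5
certainty_equivalent = geo_mean_wealth - 1000

print(round(certainty_equivalent))  # 414, in the neighborhood of K&T's observed $370
```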

Game 3 Conversion:

Finally, the required payouts to play a coin flip.  Geometrically, a loss of 50% requires a gain of 100% to stay even. Hence they found people required a loss of half the win to play. Risk $100 to win $200, with the anchored assumption of currently holding $200.  Once again, the findings of Kahneman and Tversky map surprisingly well to someone focused on maximizing their geometric return.
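The break-even claim is a one-liner: a 50% loss followed by a 100% gain leaves wealth exactly where it started.

```python
# Lose half, then double: the geometric mean of the two multipliers is exactly 1.
break_even = (0.5 * 2.0) ** 0.5

print(break_even)  # 1.0
```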

Equation Conversion

And finally, K&T’s formula for probability distortion has an enormous assumption built into it. Their formula, π(p) + π(1 − p) < 1, adds the two potential outcomes together.  However, if the results of the bet are multiplicative, the geometric average entails the multiplication of the two probabilities,

p x (1 – p)

This equation is ALWAYS less than one.  The theory doesn’t need the π() function to represent people’s “probability distortion” when the bets are understood to be multiplicative.  All geometric compounding sequences confirm the “<1” concept. 
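That p x (1 − p) is always below one (its maximum is 0.25, at p = 0.5) is easy to verify numerically:

```python
# Scan p from 0 to 1 and confirm p * (1 - p) never reaches 1.
products = [p / 100 * (1 - p / 100) for p in range(101)]

print(max(products))  # 0.25, reached at p = 0.5
```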

System 1 and System 2

Kahneman and Tversky explain that our brains work with two “systems”.  System 1 works very quickly.  It would be analogous to our gut reaction.  System 2 thinks slowly, and can understand complex problems.  System 2 is supposed to be fairly rational.  System 1 is not.  K&T’s overall philosophy says people use System 1 far more than they realize, and often make mistakes when they do.

Here are my thoughts:

Humans evolved a “gut reaction” that matches probabilities to the geometric average, not the arithmetic average, because life is about repetition.  This is why the test subjects often got these problems “wrong”, and why economists believe we have behavioral biases.

Mammoth Hunting.

Let’s think about this for a minute.  Most decisions in life are not “one-offs”.  Especially our ancestors’ decisions.  The question of “do I hunt this woolly mammoth or not” was not a once in a lifetime decision.  It would have been repeated hundreds of times in a lifetime.  It’s the concept of: play “Russian Roulette” with 1/100 odds once, you will probably win.  Play the game every day, you have almost no chance of surviving the year. 
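The Russian Roulette arithmetic can be sketched directly (using the 1% per-attempt odds from the text):

```python
# Probability of surviving repeated attempts that each carry a 1% chance of death.
p_survive = 0.99

print(p_survive ** 1)              # one attempt: 0.99
print(round(p_survive ** 365, 4))  # daily for a year: about 0.0255
```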

Most decisions in life repeat.  Just as the early human had to repeatedly hunt, today, we are still repeating many decisions. 

Estimate your chances of getting into a wreck when running a yellow light.  They are really low. You might get through the light before it changes to red.  There is a slight delay before the perpendicular red light turns green.  And nobody is able to start accelerating right away when the light turns green.  The “expected downside” is really low versus a benefit of getting to your destination sooner.  So why don’t we do it more often?

It’s because we know we are going to be faced with these types of decisions thousands of times in our life.  Each time you try to push a yellow light, you’re very likely going to be just fine.  But try to run it 10,000 times, and you’re virtually guaranteed to get into a very nasty accident.

It’s the future repetition that keeps us from risking the yellow light, just as the expected future repetition of hunting mammoths kept our ancestors cautious. 

I believe when K&T offered these games to people, the subjects’ gut reaction was based on a quick interpretation of risk from a geometric perspective, not the arithmetic perspective against which their “rationality” is judged.  Most of these tests were not presented in a geometric manner, and don’t have clear geometric reference points to judge the correct answer.  But that doesn’t mean people aren’t going to use anchoring or some other life experience to fill in those gaps when they process the choice.

Many of their questions also imply ruin as a potential outcome, since the games often posed the possibility of a “zero” result.  “Zero” outcomes have a geometric average of zero.  You should be very risk averse to repeating a zero game.

Three Keys to Decision Making

There are three variables5 in decision making:

  1. How much can you win and how often?
  2. How much can you lose and how often?
  3. How many times will the game repeat?

K&T tested the first two very thoroughly.  But as far as I know they didn’t study the third much, if at all.

Now you’re going to say, “but K&T were only testing one-off games.  They never indicated the games would repeat.”  True, they didn’t say the games would repeat.  But what is our “System 1” mind going to assume about expected repetition?

The Wolves

Nobody knows the future.  Nobody.  If you’re presented with a new choice that you have never seen before, does it make sense to think you will never see this choice again? Doesn’t it make more sense to assume the choice will repeat?


The hunter encountering the wolves has never seen them before.  This may be a one-off encounter.  But we all know that is very unlikely.  The wolves will be back, many times.  If the lone hunter had ignored the wolves, he might have been ok this time.  But after enough encounters of ignoring the wolves, they would have finally attacked, and the hunter would be no more.

Life is unexpected and things change.  Always.  The only safe assumption about the future is that any newly encountered decision will likely happen again.  It will repeat. After millions and millions of years of evolution, the human species — and all life forms — would have encountered plenty of new “wolves”.  If they did not handle them properly the species would have been wiped out. 

The only “safe” way to approach a new decision is to assume that the decision will repeat again.

I would assume K&T asked their subjects multiple questions. In the absence of other information, the subjects’ gut reaction should guide them toward assuming multiple repetitions, especially since they were, in fact, being asked question after question. I believe that’s exactly what happened.

The Geometric Average

I believe this is why K&T’s findings above mathematically map well to the geometric average (or log utility if you like utility theory).  This is also why K&T’s prospect theory curve looks just like a logarithm on the right half of their chart. 

The “risk aversion” researchers see in subjects is not an aversion to risk, it’s an evolutionary safety mechanism to ambiguous knowledge about future repetition.  The subjects did not give the wrong answer.  The researchers didn’t realize people are wired to consider repetition in any and all decisions.

Ergodicity Economics and Ole Peters

I used to think this concept was novel.  It’s not, but it’s not widespread.  Ole Peters came up with a similar idea about 12 years ago and called it Ergodicity Economics.  He has been working hard to open economists’ eyes to these flaws.  Replace my emphasis on repetition with time, and you pretty much have his work and theories (his are more thorough). Others besides Ole Peters have noticed this relationship. He wasn’t really the first to contemplate this either.

Others will probably also notice that this view isn’t much different than the leading economic theory of the early and mid-twentieth century: Expected Utility Theory. I brought up utility theory in my post on trend following, as it was Bernoulli’s work on the St. Petersburg Paradox which founded the theory.

Without getting into too much detail, utility theory says that everyone has their own “utility” that determines the way they react to decisions. It’s a bit like the “π()” in Prospect Theory discussed above (although not exactly).  My description above can be summed up as saying that everyone’s utility matches a natural logarithm.  If everyone’s “utility” matches a natural logarithm, they are subconsciously working to maximize their geometric return.

On some levels, this doesn’t seem like a very large leap in knowledge.  But in many ways it is monumental and can change the way we view all sorts of modern puzzles and paradoxes.  It’s a super powerful idea.

However, it’s still got problems.

K&T’s Breakthrough

I believe K&T’s most important breakthrough with their research was identifying that people clearly don’t follow a static utility theory.  See the left side of this chart.  A proper utility function would keep on heading down. 

K&T found that people’s decision making fundamentally changes when the outcomes flip from gains to losses. Their utility function seems to flip.

Let’s look at another game to understand why.

Game 4:

  • 50% chance to lose $1000
  • 100% chance to lose $500

This is the exact opposite of Game 1 shown above.6  If utility theory were correct, people would stay consistent and prefer the stable, constant choice of losing $500 over the risky bet.  If pure geometric maximization were correct, people would choose the higher geometric return, which is very much implied to be the constant loss of $500.

However, most people choose the randomness. They want the risky option of losing $1000 or losing nothing at all.  Their behavior flipped with the sign of the bet, and utility theory can’t handle this.  This behavior has been replicated in tests over and over again.  We are risk averse toward favorable bets, and risk seeking toward unfavorable ones.
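Applying the same anchoring conversion to Game 4 (a sketch; the $1000 wealth anchor is my assumption, as in the earlier conversions) makes the contradiction with pure geometric maximization explicit:

```python
# Anchored at $1000: the sure loss leaves $500; the gamble leaves $0 or $1000.
sure_growth = 500 / 1000                             # constant 0.5x per round
risky_growth = ((0 / 1000) * (1000 / 1000)) ** 0.5   # geometric mean includes the zero

print(sure_growth, risky_growth)  # 0.5 vs 0.0: the sure loss "should" win, yet people gamble
```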

This in a nutshell is why prospect theory, and all of behavioral economics, was invented: to deal with the fact that “biases” seem to change.  If everyone’s “bias” were constant, utility theory would work fine.  But biases do seem to change, and nobody has figured out why.

But let’s think about it some more…

Back to the Wolves

You decide to throw the wolves a bone, literally, and give up the deer to them.  As hoped, they leave you alone and start to eat the deer.  Oh well, there is still some food at home.  Hopefully you don’t see them again. 

But the next day you catch another deer, and on your way back the wolves show up again.  It was pretty clear from the way they devoured the deer last time that they can be vicious animals, so you don’t really want to mess with them.  You lose the deer to the wolves again and leave.  There’s not as much food at home, but there is still some.

Same thing happens again the next day, losing the deer to the wolves. 

And then on the 4th day, when the wolves show up to the hunt again, you’ve had enough.  The food has run out at home.  It’s pretty clear if you keep losing your kills to the wolves you’re going to starve.  You can’t keep repeating this process.  And so on day 4 you decide to roll the dice and fight them off.

You Can’t Repeat Guaranteed Losses Without Dying

I’ve called this post Repetition Economics because everything in life is about repetition.

Repetition, Repetition, Repetition.

If given the choice to repeat a guaranteed loss, or to repeat a risky decision of a greater loss versus no harm, you end up with two outcomes over time:

  • Repeated guaranteed loss:  dead
  • Risky larger loss versus safe:  mostly dead, but sometimes safe.

Here is the key.  If our hunter had never stood up to the wolves, if he had repeated the guaranteed loss every single day, he would have died of starvation.

If you repeatedly lose $1000 from your bank account each day, you will end up broke.  The same is effectively true if you repeatedly lose 2% of your bank account each day.
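The compounding 2% loss can be sketched directly (the $10,000 starting balance is a hypothetical figure):

```python
# Repeatedly losing 2% of the remaining balance, every day for a year.
balance = 10_000.0
for _ in range(365):
    balance *= 0.98

print(round(balance, 2))  # only a few dollars left after a year
```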

If you repeatedly lose 2% of your blood each day, you’re going to die.

If you repeatedly lose 2% of your social status each day, you’re going to become an outcast.

You Can’t Repeat Guaranteed Losses and Survive.

Now, if you do take the risky decision of losing either 4% of your blood or no blood each day, you will probably die faster.7 However, if many in the species are faced with this decision, at least some will survive.
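The blood-loss comparison can be sketched as a small Monte Carlo simulation. The parameters are all hypothetical: 100 units of blood, death below 60 units, a 30-day horizon, and no regeneration.

```python
import random

random.seed(0)

def survives(risky: bool, days: int = 30) -> bool:
    """True if blood stays above the (hypothetical) death threshold."""
    blood = 100.0
    for _ in range(days):
        if risky:
            # 50/50: lose 4% of remaining blood, or lose nothing.
            blood *= 0.96 if random.random() < 0.5 else 1.0
        else:
            blood *= 0.98  # guaranteed 2% loss every day
    return blood > 60.0

trials = 10_000
sure_rate = sum(survives(False) for _ in range(trials)) / trials
risky_rate = sum(survives(True) for _ in range(trials)) / trials

print(sure_rate)   # 0.0: the guaranteed loss kills every single time
print(risky_rate)  # a meaningful fraction of the risk takers survive
```

Under these assumptions the guaranteed loss is certain death, while the gamble leaves some survivors, which is the footnote's point about absorption barriers.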

If everyone in the species takes the repeated guaranteed loss the species will go extinct. 

Evolutionarily, all species must have evolved to take gambles in the face of guaranteed losses.  Those that took guaranteed losses would not be here any longer.

At some point, the wolves would get them.

Repetition Economics

I believe repetition is the key to understanding decision making.  If you include repetition into the equation, human behavior becomes rational.  Biases disappear.  They are not needed. 

Repetition is the key to understanding human decision making.  It is also the key to understanding investing.

Repetition. Repetition.  Repetition.

*I’ve tried to simplify Prospect Theory and Daniel Kahneman and Amos Tversky’s ideas down as much as possible. To learn more about their work see Michael Lewis’s “The Undoing Project” for a high level, easy to read view; Daniel Kahneman’s “Thinking Fast and Slow” for more detail, or go straight to the actual studies, like this one from 1979.

1-I am aware it’s unlikely individual people hunted a mammoth by themselves. But it’s much more fun to write about a mammoth than a smaller animal.

2-See “Thinking Fast and Slow” chapter 26, Problem 3.

3-See “The Undoing Project”, Chapter 10, second page.

4-See “Thinking Fast and Slow” chapter 26, page 284 in my book. About 2/3rds of the way into the chapter. Also see “The Undoing Project”, Chapter 10, second page.

5-There are really four, the fourth being the dynamics of the repetition, i.e., multiplicative or additive or something else. But talking about that now would just muddy the point of the post.

6-See “The Undoing Project”, Chapter 10

7-The key to understanding this mathematically is recognizing the existence of an absorption barrier in nearly every real life system. If you are losing money, there is no such thing as a half cent. Once you get to 1 cent and lose, you’re broke. If your bank account stays below your rent payment for 30+ days, then you will also be broke (or at least late on the rent). The same is true in many things in life. This means that mathematically the risky bet produces a higher average “geometric return” in real life, even though on paper in an idealized world it doesn’t. I’ll discuss this and the importance of a regeneration rate (how fast you heal yourself) in a future post.
