May 12, 2020

Updating your models

"The stock market is crazy right now!" "Stock prices make no sense!" "The market has lost all connection with reality." "The prices of stocks are wrong."

 No. 

The models that lead to these conclusions must be wrong.

I don't know the correct model, but here is what I do know:

I know that the market cannot have lost connection with reality. It is part of reality. It must be the model that has lost its connection with reality.

The prices can't be wrong. They just don't represent or measure what your model said they do. It's your model that's wrong, not the prices.

If the prices seem to make no sense, it can only be because they don't fit your sense-making model. If you had a model that predicted the current prices (maybe not with certainty, but with, say, 40% probability), you'd say the prices made sense. If your model assigns them a probability of 2% or 0.1%, it's likely your model that makes no sense in the current environment.

Do you think the stock market is crazy right now? No, sorry. It's your model that's crazy. And if you keep insisting that your model is right and reality is wrong, then it's you that's crazy.

## Creating and updating models

Reality is complicated, and people make models to make sense of it. They do it by creating a simplified view of reality that they believe represents reality well enough to understand what is happening or predict what will happen. 

To simplify reality, modelers must decide what components of reality have significant causal effects. And they remove from consideration those that they believe have little impact.

Then they propose simplified, sensible mechanisms that link cause and effect.

But they could be wrong about any of this. 

Things that they think are important might not be. Something that they think is unimportant might turn out to be. They may have missed some causal links and overestimated others.

When a model--however sensible--fails to match reality, it's the model that's wrong, not reality.

## The coin flipping game

Consider a coin-flipping game. It costs $1 to bet heads or tails. If you guess right, you get $2.00. If you guess wrong, you get nothing.

After examining the coin and seeing nothing strange about it, you start with a model that says that both heads and tails are equiprobable.  I see nothing wrong with that.

You flip the coin. It comes up heads. Do you change your model to say that heads is more likely? Probability 101 says, no. You've learned in school that the fact that it landed heads does not change the future probability distribution.  Your 50:50 model predicts heads half the time, and you got heads, so there's no need to change your model.

But Reality 101 says you should change your model and prefer heads.

What???

## Max and Betty flip coins
Consider Modeler Max and Bayesian Betty. Max studies the problem, develops a model, and sticks with it--unless there's some set of circumstances that the model cannot explain. His model predicts getting heads half the time, so there's certainly no need to change the model the first time he sees heads.

Like Max, Bayesian Betty starts with a probability distribution--the [Prior probability](https://en.wikipedia.org/wiki/Prior_probability)--based on her abstract model. But unlike Max, she considers the model as tentative. She adjusts the probability each time she makes an observation and uses that adjusted probability for the next round.
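Betty's procedure can be sketched in a few lines of Python. This is a minimal sketch, assuming a Beta-Bernoulli model with a uniform Beta(1, 1) starting prior (one reasonable choice among many, not the only model Betty could use):

```python
# A minimal sketch of Betty's updating, assuming a Beta-Bernoulli model.
# The uniform Beta(1, 1) starting prior is an illustrative assumption;
# Betty could start with any prior that reflects her initial model.

def posterior_prob_heads(heads, tails, prior_heads=1.0, prior_tails=1.0):
    """Posterior predictive probability that the next flip is heads."""
    return (prior_heads + heads) / (prior_heads + prior_tails + heads + tails)

print(posterior_prob_heads(0, 0))  # 0.5   -- before any flips, 50:50
print(posterior_prob_heads(1, 0))  # 0.667 -- after one head, heads is favored
print(posterior_prob_heads(1, 1))  # 0.5   -- a tail pulls it back
```

With a stronger prior (say, Beta(50, 50)), a single flip barely moves the estimate. The point isn't the particular numbers; it's that every observation moves them at all.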

Max argues that it does not matter whether he bets heads or tails. Since the probability is 50:50, he can bet heads every time, tails every time, or make his bet based on a (pseudo) Random Number Generator (RNG).

Betty argues that it does matter what she bets. Whether her initial bet pays off or not is a matter of luck. But she believes that updating her model on each coin flip will give her an edge--if there's an edge to be had.

Max laughs at her for falling prey to the Gambler's Fallacy. He even gives her a link to the [Wikipedia article](https://en.wikipedia.org/wiki/Gambler%27s_fallacy). She is unmoved.

It turns out that the Gambler's Fallacy is itself a fallacy, clung to steadfastly by people who have fallen prey to what Nassim Taleb calls the [Ludic fallacy](https://en.wikipedia.org/wiki/Ludic_fallacy).

## What does it take to change your mind?
Suppose you flip a coin and get three heads in a row. Most people will say, "My model predicts I will get three heads (or tails) in a row quite often." (The probability of three flips all coming up heads is 1/8, and all tails is 1/8. So the probability of three the same in three flips is 1/4.)

So they don't update the model.

What about ten heads in a row? 

People schooled in the Gambler's Fallacy will say: it makes no difference. Heads and tails are both equally probable.

Stop and think for a minute. How many heads in a row would it take to convince you that your model was wrong? 

Think of a number.

Would it be 20? 50? 100? How about 1,000,000? 

I hope that you would not say, "No, the coin is fair, and no amount of evidence would convince me otherwise. My model lets me calculate the probability of 1,000,000 heads in a row, or even 10^200. These outcomes are unlikely but still possible, so my model is valid."

I hope you would pick an actual number and say, "When I see this many heads in a row, I will agree that it's not 50:50. I don't know why, but I'm now convinced my model is wrong."

Let's say your number is 100.

Then, I would ask: "What about 99? What's magical about 100?" 

If that number of heads in a row would convince you that heads and tails were not equally likely, why would one less not persuade you?
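There's nothing magical about any particular number, and a little Bayesian arithmetic shows why. Here's a sketch, assuming (purely for illustration) a 99% prior that the coin is fair and a 1% prior that it's rigged to always land heads:

```python
# A sketch of why there's no magic number of heads. The 99%/1% prior
# split and the "rigged coin always lands heads" hypothesis are both
# illustrative assumptions.

def prob_fair_after_heads(k, prior_fair=0.99):
    """Posterior probability the coin is fair after k consecutive heads."""
    likelihood_fair = 0.5 ** k      # a fair coin produces k heads in a row
    likelihood_rigged = 1.0 ** k    # the rigged coin always produces heads
    numerator = prior_fair * likelihood_fair
    denominator = numerator + (1 - prior_fair) * likelihood_rigged
    return numerator / denominator

for k in (0, 3, 7, 10, 20):
    print(k, round(prob_fair_after_heads(k), 6))
# 0  0.99
# 3  0.925234
# 7  0.436123
# 10 0.088161
# 20 0.000094
```

Belief in fairness slides smoothly toward zero as the run grows. There is no single flip where the model suddenly goes from right to wrong; there's just a point past which you can no longer take it seriously.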

## Theory and practice

In theory, a coin that is assumed to be fair is, in fact, fair. 

In practice, a coin that is assumed to be fair may not, in fact, be fair.

A coin is a physical object, not an ideal one. A real coin may not act like an ideal coin. If it's not perfectly balanced, it may land on one side more often than the other.

Whatever process is used to flip the coin is also physical. Although the procedure is intended to make the outcome random, it might introduce some bias.

And maliciousness or trickery is possible. A real coin doesn't appear just because someone assumes it exists; someone chose it. Whoever picked it might have rigged the coin or the flipping mechanism.

In theory, the Gambler's Fallacy is a fallacy.

In practice, it's a false fallacy.

## Expected payoffs

Modeler Max uses a Random Number Generator to choose his bets. His expected winnings are zero, whether the coin is biased or not. Max bets heads roughly half the time and tails roughly half the time. So fair coin or not, he can expect to break even.

Bayesian Betty uses her Bayesian model to guide her bets. If the coin is fair, Betty will break even. But if the coin is unfair, Betty's model will reflect that, and she will win. The more unfair the coin, the more she'll win.
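A quick simulation makes the difference concrete. The 60:40 bias, the run length, and Betty's bet-the-favored-side rule are all assumptions I've made for this sketch:

```python
import random

# A sketch comparing the two strategies on an assumed biased coin
# (60% heads). The bias and the 10,000-flip run are illustrative.
random.seed(0)
P_HEADS, N_FLIPS = 0.60, 10_000

max_winnings, betty_winnings = 0, 0
heads_seen, tails_seen = 1, 1   # Betty's Beta(1, 1) pseudo-counts

for _ in range(N_FLIPS):
    flip = 'H' if random.random() < P_HEADS else 'T'

    # Max bets at random: zero expected profit, biased coin or not.
    max_bet = random.choice(['H', 'T'])
    max_winnings += 1 if max_bet == flip else -1

    # Betty bets whichever side her updated counts favor.
    betty_bet = 'H' if heads_seen >= tails_seen else 'T'
    betty_winnings += 1 if betty_bet == flip else -1

    # Betty updates her model after every flip; Max never does.
    if flip == 'H':
        heads_seen += 1
    else:
        tails_seen += 1

print("Max:", max_winnings)      # hovers around 0
print("Betty:", betty_winnings)  # roughly (0.6 - 0.4) * 10,000 = +2,000
```

If the coin really is fair, Betty's expected winnings are zero, the same as Max's. Her updating costs her nothing; it only pays off when there's an edge to find.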

## The probability that fair is fair
Max's model does not consider this important detail: what _is_ the probability that a game like coin flipping, roulette, or dice is fair?

Max assumes that if it's said to be fair, it's fair.

Intuitively, we know this is the wrong thing to assume.

Suppose a guy walks up to you in a bar and offers the following bet: he'll flip a coin that he's pulled out of his pocket ten times. If it comes up tails even once in those ten tries, he'll pay you $100. Only if it comes up heads ten times in a row will you have to pay him $100.
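If you take the fairness of his coin at face value, the arithmetic says this is nearly free money. Here's a sketch; the candidate values for the probability that the coin is rigged are assumptions for illustration:

```python
# Expected value of the bar bet *if* the coin is fair -- which is
# exactly the assumption in question.
p_ten_heads = 0.5 ** 10                   # 1/1024, about 0.1%
ev_if_fair = (1 - p_ten_heads) * 100 + p_ten_heads * (-100)
print(round(ev_if_fair, 2))               # +99.80: nearly free money

# Now weight that by an assumed probability that the coin is rigged
# to always land heads (a rigged coin costs you $100 every time).
for p_rigged in (0.0, 0.25, 0.5, 0.75, 1.0):
    ev = p_rigged * (-100) + (1 - p_rigged) * ev_if_fair
    print(p_rigged, round(ev, 2))         # flips negative near p = 0.5
```

The value of the bet hinges entirely on your prior about rigging, a variable the fair-coin model doesn't even contain.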

Would you take the bet? 

I wouldn't.

I have a prior that says: "If anyone in a bar offers you a bet that you think he's unlikely to win, the probability of him winning is 100%."

What if he lets you inspect the coin?

My prior says, "The probability is 100% that he is better at concealing the way he's going to win his rigged bet than I am at discovering it." So, no.

I don't think you would either.

## Back to the market
So what would explain the current state of the stock market relative to the state of the economy? Where might our models be wrong?

(Again, they must be wrong if they don't predict reality. We need to discover what models might predict what we see.)

The coin-flipping case illustrates a common failing: what assumptions are we making that are not true?

I'll offer a couple of possibilities in the next post. 

I'll lead with one that I heard from Andrew Yang today on his podcast interview with Sam Harris: 

"And you can see it in what the stock market is saying where when people are announcing record layoffs, their prices go up, though the stock values go up because investors know that if you can shrink your workforce, then the returns on capital will be higher."

"Yes," you might say, "but those laid-off workers are consumers. The fact that they are laid off **will** reduce their buying power. As a result, these companies **will** do less well? Investors **should** realize that. So the stock prices **should** go down, not up."

If that's what you are thinking, my answer is: "Each of those 'shoulds' and 'wills' is a prediction of your model, not necessarily a fact about reality. Some might be correct. But if reality doesn't match your model, your model is wrong."

