May 12, 2020

Updating your models

"The stock market is crazy right now!" "Stock prices make no sense!" "The market has lost all connection with reality." "The prices of stocks are wrong."

 No. 

The models that lead to these conclusions must be wrong.

I don't know the correct model, but here is what I do know:

I know that the market cannot have lost connection with reality. It is part of reality. It must be the model that's lost its connection with reality.

The prices can't be wrong. They just don't represent or measure what your model said they do. It's your model that's wrong, not the prices.

If the prices seem to make no sense, it can only be because they don't fit your sense-making model. If you had a model that predicted (maybe not with certainty, but with say, 40% probability) the current prices, you'd say the prices made sense. If your model says 2% or 0.1%, it's likely your model that makes no sense in the current environment.

Do you think the stock market is crazy right now? No, sorry. It's your model that's crazy. And if you keep insisting that your model is right and reality is wrong, then it's you that's crazy.

## Creating and updating models

Reality is complicated, and people make models to make sense of it. They do it by creating a simplified view of reality that they believe represents reality well enough to understand what is happening or predict what will happen. 

To simplify reality, modelers must decide what components of reality have significant causal effects. And they remove from consideration those that they believe have little impact.

Then they propose simplified, sensible mechanisms that link cause and effect.

But they could be wrong about any of this. 

Things that they think are important might not be. Something that they think is unimportant might turn out to be. They may have missed some causal links and overestimated others.

When a model--however sensible--fails to match reality, it's the model that's wrong, not reality.

## The coin flipping game

Consider a coin-flipping game. It costs $1 to bet heads or tails. If you guess right, you get $2.00. If you guess wrong, you get nothing.

After examining the coin and seeing nothing strange about it, you start with a model that says that both heads and tails are equiprobable.  I see nothing wrong with that.

You flip the coin. It comes up heads. Do you change your model to say that heads is more likely? Probability 101 says, no. You've learned in school that the fact that it landed heads does not change the future probability distribution.  Your 50:50 model predicts heads half the time, and you got heads, so there's no need to change your model.

But Reality 101 says you should change your model and prefer heads.

What???

## Max and Betty flip coins
Consider Modeler Max and Bayesian Betty. Max studies the problem, develops a model, and sticks with it--unless there's some set of circumstances that the model cannot explain. His model predicts getting heads half the time, so there's certainly no need to change the model the first time he sees heads.

Like Max, Bayesian Betty starts with a probability distribution--the [Prior probability](https://en.wikipedia.org/wiki/Prior_probability)--based on her abstract model. But unlike Max, she considers the model as tentative. She adjusts the probability each time she makes an observation and uses that adjusted probability for the next round.

Max argues that it does not matter whether he bets heads or tails. Since the probability is 50:50, he can bet heads every time, tails every time, or make his bet based on a (pseudo) Random Number Generator (RNG).

Betty argues that it does matter what she bets. Whether her initial bet pays off or not is a matter of luck. But she believes that updating her model on each coin flip will give her an edge--if there's an edge to be had.

Max laughs at her for falling prey to the Gambler's Fallacy. He even gives her a link to the [Wikipedia article](https://en.wikipedia.org/wiki/Gambler%27s_fallacy). She is unmoved.

It turns out that the Gambler's Fallacy is itself a fallacy, clung to steadfastly by people who have fallen prey to what Nassim Taleb calls the [Ludic fallacy](https://en.wikipedia.org/wiki/Ludic_fallacy).
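Betty's procedure can be sketched concretely. The Beta-Binomial model below is a standard way to do the kind of updating she describes; the uniform Beta(1, 1) prior and the function names are my illustrative assumptions, not details from the game itself:

```python
# A minimal sketch of Bayesian Betty's updating, using a Beta-Binomial
# model. The uniform Beta(1, 1) prior is an assumption: it says any
# bias between all-heads and all-tails is equally plausible at the start.

def update(alpha, beta, heads, tails):
    """Return posterior Beta parameters after observing more flips."""
    return alpha + heads, beta + tails

def prob_heads(alpha, beta):
    """Posterior mean: Betty's current estimate of P(heads)."""
    return alpha / (alpha + beta)

a, b = 1, 1                          # start uniform
a, b = update(a, b, heads=1, tails=0)
print(prob_heads(a, b))              # one head nudges the estimate to 2/3

a, b = update(a, b, heads=9, tails=0)
print(prob_heads(a, b))              # ten straight heads: 11/12
```

Note that Probability 101 isn't contradicted here: its 50:50 answer is conditional on the coin being known fair. Betty simply refuses to condition on that. She treats fairness itself as uncertain, so every flip is evidence.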

## What does it take to change your mind?
Suppose you flip a coin and get three heads in a row. Most people will say, "My model predicts I will get three heads (or tails) in a row quite often." (The probability of three flips all heads is 1/8, and all tails is 1/8. So the probability of three the same in three flips is 1/4.)

So they don't update the model.

What about ten heads in a row? 

People schooled in the Gambler's Fallacy will say: it makes no difference. Heads and tails are both equally probable.

Stop and think for a minute. How many heads in a row would it take to convince you that your model was wrong? 

Think of a number.

Would it be 20? 50? 100? How about 1,000,000? 

I hope that you would not say, "No, the coin is fair, and no amount of evidence would convince me otherwise. My model lets me calculate the probability of 1,000,000 heads in a row, or even 10^200. These are unlikely but still possible, so my model is valid."

I hope you would pick an actual number and say, "When I see this many heads in a row, I will agree that it's not 50:50. I don't know why, but I'm now convinced my model is wrong."

Let's say your number is 100.

Then, I would ask: "What about 99? What's magical about 100?" 

If that number of heads in a row would convince you that heads and tails were not equally likely, why would one less not persuade you?
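There's a way to see why no particular number is magical: each head multiplies the evidence by the same factor. The sketch below compares a fair coin against one rigged alternative; the "always heads" alternative and the 1-in-1,000 prior are illustrative assumptions, not numbers from the text:

```python
# Sketch: how belief that the coin is rigged grows with a run of heads.
# Assumptions (mine, not the post's): the rigged alternative always
# lands heads, and the prior probability of rigging is 1 in 1,000.

def posterior_rigged(n_heads, prior_rigged=0.001):
    """P(rigged | n heads in a row), rigged = 'always heads' vs. fair."""
    p_data_if_rigged = 1.0              # two-headed coin: heads for sure
    p_data_if_fair = 0.5 ** n_heads     # fair coin: probability 2^-n
    numerator = prior_rigged * p_data_if_rigged
    return numerator / (numerator + (1 - prior_rigged) * p_data_if_fair)

for n in (3, 10, 20):
    print(n, posterior_rigged(n))
```

Under these assumptions, three heads barely move the needle, ten make rigging roughly a coin flip, and twenty make it near certain. Each extra head just doubles the odds, which is why "why 99 and not 100?" has no good answer: there's no threshold, only steadily accumulating evidence.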

## Theory and practice

In theory, a coin that is assumed to be fair is, in fact, fair. 

In practice, a coin that is assumed to be fair may not, in fact, be fair.

A coin is a physical object, not an ideal one. A real coin may not act like an ideal coin. If it's not perfectly balanced, it may land on one side rather than the other.

Whatever process is used to flip the coin is a physical process. Although it's intended to make the outcome random, the procedure might introduce some bias.

And maliciousness or trickery is possible. A fair coin doesn't appear just because someone assumes one exists. Whoever supplied the coin might have rigged it, or the flipping mechanism.

In theory, the Gambler's Fallacy is a fallacy.

In practice, it's a false fallacy.

## Expected payoffs

Modeler Max uses a Random Number Generator to choose his bets. His expected winnings are zero, whether the coin is biased or not. Max bets heads roughly half the time and tails roughly half the time. So fair coin or not, he can expect to break even.

Bayesian Betty uses her Bayesian model to guide her bets. If the coin is fair, Betty will break even. But if the coin is unfair, Betty's model will reflect that, and she will win. The more unfair the coin, the more she'll win.
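A quick simulation makes the payoff difference vivid. The 60% heads bias, the flip count, and Betty's simple "bet the side seen more often" rule (a crude stand-in for her full Bayesian update) are all my illustrative assumptions:

```python
import random

# Sketch: Max bets at random; Betty bets whichever side she's seen
# more often (a crude stand-in for a full Bayesian rule). The 60%
# heads bias and 10,000 flips are illustrative assumptions.
# Each bet costs $1 and pays $2 on a win, so each round nets +1 or -1.

def play(p_heads, n_flips, seed=0):
    rng = random.Random(seed)
    max_total = betty_total = 0
    heads_seen = tails_seen = 0
    for _ in range(n_flips):
        flip = "H" if rng.random() < p_heads else "T"
        # Max: random bet, so expected winnings are zero vs. any coin.
        max_total += 1 if rng.choice("HT") == flip else -1
        # Betty: bet the side observed at least as often so far.
        betty_bet = "H" if heads_seen >= tails_seen else "T"
        betty_total += 1 if betty_bet == flip else -1
        heads_seen += flip == "H"
        tails_seen += flip == "T"
    return max_total, betty_total

print(play(p_heads=0.6, n_flips=10_000))  # Betty ends far ahead
```

With a biased coin, Max still breaks even in expectation while Betty converges on the favored side; with a fair coin, her updating costs her nothing. That's the sense in which she has an edge "if there's an edge to be had."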

## The probability that fair is fair
Max's model does not consider this important detail: what _is_ the probability that a game, like coin flipping, or roulette, or dice, is fair?

Max assumes that if it's said to be fair, it's fair.

Intuitively, we know this is the wrong thing to assume.

Suppose a guy walks up to you in a bar and offers the following bet: he'll flip a coin that he's pulled out of his pocket ten times. If it comes up tails even once in those ten tries, he'll pay you $100. Only if it comes up heads ten times in a row will you have to pay him $100.

Would you take the bet? 

I wouldn't.

I have a prior that says: "If anyone in a bar offers to make a bet that you think he's unlikely to win, the probability of him winning is 100%."

What if he lets you inspect the coin?

My prior says, "The probability is 100% that he is better at concealing the way he's going to win his rigged bet than I am at discovering it." So, no.

I don't think you would either.
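The bar-bet intuition can be made concrete. At even $100 stakes, the naive fair-coin model makes the wager look wonderful, and it only turns bad once your prior that the coin is rigged passes roughly 50%; the prior values tried below are illustrative assumptions:

```python
# Sketch: expected value of the bar bet under a prior on rigging.
# "Rigged" here means the coin always comes up heads; the prior values
# below are illustrative assumptions (my prior in the post is ~100%).

def bet_expected_value(prior_rigged):
    p_ten_heads_fair = 0.5 ** 10                  # 1/1024
    p_lose = prior_rigged + (1 - prior_rigged) * p_ten_heads_fair
    return 100 * (1 - p_lose) - 100 * p_lose      # win $100 or lose $100

for p in (0.0, 0.25, 0.5, 1.0):
    print(p, bet_expected_value(p))
```

With a zero-rigging prior the bet is worth about +$99.80, which is exactly why the guy offers it. At my prior of roughly 100%, it's worth about -$100. The model you bring to the bet, not the coin, determines whether it looks like free money.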

## Back to the market
So what would explain the current state of the stock market relative to the state of the economy? Where might our models be wrong?

(Again, they must be wrong if they don't predict reality. We need to discover what models might predict what we see.)

The coin-flipping case illustrates a common failing: what assumptions are we making that are not true?

I'll offer a couple of possibilities in the next post. 

I'll lead with one that I heard from Andrew Yang today on his podcast interview with Sam Harris: 

"And you can see it in what the stock market is saying, where when companies are announcing record layoffs, their stock prices go up, because investors know that if you can shrink your workforce, then the returns on capital will be higher."

"Yes," you might say, "but those laid-off workers are consumers. The fact that they are laid off **will** reduce their buying power. As a result, these companies **will** do less well. Investors **should** realize that. So the stock prices **should** go down, not up."

If that's what you are thinking, my answer is: "Each of those 'shoulds' and 'wills' is a prediction of your model, not necessarily a fact about reality. Some might be correct. But if reality doesn't match your model, your model is wrong."



May 9, 2020

Models and sense-making

A friend said this in a forum we both post to:

>The stock market is behaving unusually for the current economic climate in the US / the world.  Why is it going up - while businesses and the economy looks to be degrading.    This doesn't make much sense.

To say that the market is "behaving unusually" means that you don't have the right model and you're using the model, not reality, as the standard.

If you had a more accurate model, you'd see that it was "behaving correctly."

To say, "This does not make much sense" means that reality does not conform to your model. If you had a useful model, what was happening would make sense.

Responding with "my model is wrong" starts the search for a correct model. Any other choice defends the current model.

Lots of people are saying the same kinds of things because they have the same sorts of models. This is confirmation bias for their models. A better strategy is acknowledging that the model is wrong, not reality, and finding a model that makes reality comprehensible.

What's a better model? I don't know. I have some clues. But that's a different discussion.

This is about what I think is the right move when "things don't make sense" or "seem unusual" or "seem crazy."

You take that as evidence that your model is wrong, and--if you have the time and interest and maybe skill--you look for a better model.
## Does a better model exist?

Either (a) there can be a model that explains the current behavior of the market based on a set of measurable parameters, or (b) no such model can exist. 
 
To choose (b) is equivalent to saying, "there is no causal basis for what now exists." I think that's wrong. 
 
It could be that the causal structure is so complicated that modeling is theoretically possible, but pragmatically impossible. 
 
That might be true.

My belief without foundation is that things that happen have causes, and you can build a model that abstracts at least some of that information and provides insight--even if you can't make an exact prediction, you can still make a model. 
 
So I will assert that a model can exist. 

But that's just my belief about the orderly nature of the universe.
 
I don't know the model. Maybe nobody knows it, but I would not be surprised to learn that some people have figured out some parts of it and are making money. I would also expect that other people have made lucky guesses and are making money, too, but that will come to an end when their luck runs out.
 
Or maybe not. 
 
I am trying to argue that when tempted to say, as in the Vox article, "the market makes zero sense" and "nobody knows what's going on," the better answer is "I don't understand the market because I don't have a model" or "the people I know don't have good models."
 
I don't claim I know the right model. But I have collected some ideas from other people and written about them, and using those models, my view is closer to "I see some reasons that this might be happening" rather than "this makes no sense." 
 
In any case, my point here, to reiterate: when something seems crazy, it's your model that's wrong. Not reality.

May 8, 2020

Disinfecting my mind: a case study

Earlier today, I wrote a post, Defending our minds, because the day before a friend posted a link to an article.

Don’t read the article

I encourage you not to read it. It will make you stupider.
Even reading the headline and the parts I’ve excerpted—each followed by an explanation of what’s wrong with it—risks making you stupider.
Sorry. But I thought it might be worth following my prescriptive post with a practical example of the harm that can come from wasting time reading a shitpost, and the harm that might come from sharing it.
Before the article, the facts.
He is Neil Ferguson, the leader of the Imperial College COVID-19 Response Team. There are about 30 people on the team.
The Response Team has so far produced 30 papers. The most significant, concerning government policy, is probably Report 9 - Impact of non-pharmaceutical interventions (NPIs) to reduce COVID-19 mortality and healthcare demand with 32 co-authors, and Ferguson as the lead.
He’s a member of the Scientific Advisory Group for Emergencies. There are 22 other members, most with relevant science backgrounds.
At the end of March and the first week in April, a woman named Antonia Staats traveled across London to spend time with him. He’s 51 and married. She’s 38 and married.
The article in question comes from ZeroHedge. If you know ZeroHedge, none of this will surprise you.
Stop for a minute.
What’s the likelihood that this article will make you any wiser or better informed about things that matter?
I didn’t stop. Someone who I like and respect had posted it. I should have known better.
The lurid headline misleads in many ways. The model wasn’t his, but the product of a team. It was one of several models, almost all of which said the same thing: without mitigation, the UK would see the kinds of problems that Italy, Spain, and France were experiencing, and China had experienced.
The paper proposes that the response might be mitigation or the stronger measures of suppression.
> suppression will minimally require a combination of social distancing of the entire population, home isolation of cases, and household quarantine of their family members. This may need to be supplemented by school and university closures.
Was the model correct? We don’t know. But the number of deaths in the UK was growing exponentially—as it had in Italy and Spain before those countries took drastic action. So it was probably the reality as much as the model that prompted action.
The story tells us that:
> Ferguson, who resigned from his Government advisory position on Tuesday, predicted that up to 500,000 Britons and 2.2 million in the US would die without measures. Somehow, Sweden - which enacted virtually no measures to mitigate the virus, has a lower per-capita mortality rate than the UK, Italy, Spain, France, Belgium and the Netherlands - all of which enacted lockdown measures.
First off, Sweden did enact measures, although not as many as were enacted in the UK. But never mind that detail. What conclusion are we to draw from this paragraph? That enacting “virtually no measures” reduced the per-capita mortality rate?
Why don’t we compare Sweden with some nearby countries that are a bit more demographically and culturally similar to see how well Sweden is doing with “virtually no measures.”
Our World In Data provides an interactive graphic here (and you can go there and play with some other countries to test your own theories). The screen capture is from May 7. YMMV.
Jeez. Now Sweden doesn’t look so good. But let’s throw the UK back into the mix and think for just a second. Really only one second.
[Screen capture, May 7: Our World in Data chart of COVID-19 deaths per million, Sweden vs. nearby countries]
If a group of countries clusters around 50 deaths per million WITH measures, and a similar country has 5-6 times as many deaths with measures, and if the UK has more deaths than that with measures, what would likely happen in the UK if there were no measures?
a) Half as many, because measures are bad
b) Many more, because measures are good
I could go on. And on. The article is a hot mess of innuendo. What effect does the fact that the lead researcher for a modeling group can’t keep his dick in his pants have on the validity of his model?
I’m sorry I read the article and damaged my mind.
I’m sorry that I then needed to spend time finding the facts so that I could undo some of the damage.
I’m not sorry that I wrote this. I hope you’re not sorry if you read it.

Defending our minds

Your mind is a delicate device, evolved over nearly 14 billion years.
Spend time reading, listening to, and interacting with that which will make your mind—and you—better.
Avoid reading, listening to, and interacting with that which will make you—and your mind—worse.
Share only things that will make others better.
Avoid mental contamination.

Everything changes your mind

Whenever someone reads something or listens to something or has a conversation with someone, it changes the contents and structure of their mind.
Of course.
Some interactions are neutral. Most interactions change minds for better or worse.
If our minds are made worse, we are made worse.
If our minds are made worse, we need to repair the damage, or our minds will remain worse.
We can keep ourselves from things that make us worse.
We can avoid connecting others with things that make them worse.

What makes us worse

An interaction that delivers false information makes us worse.
An interaction that delivers misleading information makes us worse.
An interaction that adds useless information makes us worse.
An interaction that misplaces importance makes us worse.
An interaction that produces unhelpful emotion makes us worse.
An interaction that distracts us from productive activity makes us worse.
An interaction that creates confusion makes us worse.

What makes us better

An interaction that corrects an error makes us better.
An interaction that helps us represent knowledge more simply makes us better.
An interaction that adds useful knowledge makes us better.
An interaction that helps us organize knowledge makes us better.
An interaction that helps us think more clearly makes us better.
An interaction that helps us focus on what we deem important makes us better.
An interaction that produces helpful emotion makes us better.
An interaction that creates the optimal balance between order and disorder makes us better.
An interaction that builds our cognitive skills—attention, perception, reasoning—makes us better.

What does not make us better makes us worse

Any interaction costs us the time that we spent in that interaction.
If it does not make us enough better to offset that cost, then we are worse.
The harm done by an interaction will persist until we spend additional time undoing the harm.
If we do not have the necessary skills, we may not be able to undo the harm.

Avoiding harm is better than repairing it

When we carry out an interaction that causes us mental harm, we’ve wasted the time spent in the interaction, we’ve suffered the harm itself, and we will have to spend more time undoing that harm.
When we expose others to a source of harm, we risk causing them to waste time on the interaction, the harm to themselves, and the time that they must spend undoing the harm.

May 7, 2020

Reality is never wrong

Short form:
Your model of reality can be wrong. But reality can never be wrong.
Listen carefully the next time you hear anyone (especially you) say that something “should not have happened” or “is crazy” or “makes no sense.”
These are just ways of saying that “my model is right and reality is wrong.”
You are batshit crazy if you insist that your model is right and reality is wrong.
Sorry. Nope.
Reality is the standard against which models are judged.
Your model is not the standard against which reality is judged.
H/T to Adam Robinson for that insight.
Long form (because this is important, and I’m trying to retrain myself)
If you believe the world operates in an orderly way according to the laws of physics, say—even if it’s probabilistic rather than deterministic—the world is the way it is because that’s what the laws + initial conditions say. So if the world doesn’t behave the way you expect, it’s your model of reality that’s wrong (does not include all the consequences of physical law) and if you say ‘what’s happening is crazy’ you’re saying, in effect “my model is right, and the idea of laws of physics is wrong.” That’s kind of nuts.
If you believe that the world operates in an orderly way—but that God sometimes jumps in and overrides the rules—the world still can’t be nuts. It’s the way it is because that’s the way God wants it. Either you don’t understand God, or She doesn’t know what She’s doing. Sorry, I’m betting on God. Again, in effect, you’re saying: “My model is right, and God is wrong.” Also kind of nuts.
To anyone who asserts that you can escape by deciding, “your mental model is that the world is nuts,” sorry, but that’s nuts. If “the world is nuts” is your model, does that mean that sensible things make no sense? Or does it mean that anything that happens is allowed by your model? In either case, your model is useless. Choosing a useless model is kind of nuts. Even a wrong model is sometimes right, but a useless model is—well, useless.
Any time the world seems nuts, it’s because your model of the world does not help you understand why what actually happened should have happened.
But it always should have happened because it did happen.
The error is in your model, not the universe.
The correct response to the world behaving in unexpected ways is: “My model is wrong.”
You should, at a minimum, correct your model by changing your Bayesian priors, and at a maximum throw the model out in favor of one that predicted the world as well as your model (aggregate score, not every particular) and also predicted what actually happened that your model mispredicted.

May 3, 2020

The posting habit

It’s May 3rd in Maine, though May 2nd, near the International Date Line, my notional location as far as 750words.com is concerned. My draft last night was about 350 words long before I edited it, and I’ve got an hour to get to 750.
It’s not quite 6:00 AM where my body’s at, and 10:56 PM in my 750words nominal location, and that gives me an hour to write another 300 words, now.
I’ll wrap up today, writing a little about Alexey Guzey and James Clear. I was going to write about Tim Ferriss, but I’ll save him for another time. And I’ll do deeper dives into all of them a bit later.
> I’m an independent researcher with background in Economics, Mathematics, and Cognitive Science. My biggest intellectual influences are Scott Alexander and Gwern.
Anyone who cites Scott and Gwern is OK in my book.
He’s in my thoughts this morning (and yesterday, and the day before) because he wrote a blog post: Why You Should Start a Blog Right Now that reminded me why I should go back to blogging daily.
He’s also got a collection of Tweet-Sized Insight Porn, and if I’m not careful, I might spend the next hour reading it.
James Clear is the author of Atomic Habits: Tiny Changes, Remarkable Results. I heard him interviewed on Sam Harris’s Making Sense podcast. It was not the first time I’ve heard him interviewed, but this one convinced me to put his book on my reading list and to mine it for knowledge.
Here’s a summary of the summary of his book from Four Pillar Freedom’s book summary.
> If you’re having trouble changing your habits, the problem isn’t you. The problem is your system. Bad habits repeat themselves not because you don’t want to change, but because you have the wrong system for change.
Among the things that hooked me is his definition of habit as “solutions to problems we have to solve repeatedly,” his observation that habits are context-dependent, and his Four Laws of Behavior Change. Again from Four Pillar Freedom:
> Any habit can be broken down into a feedback loop that involves four steps: cue, craving, response, and reward. The Four Laws of Behavior Change are a simple set of rules we can use to build better habits. They are (1) make it obvious, (2) make it attractive, (3) make it easy, and (4) make it satisfying.
Clear gives a further exposition on his blog and explains nicely how the “feedback loop” leads to the Laws.
So I finished my 750. And beyond that, I realized that I can improve the “blogging habit” I wrote about in the post Pay the price that I wrote yesterday. That’s to give myself a small psychological reward as I make each change in Grammarly. Cue, craving, response, reward.

Pay the price

Yesterday I published three posts, and another around 1 AM—technically today, but I’ll count it as yesterday. They were the first things I’d published in fifteen days. What changed?
The answer, I think, was facing my fears.
My mind tries to keep me safe by guiding me away from paths that lead to discomfort and keeping me on comfortable paths. Reading is comfortable. Writing is comfortable. Deciding to publish a post is the beginning of a road to discomfort.
The discomfort begins when I decide to start the process of publishing. The post’s not done, of course. It’s never done. I can always do more. So I’m basically giving up.
Then I have to run Grammarly to spell-check and grammar-check it. No joy in that.
Copy/pasting it into blogger isn’t hard, and not exactly uncomfortable, but there’s no pleasure in it, either.
Then I have to decide whether to tell my friends and family that I’ve published a post. If I tell them, I’m bragging. And they might not like it. Or they might not read it.
The only way to get a post—like this one—done is to accept discomfort as the price for producing something. Some Future Me might appreciate what I’ve created, but I’m the one that has to endure the discomfort.
Fortunately, I’ve made peace with Past and Future Me and no longer resent doing things for his sake.
Accept the unsatisfactoriness of life. I’ve learned that lesson—or ones a lot like it—before. Then I’ve forgotten them. Then I’ve had to rediscover the same lesson again.
What can I do to make this the last time that I have to rediscover it?
On my left arm, I’ve got a tattoo that says, “I’m not dead yet.” Maybe I need one for my right arm? If I did that, what would it say? Life is unsatisfactory? Memento mori?
Adam Robinson, a guy I discovered recently, quotes Rudyard Kipling on an episode of the Tim Ferriss podcast: “If you did not get what you want, it’s a sign either that you did not seriously want it or you tried to bargain over the price.”
So maybe what I want on my right arm is “Pay the price.”
Maybe I’ll end up like Leonard Shelby, the guy in Memento, who has lost his ability to make new memories and has the things that he wants to remember tattooed.
What else would I tattoo?

May 2, 2020

Panic, over the hill

We all tell stories about ourselves. I do, anyway. We need them to make sense of ourselves and the world. I do, anyway.
We have more than one story. Faced with a set of circumstances, we tell ourselves the most helpful story. I do, anyway.
We tell ourselves stories that define who we are or that define who we are not. I do, anyway.
And there are stories that we don’t tell about ourselves. And sometimes those stories are the truest ones.
Sometimes I’m afraid, and my stories account for that fact. But my stories didn’t account for how often and how completely I’ve been paralyzed by fear. How could I tell that story when in the moment I didn’t feel any fear? Anxiety, yes, sometimes. But not fear. Until I realized that behind every episode of anxiety, indecisiveness, procrastination, distraction is an ocean of fear.

We leave Alameda

Coronavirus was spreading, facilities were closing down, and the kids got together and decided we had better return to safety while we still could travel.
We’d planned to spend another week in Alameda, then two weeks visiting friends in California, returning home via Los Angeles. Instead, we got in the car on the morning of March 15 and headed home.
I have vivid memories of parts of that day: full-color stills and occasional moving images. The first area in the place where we stayed—packing up—cleaning up—taking out the trash. It’s raining. Now it’s drizzling. Now raining again.
We head to Trader Joe’s for supplies and the last-ditch effort to find Bobbi’s lost hearing aid. I check at the desk, then go to the Safeway across the street to see if someone might have found it in the parking lot and turned it in. No luck. We’re in no hurry to get out of town, which turns out to be a mistake. Or maybe not, since an exciting story is better than a boring one.
We drive across the bridge and up through Sacramento. We stop for gas. I remember squeezing just $8.00 into the tank because for $8.00, I get a free coffee. I drink it as we eat the Trader Joe’s lunches we bought.
We continue up into the mountains. I see a sign. There’s a police checkpoint ahead. Trucks need to show that they’ve got chains. It makes no sense to me. The temperature is about 40.
And then it’s below 40, headed toward freezing. In my mind’s eye, recalling the story, I can see snow accumulating on the side of the road. We climb higher. The snow starts getting on the roadbed itself. By the time we hit the Monta Vista service area, there’s more than an inch on the ground. We pull in and go to the toilets. There’s a line. The walls are green. I can see the bathroom that I used.
People are chaining up. They say you can’t continue without chains. I know nothing about chains. I ask the gal behind the counter what they cost. “Anywhere from $70 to $300, depending,” she says. I have no idea what chains will cost for my car. I don’t know how I’d get them on.
I just want to get out of there!
In retrospect, I’m in an utter panic. But I don’t realize that until later.
We start to drive back down the mountain. There are no motels until we go back nearly to Sacramento. I book a Best Western for $100.
Bobbi doesn’t want to turn back, and we argue. In retrospect, I’m in full flight mode, in a panic. I ask Bobbi for a Lorazepam to calm me down. It helps. I’m not only calmer but able to realize how terrified I had been.
Terrified about what? It made no real sense. It was nothing more than fear of the unknown. Or of making a mistake.
At the service area, someone had told me there was a place that sold chains further down, so I decided to check in there. When I see it from the road, I’m unable to get to the exit without doing something dangerous, so we go to the next exit, turn around and go back up.
The place is closed. Panic is rising again, but before it takes hold, I see that there’s a gas station next door that sells chains. I go in with a photo of my tire that shows the number. The guy looks in a book to figure out what size chains I need and walks me through the process. He’s about to ring up the sale when he tells me that I probably want an ant spreader. I have no idea what it is, but what the fuck. He knows, and I don’t, so I buy the chains and the spreader. We head back up the mountain.
The Lorazepam keeps me calm, and as we drive, I’m able to see my panic as paradigmatic. This is not an isolated incident. It’s what always happens when I’m faced with consequential unknowns. I freeze. Panic. I realize that this happens a lot.
It takes me more than a month to realize that when that feeling starts to rise, I need to call for help.
But right now, I’m not panicked. I’ve got a plan, and I’m going to follow it.
This time we stop at the Gold Run rest area. There’s plenty of snow on the ground, and the guy had said putting chains on is not that hard. As I’m scratching my head, a guy with an off-roader pulls in, and I ask him if he can help me.
He says he’s done this millions of times, explains the theory and practice, and teaches me how to do it as I kneel and lie on plastic bags trying to avoid getting soaked. I get both of the chains partly on, but I can’t close the deal, so I decide to see if someone at the gas station up the road can put them on for me. This turns out to be a wise decision.
We pack up and drive to the gas station, and I wander around before finding a woman and her son in rain gear who teach me two important things that the guy at the rest area didn’t. First: the chains on my car go on the front wheels, not the rear ones. And second, putting chains on a vehicle is easy if you know this one weird trick: you find someone who knows what the fuck they are doing, and you pay them $30.00 to put them on.
Now the chains are on, and we’re ready to go. We get to the top of the ramp to the Interstate and see a long line of cars standing on the ramp. A guy walking up the ramp says his car’s been there for an hour. I ask Google for an alternate route, and it tells me to drive across the road where some other cars are going. We join the convoy.
It’s getting dark now. There are a dozen cars heading overland on unplowed roads filling with snow. There’s a cop car up front, and he’s keeping everyone’s speed moderated. We twist, and we turn and climb and coast and finally get to another highway entrance, past the jam. I ask the state trooper if we can go on, and he says we’re good, and so we go.
It’s dark. Snow is falling and blowing. I’m doing about twenty. Signs say keep it under 35, which is a joke because the few times my speed approaches 35, I feel as though the wheels are going to vibrate off.
We stop at another gas station some miles up the road, and I have to wait in line for twenty minutes to get in and pee. Snow is piling up, and a couple of times, I’m afraid I’ll get stuck getting back on the highway, but I make it.
Driving the highway would be terrifying if I were able to feel any emotion. My hands are frozen to the wheel, my eyes frozen on the road ahead. I can’t see the roadway. But both sides of the road are marked by 8-foot poles meant to stick out of snowdrifts, and as long as I’m between the poles on the left and the poles on the right, I’ve got to be on the road, right? Somewhere on the road.
Traffic moves slowly for the most part, but from time to time, someone with a car built for those conditions will go flying by at forty or more. I keep a steady pace.
All I see in memory is a blend of real scenes of the day. White road. Red taillights. Black poles. And me gripping the wheel.
We go on and on and on and on and on. It’s all the same. And finally, my GPS tells me we’ve made it to Truckee, a small town just beyond the Lake Tahoe region.
We get off the highway and drive into town to get gas and find a place to sleep. But it’s Friday night and snowing, and people who like this kind of shit have taken all the moderately priced rooms. We can get a place for $350, or we can press on toward Reno, another hour driving practically blind. Why not? We head for Reno.
So back on the road. But first, some adventures going around the Donner Pass Road and Truckee Way and into a rec center, and around a traffic circle and finally onto the Interstate once again.
More miles in the dark. And we finally make it to Reno. We’re in a Best Western attached to a casino. We circle the building, find a place to unload, then a place to park, and we’re in for the night.

TL;DR

I learned something about myself that I was peripherally aware of but avoided facing. And after I finally faced it—for a moment on that day—it took six weeks, until today, for me to decide that I needed to face it and face it and face it until it no longer governed my behavior.
So I wrote this for you, Future Me. And for anyone else it happens to help.