Jan 5, 2018

Epistemic Learned Helplessness

Today my random walk around the web reminded me of a post by Scott Alexander of SlateStarCodex, from before he was Scott Alexander, even before he was Yvain on LessWrong, back when he was Squid314 on LiveJournal.

Update: it's gone! Or at least inaccessible. But the Wayback Machine has it here, and I've linked to my recovery process here.

The post is called "Epistemic learned helplessness," a title that was memorable until I forgot it. Sic transit gloria memoriae: thus passes the glory of memory. (Which took me a few minutes on Google Translate to render. Sic transit hora mea: thus passes my hour.)

He starts the post:

A friend in business recently complained about his hiring pool, saying that he couldn't find people with the basic skill of believing arguments. That is, if you have a valid argument for something, then you should accept the conclusion. Even if the conclusion is unpopular, or inconvenient, or you don't like it. He told me a good portion of the point of CfAR was to either find or create people who would believe something after it had been proven to them. 
And I nodded my head, because it sounded reasonable enough, and it wasn't until a few hours later that I thought about it again and went "Wait, no, that would be the worst idea ever."
Why? Because someone who knows a field thoroughly enough can make an argument that is likely to make total sense to any non-expert. To figure out what to believe takes both a lot of knowledge and a great deal of metaknowledge.



And unfortunately, most people don't realize that confirmation bias is a feature, not a bug (see "Confirmation Bias, a Feature, Not a Bug"), and think that because an argument makes sense, it must be true.

I've been reading a bunch of stuff about Climate Change. (Of course, climate changes. Duh!) And the assertions made by both the "It's a problem of disastrous proportions!!!!" camp and the "It's a hoax of incredible proportions!!!" camp are flawed.

Here's what I've concluded, after exhaustive and exhausting research. There are some things that we know with high confidence--like the absorption spectra of atmospheric gases; there are some things that we know with less confidence--like the instrumental records of temperature and sea level (because they must be adjusted in complex, and not always agreed-upon, ways to account for known sources of inaccuracy, and because of confounding effects); and there are some things we know with almost zero confidence--like the economic models that predict the impacts of climate change, and the climate models on which those economic models are based.

Why do we (or rather, why do I) have no confidence in these models? Because a model's ability to hindcast (predict the past) provides no confidence in its ability to forecast the future; because economic models that hindcast correctly have failed, year after year, to forecast correctly; and because the forecasts of past climate models have yet to be tested against the future they predicted. We should test the models.
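
One way to see why a good hindcast proves so little: a model with enough free parameters can match the past exactly and still be wildly wrong about the future. Here's a toy illustration in Python--the "temperature" numbers are invented for the example, and the model is just a polynomial, not anything a climate scientist would actually use:

# Toy illustration: a perfect hindcast with a useless forecast.
# The "temperature" anomalies below are invented, not real data.
import numpy as np

years = np.arange(2000, 2010)  # training period
temps = np.array([0.28, 0.31, 0.45, 0.43, 0.40,
                  0.54, 0.50, 0.48, 0.42, 0.51])

# A 9th-degree polynomial has as many parameters as there are data
# points, so it "hindcasts" the training years essentially perfectly.
# (numpy may warn that the fit is poorly conditioned--itself a hint
# that the model is overfitting.)
coeffs = np.polyfit(years - 2000, temps, deg=9)
hindcast = np.polyval(coeffs, years - 2000)
print("max hindcast error:", np.max(np.abs(hindcast - temps)))

# ...but its forecast for the very next year is nonsense.
print("forecast for 2010:", np.polyval(coeffs, 10.0))

Matching the past is cheap; only out-of-sample prediction tells you whether a model has captured anything real.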
Climate scientists could take an easy and potentially powerful step toward building public confidence: re-run the climate models from the first three IPCC reports against actual data (data that was still in those models' future) and see how well they predicted global temperatures.
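
To make the proposed test concrete, here's a minimal sketch of the bookkeeping it involves, again in Python. The numbers are placeholders, not actual IPCC projections or observed records; the point is that once a model's archived forecast and the observed record are in hand, scoring the comparison is mechanically simple:

# Sketch: score an archived model forecast against later observations.
# All numbers below are invented placeholders, not real IPCC
# projections or real temperature records.
import math

def forecast_skill(predicted, observed):
    """Return (mean bias, RMSE) of predicted vs. observed anomalies."""
    if len(predicted) != len(observed):
        raise ValueError("series must cover the same years")
    errors = [p - o for p, o in zip(predicted, observed)]
    bias = sum(errors) / len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return bias, rmse

# Hypothetical anomalies (degrees C) for five years that were still in
# the model's future when its forecast was published.
predicted = [0.30, 0.33, 0.36, 0.39, 0.42]
observed = [0.32, 0.25, 0.46, 0.61, 0.38]

bias, rmse = forecast_skill(predicted, observed)
print(f"mean bias: {bias:+.2f} C, RMSE: {rmse:.2f} C")

The hard part, of course, isn't the arithmetic; it's agreeing on which archived forecast and which observational record count as fair inputs, which is exactly the adjustment problem mentioned above.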

I don't think I have any ability to know when an argument is correct, but it's easier to see where one is wrong.

Scott's post is worth reading.


