
Dec 7, 2019

Why do kids misbehave--and why do we misbehave, too?

Haha, it’s a trick question. Kids don’t misbehave. And neither do we adults.
They can’t. We can’t. Kids and adults can only do what’s most rewarding.
What’s wrong with doing what’s most rewarding?

We do what our systems let us do

If we’re trying to do fucked-up things, it’s because our reward systems are fucked up. If we do it in a fucked-up way, it’s because our operating systems are fucked up.
Are we misbehaving?
Is a computer misbehaving if it’s programmed wrong?
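To push the computer metaphor one notch further, here’s a minimal toy sketch in Python (entirely illustrative; the actions and reward numbers are made up): an agent that, by construction, can only pick whatever its reward table scores highest.

```python
# Toy model: an agent that can only do what's most rewarding.
# The actions and reward values are hypothetical illustrations.

def act(rewards: dict[str, float]) -> str:
    """Pick the highest-reward action. There is no other choice."""
    return max(rewards, key=rewards.get)

# A well-calibrated reward system:
print(act({"do homework": 5.0, "throw tantrum": 1.0}))  # -> do homework

# "Misbehavior" is the same machine with a miscalibrated table:
print(act({"do homework": 1.0, "throw tantrum": 5.0}))  # -> throw tantrum
```

On this model, fixing “misbehavior” means fixing the table, not the machine.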

Initial programming

Yeah, I know. You’re not a computer. Neither am I. I’ll grant that we develop some agency as we grow to adulthood.
But consider how we develop.
We’re initially programmed and conditioned by biology. Can’t argue with that, can you? And biology has fitted us out with a ton of cognitive errors and biases. We know that.
We’re later educated and conditioned by our parents and the rest of society. That goes on for years before there’s much room for personal agency.
Assuming a healthy environment, we’re taught mostly good stuff. But our parents and the environment are not perfect, and their teaching methods are not perfect. So some of what we learn is wrong—either because they taught us wrong or they tried to teach us right, but we got it wrong.

Self-programming

Eventually, we modify our own code.
Life presents us with problems that we haven’t been taught to solve, through no fault of our parents or society. Our environment can’t teach us everything. We have to use the tools we have to generate new knowledge and add to our own code. We might discover errors in what we’ve been taught and have to correct them.
We use new knowledge to modify our own code.
How do we generate that new knowledge?
We can only use the tools we have. We guess at solutions. We use the knowledge (or misunderstandings) we’ve been given and try to figure out theories from the raw data of experience. We might seek out teachers and try to learn what they might teach us.
With our flawed biology and our imperfect prior knowledge, some of what we guess will be wrong, and some of what we conclude will be wrong. Sometimes we’ll choose bad teachers who will introduce new errors.
We do the best we can.
But we’ll still have cognitive errors and data errors, including errors in our reward systems.

How do we fix the errors that remain?

We can’t fix errors unless:
  1. We’ve learned how to detect errors
  2. We’ve acquired a toolbox filled with ways to analyze and fix errors
  3. We find that detecting and fixing them is rewarding (a sketch follows this list)
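Here’s that gate as a toy Python sketch (the function, the belief list, everything here is a hypothetical stand-in, not a real system): if any one of the three conditions fails, the errors stay in place.

```python
# Illustrative stand-in for the three conditions above, not a real system.

def self_repair(beliefs: list[dict], can_detect: bool,
                has_toolbox: bool, fixing_is_rewarding: bool) -> list[dict]:
    """Errors only get fixed when all three conditions hold."""
    if not (can_detect and has_toolbox and fixing_is_rewarding):
        return beliefs  # any missing condition leaves the errors in place
    return [b for b in beliefs if not b.get("error", False)]

beliefs = [
    {"idea": "effort compounds"},
    {"idea": "I can't learn math", "error": True},
]
print(self_repair(beliefs, True, True, True))   # error pruned
print(self_repair(beliefs, True, True, False))  # error survives: fixing wasn't rewarding
```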

We do what’s rewarding, no matter what

Whether we fix our cognitive errors or not, we’ll still do what’s most rewarding.
If we don’t fix our errors, then what’s most rewarding may be fucked up.
Too bad.

Following my own rewards

Right now, I’m writing this because it’s the most rewarding thing that I can do at this moment. That’s the way my reward system happens to be wired.
I might stop writing before I post this.
And I’ve done that, lots of times, because my reward system was fucked up.
But I’ve been hard at work detecting and fixing errors in my reward system and in MyOS.
I will only stop writing if this is no longer rewarding enough for me to continue—or if something else becomes more rewarding.
Otherwise, I have no choice. I’m going to finish this and post it.
And if I don’t, I know what to do. I’m going to find and fix the cognitive errors that stand in the way of posting it.
And I will do that because finding and fixing those errors is also rewarding.
QED
Edit: see! I did.

Jul 10, 2018

Russell conjugation and Eric Weinstein

Daniel turned me on to Eric Weinstein, and I’ve been mainlining his ideas for the past dozen hours. The guy is brilliant. See note at bottom of the post.
Discursions out of the way, here’s what I came here to write about. It’s from Eric Weinstein’s answer to the Edge question for 2017, “What scientific term or concept ought to be more widely known?”
His answer is “Russell Conjugation.” What? I never heard of it, either. But I’d seen the concept in passing, and Wikipedia has heard of it too. Of course.
The example I’ve come across:
I am firm; you are obstinate; he is a pig-headed fool.
Here’s a sentence from his article with so much packed in it that I could write a long essay about every phrase. Assuming that I can get myself to write anything at all.
In an era in which anyone can publish anything, the quest to control information has largely been lost by institutions, with a race on to weaponize empathy by understanding its basis in linguistics and tweaking the social media algorithms which now present our world to us accordingly. As the theory goes, it is not that we don’t have our own opinions so much as that we have too many contradictory ones, and it is generally our emotional state alone which determines on which ones we will predicate action or inaction.
What leaps out is the idea that people are working to weaponize empathy. And that’s because we don't have our own opinions so much as that we have too many contradictory ones.
That’s because of a bug in the human mental machinery. We think that we take in facts and then evaluate them. But research tells us otherwise. There are a couple of mechanisms at work. One is demonstrated in Daniel Gilbert’s work, which I wrote about here. His research concludes that we start by believing what we read (or hear) before we analytically decide that something is wrong. His paper is “You can’t not believe everything you read.”
But if the idea gets activated and the annotation “this is a false idea” is not activated, then we will act as though we believe what we’ve decided we don’t believe.
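To make that mechanism concrete, here’s a toy data-structure reading of Gilbert’s result (my own sketch, not his model): the idea and its “this is false” annotation are stored separately, so the idea can be activated without the annotation coming along.

```python
# Toy sketch of the mechanism above (mine, not Gilbert's actual model):
# an idea and its "this is false" annotation are stored separately,
# so the idea can fire without the annotation.

from dataclasses import dataclass

@dataclass
class Belief:
    idea: str
    tagged_false: bool = False  # System 2's later annotation

def recall(belief: Belief, annotation_activates: bool) -> str:
    """If the falsity annotation fails to activate, we act on the idea anyway."""
    if belief.tagged_false and annotation_activates:
        return f"rejected: {belief.idea}"
    return f"acting on: {belief.idea}"

b = Belief("claim from a headline I skimmed", tagged_false=True)
print(recall(b, annotation_activates=True))   # rejected: claim from a headline I skimmed
print(recall(b, annotation_activates=False))  # acting on: ... (the bug)
```

The `annotation_activates` switch is the weak point; the rest of this post is about the ways it fails or gets deliberately flipped off.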
To compound the problem, we can be convinced that things that are false are true if we’re exposed to powerful enough arguments. In this post I wrote about “Epistemic Learned Helplessness,” a post in which Scott Alexander argues that we’re not necessarily right to accept even the best rational arguments, because anyone who is not an expert in an area can be convinced by someone who is. “Convinced,” in this case, is a proxy for “being overwhelmed by verifiable facts that support an argument that is, in fact, flawed, but you don’t know enough to see the flaws.”
So, taking these together: every conclusion we’re exposed to is stored in our minds as true, along with its supporting arguments, also labeled true, even if our System 2 has concluded (as a result of other arguments and facts labeled “true,” whether or not they are) that they are not true. “Weaponizing empathy” exploits yet another bug in our cognitive apparatus, one that causes us to override the System 2 evaluation (“this is wrong”) with the emotional overlay “this feels right.”
There’s more to his Edge answer, and I’ll maybe write more later. And much more to the whole idea of gaming the mental systems of others.
Also for future reference: in looking for a link to his homepage or something, I found a brilliant idea on international migration, summarized on his site, with the paper published in full here. In case I die or ADD strikes me before I write my review of the article, those are the links.
