Oct 9, 2016

Sam Harris, on the AI threat

As promised (or threatened) in this post, here's my next article on Sam Harris.

Sam recently gave a TED talk on AI. It's worth watching, before or after reading this.

He argues, in a clear and non-technical way, that human-level and then superhuman-level computer intelligence is almost certain to arrive--if we don't destroy ourselves first.

Then he explains why it might well be a threat when it comes.

Finally, he makes the following important argument: people who understand the problem--Harris included--under-react to it emotionally.

Here's his nice analogy. Most people knowledgeable about AI agree that we will "eventually" produce non-human intelligences smarter than humans; we've already done it in some narrow domains. According to Wikipedia, most expert estimates of when that will happen range from ten years to over a century.

Now imagine that we got a message that says "Hello! We are intelligent creatures from another galaxy. We will be landing on Earth in large numbers in ten to one hundred years. And once we arrive we will <message garbled>"

The message is garbled because we don't know. We can't predict what other humans will choose to do, so how could we possibly predict what an equally intelligent non-human might decide?

So how does their arrival in--split the difference--fifty-five years feel to you? To me, it's an eye-blink: I've lived longer than that, and boy has it passed quickly!

To make matters worse, if we can't predict the behavior of non-humans of equal intelligence, how could we possibly predict the behavior of creatures that are smarter than we are? Creatures, moreover, that will keep getting smarter far faster than we can. Computer intelligence can evolve at the speed of technology; we can only evolve at roughly the speed of biology. It took four billion years of carbon-based evolution to get from the first carbon-based critter to human intelligence.

Starting from today's computers, it will take--at most--a couple of hundred years to catch up to our four-billion-year head start. And things don't stop there.

And then what happens? No one has any idea. Isn't that worrisome?

Our only hope to compete--if you want to call it a hope--is to make ourselves smarter: genetically engineering our children, and integrating their carbon-based brains with silicon-based computer intelligence.

But then what will they be? Still humans? I would claim that they will be a new kind of human, and the prospect is awe-inspiring.

You may find it scary awe-inspiring--awful, in the original sense of the word. Or you may think it's kind of wonderful.

Whatever the case, I think it's something to think seriously about.

And not to make light of that issue, but only to put it in perspective--this is way more important than who's on the Supreme Court.

