Mar 12, 2016

Computers are starting to learn, and they're going to learn our jobs


The breakthroughs are coming faster and faster. Last night (as I started this post) Google's go-playing computer system, called AlphaGo, beat one of the world's best go players, Lee Sedol of Korea. Tonight (as I continue it) AlphaGo has won the second match in a five-game series. This morning (as I work on it) AlphaGo has won the third.

This is a big, big deal because AlphaGo wasn't programmed to win at go. It was programmed to learn how to win at go.

It's a huge step toward the day when computers will be able to take over a vast number of jobs that right now only humans can do. Those jobs are not coming back. And we're not going to get good new ones in their place.

When that day comes it will be great, and lousy. Great because we'll all be able to live in unheard-of abundance. Lousy because there will be no paying jobs for some people. And despite the fact that there will be abundance enough for all, some people will argue -- on moral grounds -- that people with no jobs deserve nothing; and then they will act -- on pragmatic grounds -- to give those people just enough to keep them from breaking out the well-deserved torches and pitchforks.

Yeah, I know. People have been talking about automation throwing people out of work since -- well, since automation first started throwing people out of work. And the people who have said that have always been wrong. And I've told them that. New jobs, even better jobs, have always been created. Read about the Luddite fallacy. It's called a fallacy because it's false.

But this time I'm saying something different. There's an exponential growth pattern hiding in the data. Funny thing about exponential patterns: they look like innocent, slightly upward-curving lines. Then FOOM! They take off.
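
If you want to see what I mean, here's a toy illustration in Python -- made-up growth rates, nothing measured -- of how a compounding series hides behind a linear-looking start and then takes off:

    # Toy numbers only: steady linear growth versus compounding growth.
    linear = [2.0 * year for year in range(1, 21)]
    compounding = [1.5 ** year for year in range(1, 21)]
    for year, (lin, comp) in enumerate(zip(linear, compounding), start=1):
        print(f"year {year:2d}: linear {lin:6.1f}   compounding {comp:8.1f}")
    # For the first few years the two columns look comparable.
    # By year 20 the compounding column is over 3,000. FOOM.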

Two things will be different. First, old jobs will vanish far, far faster than before. In the past, new jobs could be created nearly as fast as old jobs were destroyed. In the past.

Second, most people won't be able to do the new, good jobs; they will require more IQ than most people have.

Let me break it down.

Old jobs are destroyed at the rate at which they can be automated. If you dramatically increase that rate, then old jobs will disappear faster. The rate has been increasing, and will continue to increase. AlphaGo and its ilk are part of the reason I expect that increase to accelerate.

AlphaGo is a computer system that's programmed to learn how to do things. It's not an artificial intelligence. We're far away from that. It's a machine learning system. At the highest level, machine learning systems have to be programmed to learn. Below that level, they learn. How? By guided learning. By seeing examples. By experimenting. By trial and error.
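
AlphaGo's real training pipeline is far beyond anything I could sketch here, but the trial-and-error idea itself fits in a few lines of Python. Here's a toy learner of my own -- nothing from DeepMind -- that tries two moves, keeps score, and drifts toward whichever one wins more often:

    import random

    # Two possible moves, one secretly better. The learner finds out which
    # by experimenting and keeping a running average of the results.
    values = {"move_a": 0.0, "move_b": 0.0}   # estimated value of each move
    counts = {"move_a": 0, "move_b": 0}

    def result(move):
        # The "environment": move_b wins 70% of the time, move_a only 40%.
        return 1 if random.random() < (0.7 if move == "move_b" else 0.4) else 0

    for trial in range(1000):
        # Mostly pick the move that looks best so far, sometimes explore.
        if random.random() < 0.1:
            move = random.choice(list(values))
        else:
            move = max(values, key=values.get)
        counts[move] += 1
        values[move] += (result(move) - values[move]) / counts[move]

    print(values)   # move_b's estimated value ends up clearly higher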

The learning algorithms are written by top programmers. At the peak of the machine learning food chain, where Google's AlphaGo and DeepMind feed, the programmers are PhDs from top universities and their less decorated peers. Once upon a time I was somewhere in the top 1% by IQ and SAT score. Maybe in the top 0.1%. I was a solid programmer in my day. But I don't think that even in my prime I would have been in that league.

So there's a new kind of job -- programming systems like AlphaGo -- that I would not have been smart enough to do. And if I wouldn't have been smart enough, then most of the people in the world wouldn't have been, either.

Now consider the jobs that the rest of us can do, and here comes the problem. Once a "learning computer" is capable of learning how to do a certain class of jobs, then automating that job out of existence is easy. You just need enough computers and enough training resources available.

Enough computer resources? Plenty of those. Growing at an incredible rate. And enough training resources? That's us.

Take driving a car. Every time a Tesla customer takes their Model S out for a spin, whether or not they've engaged Tesla's "autopilot," they're teaching Tesla's learning computer system how to drive better. In a particular driving situation, the Tesla computer might decide not to change lanes. But if, in similar situations, most drivers do change lanes, Tesla's learning computer is going to modify its decision making so that, in similar situations, it will decide to change lanes.
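
Tesla hasn't published how its fleet learning actually works, so the mechanics below are pure guesswork on my part, but the flavor is easy to sketch: log what human drivers did in a recognizable situation, and lean the car's own decision toward the majority.

    from collections import Counter

    # Guesswork, not Tesla's system: tally what drivers did in a situation,
    # and let the car's policy drift toward whatever most of them chose.
    fleet_log = {"slow_truck_ahead_left_lane_clear": Counter()}

    def record_human_decision(situation, action):
        fleet_log[situation][action] += 1

    def fleet_policy(situation):
        return fleet_log[situation].most_common(1)[0][0]

    for _ in range(900):
        record_human_decision("slow_truck_ahead_left_lane_clear", "change_lanes")
    for _ in range(100):
        record_human_decision("slow_truck_ahead_left_lane_clear", "stay_in_lane")

    print(fleet_policy("slow_truck_ahead_left_lane_clear"))   # -> change_lanes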

Or understanding human speech. Every time we use Apple's Siri, or Google Now, or Amazon's Alexa, we're teaching them to understand human speech better. The way we organize our online photo collections helps teach them how to recognize features in images. And so on.

As we use online services to make our lives better, we're teaching computers what they need to know so that they can do jobs that only people can do today.

Once we've got cars that can really drive themselves, what happens to truck drivers? To taxi drivers? To everyone else who drives for a living? What happens to the people who build cars when anyone can call for a car (not an Uber car with an expensive driver attached, just a car) and have it take them where they want to go? You don't need that second family car that you use only occasionally. And you don't even need the first one, either.

And what happens when companies that want a computer to do something don't have to hire high-priced programmers to program it? What happens when they can show a learning system what they want done, and the system figures out what the program has to be?

What happens when factory robots don't have to be programmed -- at vast expense -- but can just be shown what to do? Or given some general guidelines and the time to figure it out for themselves -- in parallel. Here's a bunch of Google's robotic arms solving problems of "hand-eye coordination" by teaching one another.

What's it cost to buy a robotic arm that a computer could learn to control? Here, and here are recent Kickstarter projects for light-duty robotic arms. Here is another one: for $350.00 you can get a six-axis robot arm that can move, pick things up, and see -- and you don't have to program it. You can guide its movement, or move your hand and show it what you want it to do. Good-bye, simple manual jobs.
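
I don't know the internals of any of these arms, but "show it what you want" usually comes down to something like record-and-replay: sample the joint positions while a person guides the arm, then play the motion back. A minimal sketch in Python -- the read_joints and move_to functions are hypothetical stand-ins for whatever SDK a real arm ships with:

    import time

    recorded_path = []

    def record_demonstration(read_joints, duration_s=5.0, hz=20):
        # Sample the arm's joint angles while a human guides it by hand.
        end = time.time() + duration_s
        while time.time() < end:
            recorded_path.append(read_joints())
            time.sleep(1.0 / hz)

    def replay(move_to, hz=20):
        # Play the recorded motion back on the arm, at the same pace.
        for joint_angles in recorded_path:
            move_to(joint_angles)
            time.sleep(1.0 / hz)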

That's today. Robot arms keep getting cheaper and better and smarter as their underlying components keep getting cheaper and better and smarter. It's relatively easy to scale them up to carry more weight. And thanks to robot-information-interchange projects like RoboEarth, once one robot arm can do something, every suitably capable robot arm can do that same thing.
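
RoboEarth's real design is more elaborate than this (a shared cloud database of things like maps, object models, and action recipes), but the cartoon version of the idea fits in a few lines: one robot uploads a task it has learned, and every other capable robot can download it.

    # A cartoon of shared robot knowledge -- just the idea, not RoboEarth's API.
    shared_skills = {}                      # stands in for the cloud repository

    def upload(task_name, action_recipe):
        shared_skills[task_name] = action_recipe

    def download(task_name):
        return shared_skills.get(task_name)

    # One arm learns a task once...
    upload("pick_up_mug", ["locate mug", "open gripper", "move above mug",
                           "lower", "close gripper", "lift"])
    # ...and every connected, capable arm can fetch and perform it.
    print(download("pick_up_mug"))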

Human learning capacity is limited. A human worker might learn how to do fifty or a hundred or even a thousand discrete tasks, and might generalize that knowledge to figure out a bunch more. But any robot connected to the Internet can do as many tasks as -- how big is the Internet, anyway? Yeah, that many tasks.

Will there be good jobs in the future? You bet, but mainly for people who are at the top of any talent category. Being at the top of the smartness category is just a very important special case. People who have the right intellectual skills but are not at the very top will have jobs for a while, as they use the components created by their even smarter brethren to build systems that put other people out of work.

People have been putting other people out of work for a long time. It's what we call progress. Remember the Luddite Fallacy? Why is this any different?

In this post, Scott Alexander offers a useful metaphor: "an employment waterline, gradually rising through higher and higher levels of competence." If your competence is above the waterline, you survive. If you go below then you drown -- unless you are supported by others. The water is rising. And fast.

The Flynn Effect tells us that people actually are getting smarter. But they are getting smarter slowly, and the water is rising fast.

In our hunter-gatherer past, almost everyone was above the waterline.

As we moved to an agricultural society, a very few couldn't learn the rules of farming, and they fell below the slowly rising (we're talking thousands of years) waterline. They were few enough that, if there was enough food to go around, other people were willing to help them survive.

As agricultural societies became industrialized, the waterline continued to rise above a few more. And now we're talking changes taking hundreds of years, not thousands.

Now the West is considered post-industrial. We're a knowledge and services economy. You need increasing amounts of training to make your way. And I'm not talking about a college degree. The amount of stuff you have to know to be a competent plumber or welder today is growing. Yes, thank goodness, we've got the Internet to help. But the water is rising.

I see many bad paths ahead for most of us. Not for my kids and my grandkids, thank goodness. All of them are wicked smart, and it will be a relatively long time before the water rises high enough to threaten them. But I do see bad paths ahead for lots of perfectly good human beings.

I see a couple of good paths. One is to change our ideas about how the economy works: abandon the old ideas rooted in scarcity and adapt to a world of abundance.

Another might be even better, but while it sounds incredibly exciting it's also incredibly scary. A new technology called CRISPR offers a low-cost and highly reliable way to edit the human genome in favor of intelligence. Stephen Hsu, a brilliant physicist and writer, explains how an IQ of 1000 might become the new normal in his essay "Superintelligent Humans Are Coming."

Oh brave new world, that has such creatures in it.
