The AIs aren’t coming. They're already here. They've been among us longer than we've had the term “Artificial Intelligence” to describe them. They’re not just artificial intelligences, they are artificial superintelligences, smarter than any human being.
What are they? They are governments, militaries, corporations and many other kinds of organizations. All artificial. All intelligent. Many superintelligent. The fact that some of these ASIs have human beings as component parts is a design detail. The ratio of non-human to human components is growing, and as Machine Intelligence extends and replaces human intelligence the ratio will continue to grow. The components of these AIs are connected through sensor, storage, and communication systems that are artificial extensions to human sense organs, memories, and nervous systems. These Artificial Superintelligences arrived as a product of evolution. They continue to evolve, ever more rapidly and ever growing in intelligence.
In the rest of this article, I’ll address these questions: Is it reasonable to talk about human organizations as though they were entities in their own right? I believe the answer is yes. Do they demonstrate behavior that is different from the intelligence of the people in the organization? Also, yes. Can an organization’s intelligence be greater than the intelligence of its human components? Yes. Is an organization’s intelligence natural, or is it artificial? Artificial.
It’s certainly true that an organization is more than and different from just a collection of the people who comprise it. We know that people in a group act differently than they would on their own. “Groupthink” is a real, well-studied phenomenon. It’s a pejorative term for the collective thinking process of a group, but not all collective thinking is inferior to individual thinking. Human societies are composed of human individuals--some living and some dead--who have contributed to the ideas commonly held within that society. So all culture is a kind of groupthink.
The common set of ideas and ways of dealing with ideas within a group depends on beliefs that the group comes to share and on the way the group is organized--in turn based on ideas held within the group. Research tells us that for some groups, some individuals, and some topics, an individual not only behaves differently in a group setting than outside it, but perceives and thinks differently as well.
A group’s thinking might converge on the thinking of a particular member of the group--a charismatic group leader, for example. But not always. When a group’s members share a common belief that they’ll arrive at better ideas by considering different ideas brought to the group and discussing them to reconcile differences and synthesize new ideas, the group’s thinking might converge on a compromise or on a new idea different from the initial ideas of any group member. In some groups, feedback effects and common beliefs cause members to develop ideas (and enact behavior) more extreme than the initial ideas of any group member. Some mobs behave that way. Extremist organizations reward members who become the most extreme.
Organizations are groups of people that have been formed to solve particular problems. They are initially organized based on patterns adopted from other organizations trying to solve similar problems. Over time, they change their organizational structure.
Organizations have their own identity, despite changes in membership. Like people, organizations change, but once an organization is formed, every original member can leave, be replaced by a new member, and the organization will be deemed the same--not just in name, but in character. Law in the United States and other countries recognizes corporate organizations as legal persons, and in many ways this reflects that reality.
Every organization has many problems to solve and the problems implicitly include the problems of ensuring the organization's survival. A government may exist to solve problems that arise in the territories it governs, but it must also solve the problems of its own survival. A corporation may exist to solve the problems of manufacturing a kind of goods or delivering a kind of service, but it must also solve the problems of its own survival. A political party may exist to solve the problems of electing its members to office or forwarding its philosophy, but it must also solve the problems of its own survival.
Intelligence is the ability to apply knowledge and skills and to solve problems. General intelligence--the kind that humans have--is the ability to apply a broad range of knowledge and a wide variety of skills to a breadth of problems. Intelligence has evolved as a survival strategy in some evolutionary niches.
Using intelligence takes effort and energy, and in some niches--like the one occupied by amoebae--it is of little value. If a more intelligent amoeba evolved, it might have nothing useful to do with the apparatus required for its intelligence and would be at a disadvantage when competing with dumber, more efficient amoebae.
But once entities in a niche begin to compete on the basis of intelligence--and especially when they compete with one another--an intelligence arms race can break out. Within those niches, the more intelligent entity is more likely to survive.
Our economic, social, and political landscape is full of niches where organizations can form, grow and evolve. Organizations evolve in whatever way makes them best suited to their niche and within some niches, evolutionary pressures have favored organizations that evolve intelligence. Trivially they can do this by attracting more intelligent individuals as members or employees. Once organizations begin to compete on the basis of intelligence, evolutionary pressure will cause them to find ways to drive up their level of intelligence--even beyond the level of their members. How intelligent can an organization get? We haven’t yet seen the limit.
Consider IQ tests. They are a measure of general intelligence among humans and are correlated with many other measures of human survival and well-being. We might estimate the IQ of an organization by giving it an IQ test. If scoring well were important to an organization, it would likely adapt in ways that increase its score. It would quickly adapt so that the organization could score higher than the people with the highest IQ within the group, even if the members of the organization couldn’t agree on who those people were. If people agreed to answer questions only when they were certain they knew the answer, then only the more intelligent in the group would try to answer the harder questions, and even if some got the answer wrong, most would get it right. Also, in many cases it's easier to determine whether an answer is correct once it’s been chosen, and even easier when the reason for the choice has been explained. So people of lesser intelligence could confirm the correct answer of a more intelligent individual. If you gave an organization a series of such tests and it was motivated to improve, you'd see its IQ score go up rapidly as the group self-organized for better performance.
For most IQ tests, speed matters. Over time the group might determine which individuals were better at answering certain kinds of questions and develop a system that would quickly distribute a test’s questions to those individuals best able to answer them. The system would quickly collect, consolidate, and verify provisional answers to maximize score and minimize time. If testing were repeated, this sort of iterative self-improvement would increase both speed and accuracy.
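The routing strategy above can be sketched as a toy simulation. Everything here is hypothetical--the member names, their per-topic accuracies, and the question mix are invented for illustration--but it shows the basic point: a group that routes each question to the member best at that kind of question can outscore its best individual member.

```python
import random

# Toy simulation of the group test-taking strategy described above.
# All member names and accuracies are hypothetical, chosen for illustration.

NUM_QUESTIONS = 200

# Each member's probability of answering a question of a given topic correctly.
MEMBERS = {
    "verbal_expert": {"verbal": 0.95, "math": 0.60},
    "math_expert":   {"verbal": 0.60, "math": 0.95},
    "generalist":    {"verbal": 0.75, "math": 0.75},
}

def topic_of(i):
    """Alternate topics so the test is half verbal, half math."""
    return "verbal" if i % 2 == 0 else "math"

def individual_score(member):
    """Fraction correct when one member answers every question alone."""
    correct = sum(
        random.random() < MEMBERS[member][topic_of(i)]
        for i in range(NUM_QUESTIONS)
    )
    return correct / NUM_QUESTIONS

def group_score():
    """Fraction correct when each question is routed to the member
    best at its topic -- the self-organization described in the text."""
    correct = 0
    for i in range(NUM_QUESTIONS):
        topic = topic_of(i)
        best = max(MEMBERS, key=lambda m: MEMBERS[m][topic])
        if random.random() < MEMBERS[best][topic]:
            correct += 1
    return correct / NUM_QUESTIONS

if __name__ == "__main__":
    random.seed(42)
    for m in MEMBERS:
        print(f"{m}: {individual_score(m):.2f}")
    print(f"group with routing: {group_score():.2f}")
```

With these numbers, each individual's expected score is at most about 0.78 (half the questions fall outside any specialist's strength), while the routed group's expected score is about 0.95. Adding the confidence-gating and answer-verification steps described above would push the group score higher still.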
There’s little doubt that if Google or the United States Army were tested this way, they would have an IQ score well above that of any individual in the group. Since they are collectively more intelligent than I am, they'd probably develop other ways to improve their collective score. Even FOX News would have a higher score than its highest-scoring member, but likely below the score of Google--which sets out to hire the highest-IQ people it can find--or the Army, which probably has a lot of very high-IQ people if only by virtue of its size. If organizations competed to get top grades on IQ tests, companies like Google that are engaged in Machine Intelligence research would likely drive their tested IQs higher, and faster, by amplifying the intelligence of their employees with Machine Intelligence and better connections.
And ignoring IQ testing, consider this: if we define intelligence as the ability to apply knowledge and skills and to solve problems, it’s certain that organizations have more knowledge and more skill than any individual within the organization. Organizations are formed to solve problems beyond the ability of an individual to solve. Organizations of humans are clearly more intelligent than individual humans.
Given that organizations are intelligent entities, are they examples of natural intelligence or artificial intelligence? As always, it depends on how you define the terms. Everything that exists is either natural or derives from nature. So on that definition, everything is natural. Even computers are natural byproducts of human evolution. You might define natural as existing in its original state, or more restrictively as untouched by humans. Rocks, trees, and birds are natural by either definition. A bird’s nest is an artificial place for its young to develop by the less restrictive definition, or natural by the more restrictive. A beaver dam is an artificial device created by a beaver to aid its survival by the less restrictive definition, or natural by the more restrictive. But a human organization is an artificial assembly of buildings, spaces, communication networks, human beings, and now machine intelligence--artificial by any definition. An organization is an artificial entity and its intelligence is artificial intelligence.
Humans create organizations to solve certain kinds of problems, but once they exist, organizations can take on a life of their own. They redesign themselves to better solve the problems they are given and may adapt to solve different problems. Every problem has related problems, among them: setting moral and ethical boundaries that constrain the way the organization can act to solve its primary problems. In some cases, an organization may devalue or ignore these moral and ethical problems. It might solve its primary problems in ways that no independent human part of the organization would choose--immorally and unethically.
Corporate organizations create systems with incentives that encourage individuals to do things they would not choose to do on their own. Who at Volkswagen would have decided to lie about their cars' emissions without the organizational incentives that encouraged lying? The engineers, acting independently, would have had no incentive. Managers would not have directed engineers to commit fraud.
This happens often. Collectively, Volkswagen and other organizations act in ways that are likely to ultimately cause harm to the organization and the people who comprise it. The behavior at Abu Ghraib was the unintended consequence of an organizational structure that provided the wrong incentives, not simply human maliciousness.
Artificial Super Intelligences are among us. They are competing to become ever more intelligent. Google, Amazon, and Facebook are ASIs competing in partially overlapping niches. They are component parts of still larger organizations, and thus larger artificial intelligences: the Silicon Valley ecosystem is racing toward greater collective intelligence.
The United States needs to become more intelligent in order to compete with China. The United States--and China--could increase their intelligence by reducing conflict among their parts. It’s not that conflict is bad. Look inside our human minds and we’ll find plenty of subordinate intelligent modules in conflict with one another. And there’s plenty of conflict within Google and Facebook and Amazon. It’s a question of finding the right amount of conflict and the right amount of cooperation. When the conflict in a human exceeds the bounds of what’s reasonable, we diagnose a mental illness. The same could be said of corporations or political entities.
So what does this argument tell us about the coming danger of AIs? I don’t know. I find it both comforting and frightening. It’s comforting to realize that we’ve been living alongside AIs all these years--and that we humans have prospered as have the AIs that we’ve lived with. We’ve got a reasonable history of success.
It’s frightening to realize the enormous powers that artificially intelligent entities already have and to consider the rate at which both their intelligence and power are growing. Corporations, for example, are designed to care more about profit than people. To the extent that they care about people--and they do, both employees and customers--it’s as a means to their end, which is their own survival. Profit is just a means to that end.
Similarly, governments care about people as a means to an end--which is their own survival. These AIs need us now to survive. You can’t have a corporation without some people. But if a corporation does most of its business with other corporations, what people are really necessary?
People are becoming a smaller part of these artificial intelligences. We’re still necessary for some high-level tasks--like strategy planning--and for some low-level tasks, like burger flipping. But what happens when those AIs, far more powerful than we are, decide they can replace those humans, too?
It’s a question worth considering now.
And as these intelligences gain power, consider this: what will happen when a sufficiently powerful AI goes insane?