The AI values alignment problem is not new. We've had AIs for millennia. We just have not called them AIs. We've called them tribes. And nations. And corporations. And society.
But they all fit the definitions of artificial intelligence--and artificial superintelligence. They are artificial, more intelligent than individual humans, and self-improving. And they often have values different from those of their creators or component humans. So we've had AI values alignment problems for as long as we've had these sorts of AIs.
AI is entering a new phase. Machine Intelligence is rapidly becoming the dominant part of Artificial Intelligence. And the evolution of AI is accelerating as never before. That may make the values alignment problem harder, but not necessarily different.
Looking at human history as a long series of conflicts among competing AIs and a constant battle for values alignment may provide some insight and suggest some new solutions. I hope so. Otherwise I've spent a lot of time thinking about this for nothing.
The longer argument (which I may expand into a series of posts amplifying some of these points):
- AIs already exist. They consist of groups of humans, connected by technology. Governments, corporations, political parties are examples of such AIs. They are intelligent and artificial.
- Many of today's AIs are more intelligent (as measured by range and speed of problem-solving abilities and creativity--or even IQ tests) than almost all humans. They qualify as artificial superintelligences.
- Although these AIs are created, and partially controlled, by humans, they have their own objectives and act semi-autonomously:
- Some actions are under direct control (or close supervision) of humans of varying levels of (natural) intelligence.
- Some actions are controlled by automated systems of varying levels of artificial intelligence with varying levels of supervision.
- Over time, AIs have become increasingly autonomous. That is: fewer of their actions are carried out under direct, thoughtful human control and more are carried out by automatic systems, including humans who are mindlessly following procedures.
- AIs are not monolithic. They are composed of multiple autonomous and semi-autonomous intelligences--some of which are humans, some of which are identifiably separate AIs, and some of which are shifting coalitions of intelligences.
- When AIs are created, they are given explicit objectives by their creators. AIs are able to refine their objectives and create sub-objectives. In some cases, they modify their objectives so completely that their current objectives are opposed to their original objectives.
- AIs also have implicit objectives. Implicit objectives include:
- Increasing resources and power
- Self-modification for greater efficiency
- Optimal assignment of resources to objectives
- Avoiding destructive conflicts
- Adapting to a changing environment (including other AIs)
- Controlling other AIs and avoiding the control of other AIs
- AIs must decide what resources (including subordinate AIs) to apply to each objective.
- An AI can survive even if it devotes no resources to its explicit objectives (though it probably has to devote resources to appearing to move toward those objectives). It cannot survive if it applies no resources to its implicit objectives.
- AIs self-improve today by acquiring computer and communication systems, connecting their human intelligence to those systems, and through those systems connecting them to other intelligences, natural and artificial.
- Some AIs are developing computer systems that can replace all human components. They will do this to the extent that doing so forwards their objectives, without necessary regard for human well-being.
- Most AIs are under constant attack:
- From AIs competing for resources
- From AIs seeking to control, or even absorb them
- From AIs seeking to escape control
- The societies of nation-states, and human civilization as a whole, are AIs attempting to survive while under constant attack. Human history, viewed through this lens, is a long struggle for AI values alignment.
- This view may suggest new ways to look at the alignment problems created by the rapid increase in Machine Intelligence.
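The survival point above--that an AI can neglect its explicit objectives but not its implicit ones--can be sketched as a toy simulation. Everything in this sketch is an illustrative assumption of mine (the `survives` function, the budget split, and all the numbers are invented for the example, not drawn from the argument): an agent that spends its whole budget on explicit work is drained by attacks and upkeep, while one that spends everything on acquisition and defense persists.

```python
import random

def survives(explicit_share, steps=100, seed=0):
    """Toy model of one AI's resource budget.

    explicit_share: fraction of effort spent on explicit objectives;
    the rest goes to implicit ones (acquiring resources, absorbing
    attacks). All constants are illustrative assumptions.
    """
    rng = random.Random(seed)
    resources = 10.0
    for _ in range(steps):
        implicit_share = 1.0 - explicit_share
        resources += 2.0 * implicit_share    # implicit work grows the base
        resources -= rng.uniform(0.5, 1.5)   # constant attacks and upkeep
        resources -= 0.5 * explicit_share    # explicit work only costs
        if resources <= 0:
            return False                     # the AI is absorbed or dies
    return True

print(survives(0.0))  # → True: all-implicit agent persists
print(survives(1.0))  # → False: all-explicit agent is drained
```

The asymmetry is built in, which is the point: in this framing, implicit objectives are the only ones that feed back into survival, so any AI still around has, by selection, been spending on them.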