
Futures 35 (2003) 759–764. www.elsevier.com/locate/futures

When machines outsmart humans

Nick Bostrom∗,1

Yale University, Department of Philosophy, PO Box 208306; New Haven, CT 06520, USA

The annals of artificial intelligence are littered with broken promises. Half a century after the first electronic computer, we still have nothing that even resembles an intelligent machine, if by ‘intelligent’ we mean possessing the kind of general-purpose smartness that we humans pride ourselves on. Maybe we will never manage to build real artificial intelligence. The problem could be too difficult for human brains ever to solve. Those who find the prospect of machines surpassing us in general intellectual abilities threatening may even hope that is the case.

However, neither the fact that machine intelligence would be scary nor the fact that some past predictions were wrong is a good ground for concluding that artificial intelligence will never be created. Indeed, to assume that artificial intelligence is impossible or will take thousands of years to develop seems at least as unwarranted as to make the opposite assumption. At a minimum, we must acknowledge that any scenario about what the world will be like in 2050 that simply postulates the absence of human-level artificial intelligence is making a big assumption that could well turn out to be false.

It is therefore important to consider the alternative possibility, that intelligent machines will be built within fifty years. In the past year or two, there have been several books and articles published by leading researchers in artificial intelligence and robotics that argue for precisely that projection (see, e.g., Moravec [1] and Kurzweil [2]). This essay will first outline some of the reasons for this, and then discuss some of the consequences of human-level artificial intelligence.

We can get a grasp of the issue by considering the three things that are needed for an effective artificial intelligence. These are: hardware, software, and input/output mechanisms.

The requisite I/O technology already exists. We have video cameras, speakers, robotic arms, etc. that provide a rich variety of ways for a computer to interact with its environment. So this part is trivial.

∗ Fax: +1-203-432-7950. E-mail address: [email protected] (N. Bostrom).

1 Present address: Oxford University, Faculty of Philosophy, 10 Merton Street, Oxford OX1 4JJ, UK.

0016-3287/03/$ - see front matter © 2003 Elsevier Science Ltd. All rights reserved. doi:10.1016/S0016-3287(03)00026-0

The hardware problem is more challenging. Speed rather than memory seems to be the limiting factor. We can make a guess at the computer hardware that will be needed by estimating the processing power of a human brain. We get somewhat different figures depending on what method we use and what degree of optimization we assume, but typical estimates range from 100 million MIPS to 100 billion MIPS (Bostrom [3]) (1 MIPS = 1 Million Instructions Per Second). A high-range PC today has about one thousand MIPS. The most powerful supercomputer to date performs at about 10 million MIPS. This means that we will soon be within striking distance of meeting the hardware requirements for human-level artificial intelligence. In retrospect, it is easy to see why the early artificial intelligence efforts in the sixties and seventies could not possibly have succeeded — the hardware available then was pitifully inadequate. It is no wonder that human-level intelligence was not attained using less-than-cockroach level of processing power.
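The gap described above can be made concrete with a quick back-of-the-envelope calculation. This is only a sketch using the figures quoted in the text; the MIPS estimates are the essay's, not independent measurements:

```python
import math

# Figures quoted in the text above (the essay's estimates, not measurements):
pc_mips = 1e3                 # high-range PC: about one thousand MIPS
supercomputer_mips = 1e7      # most powerful supercomputer: ~10 million MIPS
brain_low_mips = 1e8          # lower-end brain estimate: 100 million MIPS
brain_high_mips = 1e11        # upper-end brain estimate: 100 billion MIPS

# How many further doublings of speed separate today's best machine
# from each brain estimate?
doublings_to_low = math.log2(brain_low_mips / supercomputer_mips)
doublings_to_high = math.log2(brain_high_mips / supercomputer_mips)
print(round(doublings_to_low, 1))   # ~3.3 doublings to the lower estimate
print(round(doublings_to_high, 1))  # ~13.3 doublings to the upper estimate
```

Only a handful of doublings separate the best supercomputer from the lower-end estimate, which is what "within striking distance" amounts to.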

Turning our gaze forward, we can predict with a rather high degree of confidence that hardware matching that of the human brain will be available in the foreseeable future. IBM is currently working on a next-generation supercomputer, Blue Gene, which will perform over 1 billion MIPS. This computer is expected to be ready around 2005. We can extrapolate beyond this date using Moore’s Law, which describes the historical growth rate of computer speed. (Strictly speaking, Moore’s Law as originally formulated was about the density of transistors on a computer chip, but this has been closely correlated with processing power.) For the past half century, computing power has doubled every eighteen months to two years (see Fig. 1). Moore’s Law is really not a law at all, but merely an observed regularity. In principle, it could stop holding true at any time. Nevertheless, the trend it depicts has been going strong for over fifty years and it has survived several transitions in the underlying technology (from relays to vacuum tubes, to transistors, to integrated circuits, to Very Large Scale Integration (VLSI)). Chip manufacturers rely on it when they plan their forthcoming product lines. It is therefore reasonable to suppose that it may continue to hold for some time. Using a conservative doubling time of two years, Moore’s law predicts that the upper-end estimate of the human brain’s processing power will be reached before 2019. Since this represents the performance of the best supercomputer in the world, one may add a few years to account for the delay that may occur before that level of computing power becomes available for doing experimental work in artificial intelligence. The exact numbers don’t matter much here. The point is that human-level computing power has not been reached yet, but almost certainly will be attained well before 2050.
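The 2019 figure can be reproduced from the same assumptions. This is a sketch using the essay's own inputs: Blue Gene's expected 1 billion MIPS around 2005, a conservative two-year doubling time, and the upper-end brain estimate of 100 billion MIPS:

```python
import math

start_year = 2005        # Blue Gene expected to be ready around 2005
start_mips = 1e9         # "over 1 billion MIPS"
target_mips = 1e11       # upper-end estimate of the human brain
doubling_years = 2       # conservative Moore's-Law doubling time

# Solve start_mips * 2**(t / doubling_years) = target_mips for t:
years_needed = doubling_years * math.log2(target_mips / start_mips)
print(round(start_year + years_needed, 1))  # ~2018.3, i.e. before 2019
```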

This leaves the software problem. It is harder to analyze in a rigorous way how long it will take to solve that problem. (Of course, this holds equally for those who feel confident that artificial intelligence will remain unobtainable for an extremely long time — in the absence of evidence, we should not rule out either alternative.) Here we will approach the issue by outlining two approaches to creating the software, and presenting some general plausibility arguments for why they could work.

Fig. 1. The exponential growth in computing power. (From Moravec; courtesy of the World Transhumanist Association) [6].

We know that the software problem can be solved in principle. After all, humans have achieved human-level intelligence, so it is evidently possible. One way to build the requisite software is to figure out how the human brain works and copy nature’s solution.

It is only relatively recently that we have begun to understand the computational mechanisms of biological brains. Computational neuroscience is only about fifteen years old as an active research discipline. In this short time, substantial progress has been made. We are beginning to understand early sensory processing. There are reasonably good computational models of primary visual cortex, and we are working our way up to the higher stages of visual cognition. We are uncovering what the basic learning algorithms are that govern how the strengths of synapses are modified by experience. The general architecture of our neuronal networks is being mapped out as we learn more about the interconnectivity between neurons and how different cortical areas project onto one another. While we are still far from understanding higher-level thinking, we are beginning to figure out how the individual components work and how they are hooked up.

Assuming continuing rapid progress in neuroscience, we can envision learning enough about the lower-level processes and the overall architecture to begin to implement the same paradigms in computer simulations. Today, such simulations are limited to relatively small assemblies of neurons. There is a silicon retina and a silicon cochlea that do the same things as their biological counterparts. Simulating a whole brain will, of course, require enormous computing power, but as we saw, that capacity will be available within a couple of decades.

The product of this biology-inspired method will not be an explicitly coded mature artificial intelligence. (That is what the so-called classical school of artificial intelligence unsuccessfully tried to do.) Rather, it will be a system that has the same ability as a toddler to learn from experience and to be educated. The system will need to be taught in order to attain the abilities of adult humans. But there is no reason why the computational algorithms that our biological brains use would not work equally well when implemented in silicon hardware.

Another, more “science-fiction-like” approach has been suggested by some nanotechnology researchers (e.g., Merkle [4]). Molecular nanotechnology is the anticipated future ability to manufacture a wide range of macroscopic structures (including new materials, computers, and other complex gadgetry) to atomic precision. Nanotechnology will give us unprecedented control over the structure of matter. One application that has been proposed is to use nano-machines to disassemble a frozen or vitrified human brain, registering the position of every neuron and synapse and other relevant parameters. This could be viewed as the cerebral analogue to the human genome project. With a sufficiently detailed map of a particular human brain, and an understanding of how the various types of neurons behave, one could emulate the scanned brain on a computer by running a fine-grained simulation of its neural network. This method has the advantage that it would not require any insight into higher-level human cognition. It’s a purely bottom-up process.
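In miniature, the bottom-up idea can be illustrated with a toy simulation. Everything here is a deliberate simplification: the random connection matrix stands in for a scanned synapse map, and the neuron model is a crude rate unit rather than anything biophysically realistic — a sketch of the shape of the approach, not of a real emulation:

```python
import math
import random

random.seed(0)
n = 50  # number of neurons in our toy "scan"

# Stand-in for a scanned synapse map: random connection strengths.
weights = [[random.gauss(0.0, 0.1) for _ in range(n)] for _ in range(n)]
state = [random.random() for _ in range(n)]  # initial firing rates

def step(state, weights):
    """One tick of the emulation: each unit's new rate is a squashed
    weighted sum of its inputs. No higher-level cognition is modelled
    anywhere -- whatever behaviour appears emerges purely bottom-up."""
    return [math.tanh(sum(w * s for w, s in zip(row, state)))
            for row in weights]

for _ in range(20):
    state = step(state, weights)
print(len(state))  # still 50 units, with rates bounded in (-1, 1)
```

The point of the sketch is that the simulation loop needs only local rules and a connectivity map; nothing in it encodes what the network is "for".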

These are two strategies for building the software for a human-level artificial intelligence that we can envision today. There may be other ways that we have not yet thought of that will get us there faster. Although it is impossible to make rigorous predictions regarding the time-scale of these developments, it seems reasonable to take seriously the possibility that all the prerequisites for intelligent machines — hardware, input/output mechanisms, and software — will be attained within fifty years.

In thinking about the world in the mid twenty-first century, we should therefore consider the ramifications of human-level artificial intelligence. Four immediate implications are:

Artificial minds can be easily copied. An artificial intelligence is based on software, and it can therefore be copied as easily as any other computer program. Apart from hardware requirements, the marginal cost of creating an additional artificial intelligence after you have built the first one is close to zero. Artificial minds could therefore quickly come to exist in great numbers, amplifying the impact of the initial breakthrough.

Human-level artificial intelligence leads quickly to greater-than-human-level artificial intelligence. There is a temptation to stop the analysis at the point where human-level machine intelligence appears, since that by itself is quite a dramatic development. But doing so is to miss an essential point that makes artificial intelligence a truly revolutionary prospect, namely, that it can be expected to lead to the creation of machines with intellectual abilities that vastly surpass those of any human. We can predict with great confidence that this second step will follow, although the time-scale is again somewhat uncertain. If Moore’s law continues to hold in this era, the speed of artificial intelligences will double at least every two years. Within fourteen years after human-level artificial intelligence is reached, there could be machines that think more than a hundred times more rapidly than humans do. In reality, progress could be even more rapid than that, because there would likely be parallel improvements in the efficiency of the software that these machines use. The interval during which the machines and humans are roughly matched will likely be brief. Shortly thereafter, humans will be unable to compete intellectually with artificial minds.
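The arithmetic behind the fourteen-year figure is just repeated doubling, using the essay's own assumption of a doubling at least every two years:

```python
# A doubling every two years, sustained for fourteen years:
doublings = 14 // 2
speedup = 2 ** doublings
print(speedup)  # 128 -- "more than a hundred times more rapidly"
```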

Technological progress in other fields will be accelerated by the arrival of artificial intelligence. Artificial intelligence is a true general-purpose technology. It enables applications in a very wide range of other fields. In particular, scientific and technological research (as well as philosophical thinking) will be done more effectively when conducted by machines that are cleverer than humans. One can therefore expect that overall technological progress will be rapid.

Machine intelligences may devote their abilities to designing the next generation of machine intelligence. This next generation will be even smarter and might be able to design their successors in even shorter time. Some authors have speculated that this positive feedback loop will lead to a “singularity” — a point where technological progress becomes so rapid that genuine superintelligence, with abilities unfathomable to mere humans, is attained within a short time span (Vinge [5]). However, it may turn out that there are diminishing returns in artificial intelligence research when some point is reached. Maybe once the low-hanging fruits have been picked, it gets harder and harder to make further improvement. There seems to be no clear way of predicting which way it will go.

Unlike other technologies, artificial intelligences are not merely tools. They are potentially independent agents. It would be a mistake to conceptualize machine intelligence as a mere tool. Although it may be possible to build special-purpose artificial intelligence that could only think about some restricted set of problems, we are considering here a scenario in which machines with general-purpose intelligence are created. Such machines would be capable of independent initiative and of making their own plans. Such artificial intellects are perhaps more appropriately viewed as persons than machines. In economics lingo, they might come to be classified not as capital but as labor. If we can control the motivations of the artificial intellects that we design, they could come to constitute a class of highly capable “slaves” (although that term might be misleading if the machines don’t want to do anything other than serve the people who built or commissioned them). The ethical and political debates surrounding these issues will likely become intense as the prospect of artificial intelligence draws closer.

Two overarching conclusions can be drawn. The first is that there is currently no warrant for dismissing the possibility that machines with greater-than-human intelligence will be built within fifty years. On the contrary, we should recognize this as a possibility that merits serious attention. The second conclusion is that the creation of such artificial intellects will have wide-ranging consequences for almost all the social, political, economic, commercial, technological, scientific, and environmental issues that humanity will confront in this century.

Acknowledgements

I’d like to thank all those who have commented on earlier versions of this paper. The very helpful suggestions by Hal Finney, Robin Hanson, Carl Feynman, Anders Sandberg, and Peter McCluskey were especially appreciated.

References

[1] H. Moravec, Robot: Mere Machine to Transcendent Mind, Oxford University Press, New York, 1999.
[2] R. Kurzweil, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, Viking, New York, 1999.
[3] N. Bostrom, How Long Before Superintelligence?, International Journal of Futures Studies 2 (1998). Preprint at www.nickbostrom.com/superintelligence.html.
[4] R. Merkle, The Molecular Repair of the Brain, Cryonics 15 (1 & 2) (1994).
[5] V. Vinge, The Coming Technological Singularity, Whole Earth Review, Winter 1993.
[6] H. Moravec, When will computer hardware match the human brain?, Journal of Evolution and Technology 1 (1998). www.jetpress.org.