Futures 35 (2003) 787–791 www.elsevier.com/locate/futures

When humans outsmart themselves

Rakesh Kapoor ∗

Alternative Futures, B-177 East of Kailash, New Delhi 110 065, India

Nick Bostrom reminded me of a well-known Panchatantra story composed 1600 years ago.

Three young learned Brahmins set out for a foreign land to seek riches for the knowledge and the sciences they have mastered. A fourth friend, named Subuddhi (one with good sense), who knows no powerful science, although he has ample common sense, also accompanies them. While passing through the forest, they come across the bones of a dead lion. The three learned Brahmins see this as a great opportunity to test their learning. The first one, a master of physiology, puts the bones back together into a skeleton. The second learned Brahmin uses skilful incantations to bring back the skin, flesh and the blood. The third one knows how to breathe life back into the animal, and begins with the task.

Alarmed, the layman Subuddhi says: “But the lion will kill us all if you put life into it.”

“Stand aside, you fool,” said the third learned Brahmin. “I am not going to let my science be fruitless.”

“Wait a minute, then, while I climb a tree and watch the power of your knowledge from afar,” Subuddhi said, while his three learned friends made fun of him.

∗ Tel.: +91-11-2684-7668; fax: +91-11-2683-6838. E-mail address: [email protected] (R. Kapoor).

0016-3287/03/$ - see front matter © 2003 Elsevier Science Ltd. All rights reserved. doi:10.1016/S0016-3287(03)00030-2

The skilful Brahmin proceeded to revive, with the power of his knowledge, the dead lion, who promptly devoured the three learned Brahmins!

This story reminds us of two simple yet profound truths that we have realised in modern times, after the horrific wars, violence, and industrial accidents of the twentieth century. One, science, applied carelessly, can lead to unmitigated human disaster (nuclear weapons, Bhopal, Chernobyl, etc.). Two, Western science does not accept any limits to its aggressive quest. It is antithetical to the very notion of limits — any ‘limit’ is only another frontier to be explored and conquered, another mystery to be prised open.

We are thankful to Bostrom for having laid out so starkly the possibilities, in the near future, for science to conquer the next frontier: artificial intelligence, or human-level machine intelligence. There can be no dispute¹ with either of his propositions: machines with greater-than-human intelligence may well be built in the next 50 years; and the creation of such artificially intelligent beings will have wide-ranging consequences for all aspects of society, science, technology, and the environment.

The likelihood of creating AI within the next 50 years, and its deep impacts on science and society when it happens, are both assertions that most futurists will accept.

What, then, is the issue for debate? The questions that Bostrom does not pose explicitly, although we may presume his stand as a co-founder of the World Transhumanist Association, are these: Is the development of beyond-human-level machine intelligence desirable? Is it the right direction to take? Can the process be guided, influenced, or controlled in any way? Should we be hastening this process or trying to hinder it? Why do we need AI that will go beyond human-level intelligence? And what will be the nature of the social, political, and ethical impacts of the development? An idea of the likely impacts will help us decide about the desirability or otherwise of the development.

Before we deal with these questions, however, another fundamental issue needs to be addressed. Is the development of beyond-human-level machine intelligence inevitable? Is it, for instance, part of the evolutionary journey that is bound to take its course, irrespective of what we human beings do? If the deterministic evolutionary argument for strong AI is to be accepted, then we may as well take the argument to its logical conclusion and abandon all discussion, which would be pointless in the circumstances. Clearly, the argument of Ray Kurzweil [1], for instance, that computers will supersede humans as the next step in evolution, cannot be accepted, as it would mean the abandonment of all responsibility.

On the other hand, Bostrom’s argument that the development of beyond-human-level AI is a strong possibility can be accepted. It leaves the issue open for a range of political, ethical, and social choices, contentious discussions, and conscious action.

¹ Some scholars and futurists may question the time frame, or be skeptical about the emergence of AI at all. But in any case, Bostrom is suggesting only that the possibility cannot be dismissed.

According to Bostrom, the four immediate implications of strong AI are: the existence of artificial minds in great numbers; the quick movement from human-level AI to greater-than-human-level AI; faster technological progress in various fields, including designing the next generation of machine intelligences, which may lead to super-intelligence; and the creation of general-purpose intelligent machines, which would be capable of independent initiative and would thus be more appropriately viewed as persons rather than machines.

But what of the impacts on human society, politics, ethics? Bostrom mentions “wide-ranging consequences for almost all the social, political, economic, commercial, technological, scientific, and environmental issues that humanity will confront” but does not discuss these in any detail, other than the discussion of the AI-related technological advancements themselves.

The most important issue in the push for AI, and in its impact, is democratic control. Who is taking the decisions about AI? Who is investing in it? And how much? Are the developments in the hands of a few MNCs, governments, and well-funded research centres? Who will benefit from AI? The US Defence Advanced Research Projects Agency, for instance, is funding projects to make thinking machines [2]. Will AI help us deal with issues that affect the not-so-well-off half of the world population? Will it help us to overcome poverty and hunger (1.2 billion people living on less than $1 a day [3]), climate change, the destruction of natural ecosystems, and the problems of war, terrorism, and hatred?

AI, combined with the tools of genetic engineering and uncontrolled eugenics, could in all probability lead to new strains of genetically superior human beings, which would have dangerous consequences for human rights, equality, and democracy. This new oligarchy will be a grave global threat to democracy.² In fact, although transhumanism — which includes AI, mind uploading, nanotechnology, and genetic engineering as central tenets — claims to uphold freedom of choice and free will as one of its central values [6], it can equally be read as a political movement for oligarchy, driven by the affluent, the powerful, and the arrogant knowledge elite. Well over a billion human beings do not have the “freedom of choice” to have their daily bread and butter. Some two billion people do not have the freedom of choice to use low-cost essential medicines, such as penicillin, that were developed decades ago [7]. Are these and numerous other disgraceful ‘un-freedoms’ due to any technological constraints? And will AI perhaps help us overcome those constraints?

Second, AI, like the rest of science and technology, is valueless and soulless. It has no link with any human moral or spiritual values, so deeply held and regarded by the different religious communities of the world. In fact, AI and its advocates violate the limits and the notions that the religious and the spiritual-minded hold as sacred. Kurzweil’s claim of “spiritual” machines [1] is specious, and trivializes and impoverishes the concept, as Dembski argues [8]. The advocates of AI cannot simultaneously play God and want their creations to seek another God!

² See, for instance, Van Doren [4] and Kapoor [5].

Third, the push for AI is a reckless, irresponsible move, driven by the fascination for technology or by the urge to play God. It is inspired by the “religion of technology”, which worships speed, magical power and technology as an end in itself, without being governed by any higher values.³ To describe how we are moving towards AI and the “post-human” future, one can think of the analogy of a train moving at 200 miles an hour with no mechanism to apply the brakes, govern the speed or stop the train. Destination: unknown. What matters is only the thrill of the ride! What we need is a mechanism to stop the train, to govern its speed, and, indeed, to decide where we want to head, as well as to enjoy the beauty of the landscape we are in!

The techno-fix, whether of AI, genetic engineering or nanotechnology, cannot help us, or “evolution”, to achieve perfection. Just as we have to be first responsible for ourselves before being responsible for others, or we have to move through adulthood before we seek Godhood [10], post-human perfection will only be an illusion without first seeking to achieve human perfection. And the road to that lies through consciousness, wisdom,⁴ love, beauty, peace and compassion, not through “artificial intelligence”, the magical power and strength of technology, and the hubris of modern man.

We hope that, unlike the learned Brahmins of the Panchatantra story, the learned scientists and philosophers of today will be more than smart and skilful, and will learn from the collected wisdom of human experience, so that they do not outsmart themselves, and us all.

³ The historian David F. Noble has provided us with the insight, however, that religious themes and preoccupations such as transcendence and salvation strongly influence and underpin the technological quest for AI, machine-based immortality, space exploration and so on [9].

⁴ In the words of the Indian writer Krishna Chaitanya, “The fruit of knowledge is poison only when it is not transmuted by us into the pabulum of wisdom” [11].

References

[1] R. Kurzweil, The age of spiritual machines: when computers exceed human intelligence, Viking, New York, 1999.
[2] Transhumanity magazine, DARPA wants thinking machines, www.transhumanism.com.
[3] The World Bank, World Development Report 2000/2001: Attacking Poverty, OUP, New York, 2001.
[4] C. Van Doren, A history of knowledge: past, present and future, Ballantine Books, New York, 1992.
[5] R. Kapoor, The techno-brahmins and the futures of communication, in: R. Inayatullah, S. Leggett (Eds.), Transforming communication: technology, sustainability and future generations, Praeger, Westport, 2002.
[6] M. Treder, What is a transhumanist, www.transhumanism.com, Sep 2002.
[7] UNDP, Human Development Report 2001, OUP, New York, 2001.
[8] W.A. Dembski, Kurzweil’s impoverished spirituality, in: Are we spiritual machines: Ray Kurzweil vs. the critics of strong AI, Discovery Institute Press, 2002.


[9] D.F. Noble, The religion of technology: the divinity of man and the spirit of invention, Penguin, New York, 1999.
[10] M. Scott Peck, The road less travelled: a new psychology of love, traditional values and spiritual growth, Rider, London, 1978.
[11] Krishna Chaitanya, The earth as sacred environs, IIC Quarterly 19 (1 & 2) (1992) 35–48.