Feature Interview

Artificial Intelligence: Balancing Euphoria with Reality
Panel Discussion, Capri, Italy, 28 May 1985

Participants:

Bernard Meltzer, CCR-CCE, Ispra, Italy
William Swartout, USC/Information Sciences Institute, Marina del Rey, CA, USA
Luc Steels, Free University of Brussels, Brussels, Belgium
Douglas Lenat, MCC, Austin, TX, USA
Alan R. Bundy, University of Edinburgh, Edinburgh, UK

Bernard Meltzer (Moderator): I'd like to begin the discussion by stating a few issues in Artificial Intelligence we'd like to examine. First, what is the most serious kind of damage the current atmosphere may do to the future development of Artificial Intelligence? The second question, which is closely related, is: taking into account the enormous public interest in AI and the money for AI research currently available from firms and governments, what is likely to be the most useful and interesting development in the subject in the future?

Bill Swartout: I'd like to start out with what I feel are some of the dangers that may arise as we are in the midst of the current high level of enthusiasm about AI. I think probably the greatest danger we face is that there will be expectations about what we can accomplish that we will have no way of actually achieving. I think these problems stem primarily from three sources. First, there is a view that, since we are working on 'intelligence', we can solve all the most difficult problems. Many times I have seen the problems that a group is trying to solve listed on a blackboard. There may be five different types of problems, the first four of which can be solved with fairly conventional techniques and the fifth of which no one in the group has any idea how to solve. Typically, people in managerial positions in this situation will say, 'This is the problem AI might help us solve'. I think it is important, in selecting problems in artificial intelligence, to ensure that we do not get caught up in the general euphoria the public may feel regarding what AI can accomplish. Instead, we should try to limit our aspirations so that we don't wind up disillusioning the public.

Another problem in AI concerns the commitment of too few resources when addressing a specific problem. Particularly in the industrial sector, the danger exists that people will say they want to get into AI, but they only want to get into AI a little, to see how it goes. I think that, if you are going to get involved in AI, you have to be willing to make a significant commitment to try to achieve results. Otherwise the critical mass of people will be lacking, and you won't be able to achieve good results.

Finally, I think some of the problems regarding the results achieved in AI stem from a limited understanding on the part of the general public of what the state of the art is. When we say in AI circles that we are trying to understand natural language, or that we are trying to help physicians perform diagnoses, everyone thinks they understand what this means: they have been to a doctor, they understand natural language, so they know what this is all about. But, unfortunately, they don't realize just how intractable some of these problems are, and they are not acquainted with some of the subtle issues that arise. It is very easy to jump from the description of the problem to the feeling that the problem has in fact been solved. So it is important to point out that the state of the art in AI has reached only a very limited stage.

Regarding the contributions AI is likely to make in the near future, I think it is useful to look at the history, at where contributions in AI have come from. One of the valuable things AI has contributed is really a side-effect or by-product of AI research: the tools we use to develop the large systems we create. AI people have traditionally developed some of the largest and most complex programs in existence. To do that, they have needed very good programming environments. So, for example, time-sharing arose from the desire to make computers available on a broad scale to large numbers of people. The sophisticated programming environments that exist in Interlisp are another example of the by-products of AI research. In the near future, I think we will see some AI expert systems actually being used. We are beginning to see that now. The ACE system, which is being used by AT&T for cable maintenance, is one example. R1, of course, is another excellent example of a system that is actually starting to be used. Those two areas, limited expert systems and enhanced programming environments for developing systems, are to my mind the two areas in which I see AI making major contributions in the near term.

Luc Steels: To address these questions, first from the bright side, I think it is clear that the current enthusiasm for AI comes from three basic sources, and I think all three are justified. First, many people have come to realize that AI is a good motor for the development of information technology. Like other domains that have been motors, it poses challenging problems whose solution has led to new tools, and some of these tools are useful in other domains. This has already been covered, so I won't go into it further. A second source of enthusiasm is directed at expert systems. It may be a bit unclear why people are so enthusiastic about this area, but I think one of the reasons is that expert systems promise a new kind of application, a new kind of problem domain, that could not be handled with classical software engineering techniques. The third source of enthusiasm comes from the idea that maybe, through the results and techniques of AI, we will be better able to understand the mind. This is of more interest to cognitive scientists, philosophers and psychologists, but I think it is worth pointing out that there is a great deal of enthusiasm in those circles for the ideas of AI.

These are all good things, and I think all of these reasons for enthusiasm are justified. But the reality is also that mastering this new information technology takes a lot of time and costs a fair amount of money. That is one problem, particularly in Europe, where money is not as readily available for advanced research or development.

A second reality is that the construction of expert systems takes a great deal of time and expertise. Again, many of the high expectations in this field probably cannot be met in the near future.

A third point of reality is that many aspects of the mind are not really being studied in AI. One example is unconscious reasoning: no one is really working on it. I think people interested in that area of mental activity will be somewhat disappointed that, although a set of tools exists, these provide only limited answers.

The most potentially damaging prospect for AI in the near future is that, if you view AI as an enterprise, it is an enterprise with a very small research department. This research department is not growing, because many of the excellent researchers of the past are now involved in more commercialized projects or in building up industrial laboratories and implementing ideas in industry. Not enough research is being done, and that will in turn cause AI to fail to meet expectations or will weaken AI's future development. I also think that there is a great deal of technological emphasis in AI at the moment, whereas I think we should also view AI as a science of knowledge. If we fail to focus on this aspect of AI, the well will dry up: fundamental new ideas or new thoughts will not emerge.

In general, it seems that the enthusiasm surrounding AI is well founded, but the most damaging prospect for AI in the near future is that this research department will keep shrinking, and that too few new ideas will emerge, or too little exciting new work will be done, to keep interest alive or to satisfy the expectations that have been raised.

Douglas Lenat: I think the most serious danger AI faces will occur when we pick up the paper one day and the headline says that an artificially intelligent air traffic controller has been responsible for the deaths of 310 people. That could permanently cripple AI in this century. On a lesser scale, the unfounded enthusiasm surrounding two-month-wonder expert systems will lead to inevitable disappointment when they fail. This will also cause a backlash which could severely damage the field. A third problem I see is bound up with the term 'knowledge engineering': this implies that we understand what we are doing well enough to call it 'engineering'. As Luc said, it is actually more a science, and may in fact be more like an art or a craft. Calling it engineering can lead to repeated instances of mismanagement.

On the positive side, I see a groundswell of support for AI which is causing money to become available and centralized efforts to be undertaken: ALVEY, ESPRIT, the Japanese effort, and MCC in the United States are enabling, for the first time, very large, very long-term projects to be embarked upon, which I think is one of the things that AI has lacked throughout its history. Most of our funding has been very short-term and very mission-oriented, or else work has gone almost completely unfunded.

The other part of the issues presented for discussion concerned the payoff for the world, not just for AI, and I think my answer to that will perhaps be a little surprising. I believe that the fundamental payoff is a kind of amplification and enhancement of thinking. My own personal feeling is that the most revolutionizing effect of AI in the next twenty years is going to be in the area of entertainment rather than anything else. By the end of this century we will have radically different ways of spending a third or a half of our waking lives than we do right now. Just as television had a dramatic impact, I think that the possibility for AI to involve people actively in entertainment is staggering.

Alan Bundy: Most of the issues that I think would be damaging to AI have been discussed, but I would like to return to the tendency to see AI as a panacea for all ills. Until about a year ago, I used to get regular telephone calls from people who told me they had a problem which had been unsolved for the last hundred years, but they thought expert systems could help. Then they wanted me to tell them how. That seems to have stopped now, and I don't know whether people just got fed up asking me or whether this is a sign of a growing maturity in expectations. I suspect the latter. In my contacts with people in UK industry, for instance, I sense a growing maturity in expectations of what expert systems technology can do for them.

Now for the benefits of AI: I think the major contribution of AI has been in giving us a whole range of programming techniques to add to the existing techniques of computing. What characterizes this new range of techniques is, first, the ability to represent knowledge, that is, qualitative information and uncertain information, as opposed to the precise algorithmic information that it was traditionally possible to represent. Of course, a great deal of the knowledge we have as human beings is of this more unstructured kind. Therefore the range of possible applications for computer programs has increased tremendously. It is perhaps too early to say just how much it has increased, but some people suspect that the new techniques will come to dominate computer science.

A second characteristic of this new range of techniques is that they are very flexible; for instance, rule-based systems can be very modular and easy to update.
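To make the modularity point concrete, here is a minimal sketch (with invented rules about a stalling car, not any system discussed in this panel) of a forward-chaining rule interpreter in Python: each rule is a self-contained if-then unit, so updating the system's expertise means adding or editing individual rules rather than restructuring the program.

```python
# Minimal forward-chaining rule interpreter: facts are strings, and each
# rule is an independent if-then unit that can be added or removed without
# touching the rest of the system. Rules and facts here are hypothetical.

def rule(conditions, conclusion):
    """Package a rule as (list of required facts, fact to assert)."""
    return {"if": conditions, "then": conclusion}

RULES = [
    rule(["engine cranks", "engine does not start"], "suspect fuel or ignition"),
    rule(["suspect fuel or ignition", "fuel gauge reads empty"], "refuel the car"),
    rule(["suspect fuel or ignition", "no spark at plugs"], "check ignition coil"),
]

def forward_chain(facts, rules):
    """Fire any rule whose conditions hold until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for r in rules:
            if all(c in facts for c in r["if"]) and r["then"] not in facts:
                facts.add(r["then"])
                changed = True
    return facts

print(forward_chain(["engine cranks", "engine does not start", "no spark at plugs"], RULES))
# Updating the system's expertise is a matter of appending another rule to RULES.
```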

A third characteristic, particularly of expert systems, is the ability to provide explanations of their reasoning. I think this may be one of the most significant developments of all. One of my colleagues in the UK likes to tell a story about medical diagnosis systems. He points out that Bayesian-based medical diagnosis systems have been around for a long time and are actually quite successful in their diagnoses. Yet they are not widely used among physicians. His explanation for this is that the physician ultimately has to take responsibility for the decisions these systems make, and unless the basis upon which the diagnosis was made is explained to the physician, he or she is not capable of taking responsibility for that decision. So an explanation-based expert system offers the possibility of explaining its information processing to the user in such a way that he or she is capable of taking responsibility for it.
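As a rough illustration of both themes, a Bayesian diagnosis and an explanation of how it was reached, the sketch below uses invented prior and likelihood figures; it is not any of the systems mentioned here, only a toy showing how an explanation facility can expose the reasoning behind a recommendation so the user can decide whether to accept responsibility for it.

```python
# Toy Bayesian update with an explanation trace. The probabilities are
# invented for illustration and do not come from any real diagnosis system.

def diagnose(prior, sensitivity, false_positive_rate):
    """Return P(disease | positive test) and a human-readable explanation."""
    p_pos = sensitivity * prior + false_positive_rate * (1.0 - prior)
    posterior = (sensitivity * prior) / p_pos
    explanation = [
        f"Prior probability of the disease: {prior:.3f}",
        f"P(positive test | disease) = {sensitivity:.3f}",
        f"P(positive test | no disease) = {false_positive_rate:.3f}",
        f"Overall P(positive test) = {p_pos:.3f}",
        f"Posterior P(disease | positive test) = {posterior:.3f} by Bayes' rule",
    ]
    return posterior, explanation

posterior, why = diagnose(prior=0.01, sensitivity=0.95, false_positive_rate=0.05)
print("Recommendation:", "investigate further" if posterior > 0.1 else "disease unlikely")
for line in why:
    print(" -", line)
```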


Incidentally, I think this points to the answer to the problem Doug raised about the aircraft system that is responsible for hundreds of deaths. I think for many of these systems we cannot forgo human responsibility. Humans must remain in control and take responsibility for any decision taken. What will make that possible is automatic explanation of the system's reasoning. I don't think that existing explanation facilities in expert systems are at all adequate; instead they tend to be wallpaper systems. That is, they give you a huge amount of information which is completely indigestible. Still, this is the beginning of an explanation system. I think a great deal of research needs to be done on explanation.

It is also very important to think about how the new programming techniques could be integrated with existing ones. The importance of hybrid systems has been discussed a great deal. One way of creating them is to provide intelligent front-ends, AI-based interfaces to existing pieces of software, which allow a much wider range of users to use that system.

Bernard Meltzer: I would like to extend somewhat the prediction made by Doug Lenat about what he considers perhaps the most important medium-term positive effect of artificial intelligence. He remarked that it will cause a revolution in our entertainment systems. I think one may take this a bit further. My view is that developments in AI, combined with developments in telecommunications, are going to humanize the infrastructure of our industrial society. Think, for example, of a motor car which suddenly stalls. You still have to get out and open it up, but imagine being able to press a button and have it explain to you what is wrong and what needs doing. Or think of almost any large city today: hundreds of thousands of people get entangled in impossible traffic jams to get to work every day. Telecommunications systems, and applications of AI in manufacture, in office work, in administration, and so on, could change that. A large number of people in the city won't need to get into their cars in the morning to go to work. They can sit in a comfortable sitting room or office with either a teletype or a display in front of them and do their work there. These are only two examples. There are many more which demonstrate the possibility of a progressive humanization of the whole infrastructure of our industrial society.

Are there any other comments that any of the panelists want to make?

Luc Steels: I would like to comment on the fear that we will one day read a headline saying that an AI program has killed 300 people. It is very important to ask why this could happen. One of the reasons is that people ascribe human mental qualities to programs. But perhaps by the time there really is an AI assistant doing air traffic control, there will have been sufficient growth in the relationship of people towards programs. Perhaps this will have been altered by the availability of personal computers, and by the fact that many people now learn how to deal with programs in school, so that they no longer accept 'the computer failed' as an explanation. When the telephone company tells me that my bill is not correct because of the telephone company's computer, I do not accept that, personally. We can hope for a growing sophistication of people, in the same way that they have grown more sophisticated towards other equipment: trains, for example, used to frighten people a long time ago. Maybe we should help that process by calling AI programs 'knowledge amplifiers' or something like that, rather than, say, 'experts'.

Douglas Lenat: I would like to say that I hope Luc is right. But I think, for the next several years anyway, it is possible that a headline like that, which would sell newspapers, could appear. People still have a kind of Frankenstein fascination, a mix of fear, awe and misunderstanding of computers. I agree with Luc that the way to combat this fascination is education, and what we may be able to expect from the next generation when it grows up. But, unfortunately, expert systems which will aid air traffic controllers are not something of the 21st century; they may be something of the early 1990s or even the late 1980s.

Alan Bundy: I think we can do more than educate the public, although I think that is also important. We can also help in the design of the systems. Perhaps the suggestion I am going to make is not appropriate to air traffic control systems, because the decisions have to be made in a hurry. But other systems, like medical diagnosis systems or economic forecasting systems, need not be black boxes which come up with magic answers. Instead, we should build more exploratory systems, not just by providing better explanation facilities, but also by giving people a range of answers and encouraging them to explore a different series of inputs, for instance, or to explore different scenarios. They can try different possibilities rather than thinking that one answer is a magic answer which must be right because the computer said so. In that way we can encourage people to take much more responsibility for the systems they have.
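A minimal sketch of this exploratory style, assuming a deliberately simple stand-in forecasting function and made-up scenarios (none of which come from the panel): rather than one 'magic answer', the user sees the spread of outcomes over the inputs they chose to vary.

```python
# Sketch of an exploratory interface: run a hypothetical, deliberately crude
# forecasting model over several user-chosen scenarios and report the range
# of answers rather than a single number.

def forecast_growth(inflation, interest_rate, investment):
    """Stand-in model with made-up coefficients; a real system would be far richer."""
    return 3.0 - 0.2 * inflation - 0.3 * interest_rate + 0.5 * investment

scenarios = {
    "baseline":       {"inflation": 4.0, "interest_rate": 5.0, "investment": 2.0},
    "high inflation": {"inflation": 8.0, "interest_rate": 5.0, "investment": 2.0},
    "cheap credit":   {"inflation": 4.0, "interest_rate": 2.0, "investment": 3.0},
}

results = {name: forecast_growth(**inputs) for name, inputs in scenarios.items()}
for name, growth in results.items():
    print(f"{name:>14}: projected growth {growth:+.1f}%")
print(f"Range across scenarios: {min(results.values()):+.1f}% to {max(results.values()):+.1f}%")
```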

Douglas Lenat: That just raises one more point, which is that a safe, indirect way to help in even a dangerous industry like air traffic control or nuclear power plant safety is to build expert systems that help train the people who have to work in those jobs and make those decisions. If we can do a better overall job of training those people, then we will still be improving safety and performance, without the kind of risk in the next few years that I am worried about.
