
10 Intelligent systems

The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.

Edsger Dijkstra, early programmer

Overview
AI is an abbreviation for artificial intelligence – a field of study that attempts to emulate human intelligence.

To many people AI suggests that an attempt is being made to enable machines to think. This is the goal of some researchers. However, the main thrust of AI research is the development of useful tools, and of the HCI, so that computers are easier to use and hence behave in a more intelligible way. Typical AI tasks involve activities such as responding to a user in natural English, recognising faces, or assisting in finding answers.

In this unit we will investigate:

• what is intelligence
• areas of AI
• expert systems
• the responsibilities in employing intelligent systems
• artificial neural networks
• robotics
• the possibility of artificial intelligence.

What is intelligence?
When presented with a problem, some non-human creatures can come to a solution. Look at the following examples taken from studies into animal behaviour:

a chimpanzee was placed in an area where there was a banana hanging well above her head; after staring longingly at the banana for a while the chimp began looking round; following some experimentation she found a box to stand on and a stick to hit the banana down with


in the wild, ravens have been observed to get grubs from inside of a log by using a twig held in their beaks to “spear” the grubs out of the hole

when a honey bee finds nectar she will return to the hive and dance in a series of patterns that indicate the distance and direction of the flowers; the other bees read this dance and fly directly to the flowers

dolphins, when shown a range of actions and gestures by a human, are able to mimic them; this remarkable achievement shows not only an awareness of what is required, but also the ability to translate the movements onto the corresponding body parts (e.g. waggle left foot copied as waggle left tail fin).

Are these demonstrations of intelligence? The chimp solved a problem in a way not seen before, the raven used a tool, the bees understood symbols, and the dolphin translated from one form into another. They appear to involve the things we associate with intelligence, but are the animals intelligent?

Before we can answer this question we must determine exactly what we do mean by intelligence. This will be a first step in determining what artificial intelligence is.

Activity 10.1 – Who's a clever boy then?
1. Would you classify each of the animal actions described above as exhibiting intelligence?

In each case give supporting reasons to say why or why not.

2. a By yourself, or as part of a small group, list the things that suggest intelligence to you; what are the things that humans do that are intelligent?

b From this list develop your own definition of intelligence.
c Find definitions of intelligence in dictionaries or on-line and use them to amend your definition.

3. a What is the difference between performing intelligently and actually being intelligent?

b Computer chess games can now beat all but the very best human players. Are the computers more intelligent than most people? Why or why not?

c The AI researcher Anatol Holt once said:

A brilliant chess move, while the room is filling with smoke because the house is burning down, does not show intelligence. If the capacity for brilliant chess moves without regard to life circumstances deserves a name, I would naturally call it “artificial intelligence”.

Explain the irony in Holt’s statement.

4. Howard Gardner in “Frames of Mind: The Theory of Multiple Intelligences” argued that every individual has seven different types of intelligence:

• verbal/linguistic – being able to handle written and spoken language with ease
• visual/spatial – having a good sense of the shapes and spacing of objects
• body/kinaesthetic – being in good control of the coordination of your body
• musical/rhythmical – having a facility to work with music, sounds and beat
• interpersonal – being able to relate well to others
• intrapersonal – having a good sense of self
• logical/mathematical – able to handle abstract and mathematical values.

He later extended these categories to include naturalistic – the relating of information to one's natural surroundings.

On the basis of Gardner's theory a person could be highly intelligent mathematically but unintelligent in interpersonal relationships, or good at music but poor at sport.

a Rate your own "intelligence" in terms of the eight categories (high, average, low).

b Gardner says “Intelligence is a capacity”. In terms of his theory what do you think this might mean?

c It has been suggested that cognitive/information processing (the ability to process information and relate it in memory) be added to the list. Is this part of intelligence?

d Gardner is considering adding existential (the ability to contemplate phenomena or questions beyond the senses) and moral intelligence to his categories. Do these fit in with the current categories?

e Gardner's theory is not widely accepted nowadays. Do you think the theory has some validity? Give reasons to support your point of view.

5. Faced with the prospect of making a plane flight at night in a wild storm would you be more confident if a human pilot was in control or a fully automated computerised system? Consider the scenario and say which you would prefer, giving reasons.

What is artificial intelligence?
In the last exercise you formed a definition of intelligence. Your definition may have suggested that human intelligence consists of aspects of the following four components:

• ways of thinking (strategies of thought)
• the ability to collect, store, organise and access knowledge
• seeing interrelationships between facts and then drawing inferences from them
• the ability to manipulate symbols (including sounds and written symbols).

Do you agree, or do you think some other aspects should be added?

If this is what human intelligence is, then what is AI? Marvin Minsky, one of the “fathers” of AI, suggested the following definition:

(Artificial intelligence is)... the science of making machines do things that would require intelligence if they were done by men.

This definition is, however, circular (as well as sexist) because it defines artificial intelligence by referring back to intelligence. A better definition, from Sowa, might be:

Artificial intelligence is the study of knowledge representations and their use in language, reasoning, learning and problem solving.

This refers to an important aspect of AI, the need for knowledge representation, a way of modelling what goes into the human view of the world. Despite this, it is not a good explanation of what intelligence involves and so tells us little about what an artificial intelligence might be.

Looking back on what is involved in human intelligence perhaps the following could be given as a definition of artificial intelligence:

AI is the bringing together, in a human artefact, of the ability to think logically, to manipulate symbols, to communicate, and to collect, store and apply knowledge in a meaningful way.

The question now arises whether AI is possible – can something made by a human be intelligent? Clever programming and a sophisticated interface can make a computer appear intelligent, but is there a chance of the machine being intelligent in itself?

A group of AI theorists suggest electronic computers will eventually be able to do everything the human mind can do. This group, described as supporters of strong AI or as functionalists, suggests that when computers reach the level of complexity of the brain they will be intelligent and even conscious.

Another group believe there is something inherent in the mind that will never be achieved by computers. We will return to this discussion later in this unit.

Areas of study in AI
The definition given above suggests what an artificial intelligence might involve, but it does not indicate the aspects of research underway. Perhaps a better way to understand AI is to look at the areas that are being investigated and what the goals in these areas are.

As stated before, AI is a very diverse field, but investigations include research into the following.

Expert Systems
An expert system is a computer program that mimics the decisions of a human expert in some area. The program asks a series of questions and then, by following logic copied from the human expert, comes to a supported conclusion. An example is a doctor's intelligent assistant. After listing the symptoms of a patient the doctor will be told not only the likely ailment, but also a suggested form of treatment. Ultimately it is hoped these systems will be able to be used where the human expert is not available (e.g. in third world countries).

Natural Language
This involves attempts to get the computer to talk to us. If we can develop a program that enables us to talk freely to a computer, to give it directions or ask it questions, and then have it reply in our natural language (English), we will have gone a long way towards breaking down the barrier to human–machine communication.

Robotics
It has long been a dream of people to be able to produce a mechanical slave that will tirelessly do our bidding. We have actually reached this level of technology in many areas (e.g. car manufacture) but we still need to solve some key problems in control, programming and spatial awareness with mechanical devices.


Emulating Human Attributes
In addition to general studies in robotics, many attempts have been made to copy human capabilities such as vision, voice recognition and speech synthesis, with varying degrees of success.

Game Playing and Problem Solving
This area was widely investigated in the early days of AI research in the belief that the lessons learned from teaching computers the rules of games such as chess and checkers would lead to an understanding of human thought processes. While very powerful programs were developed (e.g. a computer has been world backgammon champion) this avenue proved to be a dead end. The logic of how we humans play games has little to do with everyday thinking.

Psychology
By attempting to understand how the brain works, some researchers have attempted to model the brain electronically. More importantly, others have used the investigation of how computers can be made to function to improve our understanding of how the human brain operates.

AI Languages
Early computer languages were machine-oriented (machine code and assembly language). Human-oriented languages (Visual Basic, Pascal, C++) are easier to understand but still do not model the way we think. Computer languages such as Prolog are an attempt to allow users to program their thoughts and reasoning about a problem directly into the computer.

Artificial Neural Networks
ANNs are an area of study that has undergone a recent revival. The ability of ANNs to mimic the operation of the human brain has suggested new solutions to old problems in AI.

Many of the areas of AI research have now been investigated for over 50 years without much success (e.g. pattern recognition). Other areas such as robotics, expert systems, and ANNs are now being widely used. We will shortly look at these three areas in particular.

One point that has become very clear from the investigations is that the things people find easy to do are difficult to computerise (recognise faces, speak etc.) while the things we find difficult (calculus, massively repetitive tasks) computers accomplish with ease.

Activity 10.2 – Get smart
1. a What advantages would computers that could communicate in natural language have?

b Suggest some situations where they would have a distinct advantage over conventional computers.

2. a What is an Expert System?

b Give examples of some areas in which expert systems could be used.

c Give an example of an area in which an expert system should not be used. Support this with reasons.


3. Pick one form of emulating human activities, such as machine vision, voice recognition, or speech synthesis, and find out how successful attempts to achieve it have been.

4. Report on an example of where a game playing computer has beaten a human.

5. Why do you think some people might oppose the development of AI?

Expert systems
A knowledge based system (KBS) is a specialised information system that is used to support human decision making, learning and actions. There are KBSs that can help a doctor diagnose a disease, assist technicians in detecting faults in electrical networks, generate geological maps for mineral exploration, help managers with planning and scheduling, and serve many other applications.

A KBS is different from more general information systems in that it cooperates with human users to access a knowledge base. A knowledge base (KB) is not just data but is a collection of knowledge. This might be in the form of a set of rules, a series of articles, a list of facts, diagrams or pictures, or any other way knowledge can be represented. This KB will have been organised into a structure so that the knowledge is accessible, perhaps through a search engine or some other form of indexing.

A KBS does not passively reply to enquiries or supply preset reports, but rather works in collaboration with the human user to arrive at the required conclusion. While an information system handles data and uses repetitive algorithmic procedures to access the facts by a deductive method, the KBS handles knowledge and uses rules or heuristics to access the knowledge base by an inferential process.

In this section we will investigate one form of KBS, a rule based expert system.

An expert system is a computer program that acts like a human expert. An expert is someone with specialised knowledge, skill or experience in a particular area, who can give information or advice, or solve problems. Experts we are familiar with include car mechanics, doctors, plumbers, accountants, and financial planners.

Usually the expert system operates by asking the user a series of questions. From the answers the system asks more questions until the computer can reach a conclusion or provide an answer to a problem. Typical expert systems work in financial services, medicine, accounting, process control, production and human resources.

Rule-based expert systems such as these are one of the most commercially successful applications of AI research.

The expert
A family doctor is an expert we are all familiar with.

After years of study at university, internship at hospitals, and finally further years of experience in general practice, a GP can quickly suggest why you might be unwell. She can then prescribe some treatment or medication to help you. When we visit her she will not only ask questions about symptoms, but will also make some tests (e.g. take your temperature) and closely observe behaviour and actions (e.g. a tremor or paleness). Using her knowledge linked to the experience of having seen many other people with just these symptoms she will select or reject facts to reach a conclusion as to what is wrong. She will then determine a suggested treatment to correct the problem.

Expert systems work in similar ways. The system asks a series of questions that the user answers. From each answer it narrows down its search until it can reach a definite conclusion. It does this by working from its knowledge base linked to ways to search through and reason from this knowledge base.

When an expert system reports its finding it will usually support the conclusion by showing how this was reached.

Architecture
An expert system consists of a knowledge base of expertise and an inference engine, a way of getting at the knowledge. Both of these are usually accessed through an expert system shell.

The expert system model

The knowledge base (KB) is the expertise written in an accessible way, usually as a set of rules.

As an example of this look at the following rule-based conclusion from the MYCIN blood-disease-diagnosis ES:

if the infection is primary bacteraemia,
and the site of the culture is one of the sterile sites,
and the suspected portal of entry is the gastrointestinal tract;
then the identity of the organism is bacteroides.

In addition, a KB may consist not only of facts but also of heuristics, the accumulated experience used by the original expert to reach his or her conclusion. An example of such heuristics in a financial ES might be the incorporation of factors such as credit rating (how well a client has repaid previous debts) into whether a loan can be granted.
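
As an aside, a rule of this kind is easy to picture as data. The following Python sketch is purely illustrative (MYCIN itself was written in Lisp and attaches certainty factors to its rules): it stores the rule above as a list of conditions plus a conclusion, and fires any rule whose conditions are all present in the facts gathered from the user's answers.

# Minimal, illustrative sketch of a rule-based knowledge base.
# Each rule lists the conditions that must all hold and the conclusion to draw.
rules = [
    {
        "if": [
            "the infection is primary bacteraemia",
            "the site of the culture is one of the sterile sites",
            "the suspected portal of entry is the gastrointestinal tract",
        ],
        "then": "the identity of the organism is bacteroides",
    },
]

# Facts collected from the user's answers to the system's questions.
facts = {
    "the infection is primary bacteraemia",
    "the site of the culture is one of the sterile sites",
    "the suspected portal of entry is the gastrointestinal tract",
}

def infer(rules, facts):
    """Fire every rule whose conditions are all present in the known facts."""
    return [rule["then"] for rule in rules if all(c in facts for c in rule["if"])]

print(infer(rules, facts))   # ['the identity of the organism is bacteroides']

A real inference engine would also feed new conclusions back into the set of facts and attach certainty factors to each rule, as described below.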

The inference engine is the rule interpreter used to access the information in the KB. Because it is designed as a separate module, a given inference engine can be used to work with different KBs. Not all inference engines work on a yes/no result: the answers given by users of the expert system can add weight by giving the user a range of values to choose from (e.g. 1. strong, 2. medium, 3. weak, 4. very weak, etc.). Some inference engines also have the ability to apply a confidence factor (also called a certainty factor). In this way a user input can be given a percentage possibility of being relevant. To continue the financial example above, the fact that a client has repaid previous loans on time 90% of the time can be used to influence the chance of the loan being granted. A variation of confidence/certainty factors is the use of fuzzy logic.

An important part of the inference engine is the validation module. This will explain how the stated conclusion has been reached so that the user can check the reasoning and facts used by the system. Without a validation module to support the conclusions few people would accept the unsupported suggestions of a machine. If for example a financial expert system told an agent to sell all of his stock in a valuable commodity he would require some very good reasons before accepting the necessity of this course of action.

The shell is the interface and development environment for building and maintaining knowledge-based applications. It links the inference engine and validation module to the set of input rules in the KB. In HCI terms the shell provides a user-friendly, usually graphical, interface so that an expert system can be programmed and used. There are many shell programs available. Examples of shells include CLIPS, JESS, d3web, and eGanges.

Developing an expert system
The most difficult and time-consuming step in the development process is obtaining the expertise from the expert. The difficulty with knowledge acquisition lies in the problem many experts have in articulating (putting into words) their expertise. A mechanic for example may sense that something is wrong by listening to the sound of an engine. This feeling will be based on the mechanic's knowledge, methods and experience. Collectively these are known as heuristics. To express the heuristics in a form that is suitable for communication to other people and to install into a computer is a highly specialised task.

This task is carried out by a knowledge engineer. Through interviews, demonstrations and detailed discussion a knowledge engineer will translate the expert’s heuristics into a representation that is suitable for further use.

The time spent on this stage slows down the whole process of development of the expert system and is known as the knowledge acquisition bottleneck. (A bottleneck is a narrowing of a passageway that slows down the progress of whatever is travelling through it.) In developing an ES the knowledge acquisition is a slow, painstaking process that cannot be hurried and so acts as a choke point that holds up the rest of the process.

It is during the knowledge acquisition stage that the development of an expert system differs greatly from general software development. The knowledge engineer in addition to overcoming the acquisition bottleneck must also ensure that:

• the information collected does not vary over time, e.g. with changing conditions
• the expertise collected is reliable
• the expert has made sound judgements about the relevant data
• the size of the knowledge base is reasonable.


Activity 10.3 – Who knows?
1. a What is an expert system?

b Give three examples of commercial expert systems.

c Suggest one area of human expertise that could not be included in a computerised expert system.

2. a Explain what the knowledge acquisition bottleneck is in your own words.

b The process of knowledge acquisition has been likened to mining. To extract copper for example, geologists must first select a suitable area to mine. Once the site is chosen bulldozers move in and strip off the overburden, the rock and soil above the seam. The mineral is then dug up and sent to a refinery where the metal is extracted from the ore. The whole process is difficult and time consuming and requires the use of specialist ability.

In what ways is the knowledge acquisition step of developing an expert system similar to the mining process? In your answer you may like to draw analogies with the knowledge acquisition process in developing a specific expert system, e.g. an E.S. to solve car faults.

3. Give examples of knowledge that might:

a vary over time
b be unreliable
c involve judgements
d be too large.

4. a Explain using a diagram how the various parts of an expert system fit together.

b Why is the validation module an important part of an expert system? Give an example of your own to illustrate.

5. Expert systems apply the KB they are based on rigorously, fairly and predictably, but are ineffective when they come up against something outside of their domain. As an example the very effective MYCIN attempts to interpret everything (headache, broken bones, pregnancy, etc.) in terms of bacterial infection. It has been found that the best way to employ expert systems is in close cooperation with a person in order to take advantage of the strengths of each.

a What factors make expert systems effective?

b Which human factors would balance the tendency of expert systems to be too literal and narrow?

c What would your reaction be to having to work with one of these “intelligent assistants”? Would you welcome the opportunities or might you feel threatened in some way?

6. The advantage a doctor has over an expert system is that he or she does not just call on knowledge and experience, but also on intuition. This might not be anything specific, but may be based on the sum total of all their past experiences.

What might be some of the types of problems an expert system might not be able to deal with?


7. Expert systems in the future will be more powerful than those in use today. They will be used for many purposes.

For what tasks should they not be used, and why?

Decisions
One way of representing the logic in an expert system is to show how conclusions are reached using a decision tree.

Say we wished to prepare an expert system to help foreign visitors recognise Australian coins. We might represent the questions, answers and conclusions in the following way:

Coin recognition decision tree

Depending on the identification of the size of the coin, questions are asked about its colour or shape. Each answer will lead down the decision tree until a conclusion is found at the end.

The same information can also be represented on a decision matrix:

Size    Colour  Shape      Coin
small   silver  round      5c
medium  silver  round      10c
large   silver  round      20c
large   silver  not round  50c
medium  gold    round      $1
small   gold    round      $2

The three columns on the left are called attributes or requirements to be satisfied. The column on the right is the conclusion.
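
As an illustration only, the same decision matrix can be coded directly as a lookup table. In the Python sketch below the names coin_matrix and identify_coin are made up for the example; each (size, colour, shape) combination maps to its conclusion.

# Illustrative sketch: the coin-recognition decision matrix as a lookup table.
coin_matrix = {
    ("small",  "silver", "round"):     "5c",
    ("medium", "silver", "round"):     "10c",
    ("large",  "silver", "round"):     "20c",
    ("large",  "silver", "not round"): "50c",
    ("medium", "gold",   "round"):     "$1",
    ("small",  "gold",   "round"):     "$2",
}

def identify_coin(size, colour, shape):
    """Return the coin matching the three attributes, or report no match."""
    return coin_matrix.get((size, colour, shape), "no matching coin")

print(identify_coin("large", "silver", "not round"))   # 50c
print(identify_coin("small", "gold", "round"))         # $2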


Activity 10.4 – Decisions, decisions
1. Use this decision matrix to answer the questions that follow.

Solar distance*  Moons  Gas giant  Larger than Earth  Probe**  Rings  Planet
Y                N      N          N                  N        N      Mercury
Y                N      N          N                  Y        N      Venus
N                Y      N          N                  Y        N      Mars
N                Y      Y          Y                  Y        N      Jupiter
N                Y      Y          Y                  Y        Y      Saturn
N                Y      Y          Y                  N        Y      Uranus
N                Y      Y          Y                  N        Y      Neptune
N                Y      N          N                  N        N      Pluto

* Closer to Sun than Earth    ** Visited by space probe

a Is Uranus a gas giant?
b Identify a planet that has moons and rings and has also been visited by a space probe.
c Which two planets are further from the Sun than Earth and are not large planets mostly composed of gas?
d Two of the planets are not "separated" (uniquely identified) by the attributes given. Which are they?
e Is any attribute redundant, i.e. not required to help identify the planets? If so, which? If not, why not?
f Give an example of information on the decision matrix that could change.

2. Develop a way to identify the family relationship a person has to the user, e.g. aunt, nephew, etc. Do this for just three generations.

The tree diagram has been started for you, with branches for generation (previous, same, following), then family line (same, different), then gender (male, female), leading to brother, sister and cousin.


The completed branches of the decision tree show that if someone is of the same generation, in the same family line, then the relationship is the user’s brother if male, or their sister if female. If in the same generation but a different line then the person is their cousin.

Complete the decision tree to include mother, father, aunt and uncle from the previous generation, and daughter, son, nephew or niece from the following generation.

As an alternative present the family relationship as a decision matrix.

3. The All Sports and Social Club has the following rules to determine eligibility for membership.

Full membership of the club is only available to local applicants who are over 18 and who actively participate in a sport of some type, either as an administrator or player.

Local applicants who are not players but still active in the area of sport (e.g. coaches) may become social members.

Junior members must play a sport, while country and interstate applicants must be active in sport.

Represent this information as a tree diagram or decision matrix.

4. Develop a way to help Con the Fruiterer to identify a specific fruit and to suggest a way in which it could be used in a meal, snack or dessert.

To do this, decide on 20 fruits and make a list of them.

Next identify the attributes you will use to help identify the fruit; these will include such things as stone or seeds, edible skin, and so on.

Hint: If you use colour as an attribute make it one of the final attributes. Colours tend to be very specific for fruit, and choosing a colour early will not leave the user with many other choices to make.

Develop the rules and present them either as a decision tree or decision matrix.

5. Prepare the decision matrix or tree for an expert system used to assist the Australian Film Classification Board based on the following guidelines:

videos and movies depicting sexual situations are rated ‘R’; in addition, films that are violent and involve either nudity or adult themes must also be rated ‘R’

other than films indicated as ‘R’ above, any movie or video with either violence in it or with any nudity is rated as ‘M’

if a film has passed the above tests but still explores adult themes or contains coarse language, then it must be classified as ‘PG’

all other films can be rated as ‘G’.

In your response use Y to indicate yes involved, N to represent not involved, and a dash ( – ) to indicate may be involved in determining the classification.

6. Prepare each of the above using an expert system shell.


SEI 10 – Who is responsible?
We will now look at some of the social implications of using a rule-based system.

The use of expert systems brings with it many advantages. These include:

• increased availability – expertise can be as close and as fast as a computer
• reduced cost – the initial costs of developing the system will be overcome by the ability to mass produce the expertise
• reduced danger – systems can be placed in situations that might be hazardous to people
• multiple expertise – the knowledge of many experts can be contained in one system
• consistent response – an expert system will be consistent and reliable and not affected by outside influences such as stress
• handle complex problems – an expert system can ably handle situations in which there are too many parameters for a human expert to cope with
• intelligent tutor – the system can be used to assist a student in developing expertise.

An example of the potential use of an expert system highlights some of these advantages.

In many of the poorer third world countries doctors are scarce. This is a result of both the cost in time and money of training a doctor, and the difficulty of finding one who is prepared to work amid the hardships and danger of these places. The training of a local technician to manage a medical expert system may provide a solution to the problem. While the expert system may not replace a doctor, if used properly it could greatly reduce the load on existing doctors by taking over much of the routine diagnostics and treatment.

Using a partly trained technician in place of an expert raises the issue of responsibility. In delegating authority to a machine, who is responsible if something goes wrong: is it the expert, the knowledge engineer, the manufacturer of the system, the salesperson or the user?

Two other relevant issues are: who owns the expertise in the system, and what are the responsibilities of the operators of the system? To explore these and other social and ethical aspects of expert systems your class could run the following role play.

Activity 10.5 – 3WD expert system role play

Preparation
The following role play is designed to explore some of the social and ethical issues involved with expert systems. As set up there are nine roles; however, several of the roles could be taken by more than one person, or two roles could be combined into one, depending on class size. In addition it is a good idea to have two recorders who document issues raised and who report back to the class at the end of the exercise.

Each actor must read and think about their role before the debate. As with all role plays, each actor must forget their own opinions and biases and project the views of the character they are playing. Once the play is over there will then be the opportunity of expressing opinions, but in a situation where the point of view of others has also been expressed. A sign with the role and (fictitious) name should be prepared and placed in front of each character.


Situation
A group of people have come together to discuss the introduction of the Third World Doctor (3WD) expert system. The solicitor is the key person who will ask firstly the volunteer doctor and then the knowledge engineer to explain the background to the discussion. The solicitor will then raise the issues one by one and invite each of the other participants to state their point of view.

Volunteer Doctor For the last three years you have worked as a volunteer in Somalia with the aid agency Feed Them. During this time you have experienced at first hand not only the horror of the injuries, illnesses and starvation, but also the frustration of having too few doctors for far too many patients. You have returned to Australia to plead for more volunteers, but not many are prepared to give up comfortable practices to work in dangerous situations. The knowledge engineer has approached you for expert knowledge on the situation in Somalia which you have willingly given her. You will introduce the role play by describing your experiences.

Knowledge Engineer For the last three years you have been working with doctors specialising in treatment of patients in depressed areas. Your work has resulted in the development of 3WD that can be used in places like Somalia, the barrios of South America or the slums of India. (It could also be adapted for use in parts of Australia, especially for people who would prefer working with a dispassionate machine than a judgemental doctor.) You believe this is a major step forward in medical treatment because the system will combine the best of computers and people. It will bring together the knowledge of several experts in an easily accessed form, that could be used by a trained local technician. You are keen to see the system introduced but also want to be paid for the time you spent, and the skill you used in developing the system. You will describe the system, its benefits and how it could best be used.

Programmer The knowledge engineer has brought the design for her 3WD expert system to you, and you have developed it into a fully tested and working implementation. This has taken you six months. The system is very user friendly and it is easy to train even novice computer users in its operation. The system will be consistent and reliable and not affected by outside influences such as stress. As with all new technology you realise there will be opposition to it but you believe doctors and the public must move with the times. You are very proud of 3WD but wonder who is going to pay for your time and effort, and also if you will own the copyright on the system.

Marketing Company Owner The programmer and knowledge engineer have brought 3WD to you and you are very excited at its prospects. You can see many sales in third world countries at large profits to yourself. You are not really interested in the humanitarian aspects of the system but you are keen to gain exclusive rights to this system so that you can maximise your income. You realise the cost of the system may appear high but compared with the cost (time and money) of training a doctor it is cheap. The system can be made accessible immediately at a very reasonable price.

AMA Representative As a representative of the Australian Medical Association your concerns are with the rights of the doctors who are members of your organisation. You fear the spread of medical expert systems (such as 3WD) will undermine the profession by making treatment cheaper and perhaps by replacing doctors in some situations. In the future this might mean the lack of new doctors and limited development of new treatments. You also believe medical expertise cannot be replaced by a machine. Computers cannot make the judgements doctors can in face to face consultations based on intuitive or non verbal clues, and cannot counsel disturbed patients. The training and level of responsibility of the technicians also is an issue. Are they permitted to administer restricted drugs; are they to perform minor surgery; at what point do they call on a trained doctor; etc.? You are totally opposed to the use of medical expert systems.


Aid Worker You are just back from work in the refugee camps on the Thai-Cambodian border. You have seen the ravages caused by sickness and disease and the lack of doctors. You are willing to be trained in the use of 3WD and believe you could quickly train 20-30 other locals back in your clinic to use the system. With it you will bring relief to thousands, and save the lives of many. You are desperate to obtain the system and cannot see what all the delay is about. The sooner you can get trained and get back to Thailand the quicker you can start helping people. Unfortunately you have no money (your trip to Australia was for fund raising, and you have not had much success yet).

Politician As a member of the Overseas Aid Committee you are sitting in on this discussion to judge if 3WD is worthwhile (and safe) for the government to support. You are aware that the public has a great deal of sympathy for causes that have been promoted in the media, but have little knowledge of the real extent of third world problems. It is impossible to support every call for help and so while money should go to the most deserving cause, in reality it will go to the most popular electorally. This also means that the government cannot get caught up in any project that is at all controversial, as this 3WD may turn out to be. The AMA is a powerful lobby group and you must also listen to their concerns.

Bio-ethics Committee Member The Bio-ethics Committee is a group that has set itself up as a watchdog to ensure that research and developments in science do not violate normal standards of behaviour. You are concerned about the depersonalisation that may come about with the introduction of medical expert systems and also the possibility of a wrong diagnosis being made or out of date medical information being continued in the system. In the face of rapid progress in technology you are worried that basic human values are being lost such as the doctor-patient relationship. A good doctor does more than treat the ills of a patient, counselling is often as important. In addition you are worried the introduction of medical expert systems might be part of a trend to eventually replace all doctors with machines.

Solicitor Your role in this debate is central. You will begin by introducing the volunteer doctor and the knowledge engineer to give some background to the discussion. You will then raise the following issues and invite the participants to comment on and discuss them:

Who owns the expertise, is it the volunteer doctor, the knowledge engineer, the programmer, the marketeer or the person who purchases it?

Who is responsible if an error occurs, e.g. a wrong diagnosis and treatment leads to the loss of a patient's life? Also, who is responsible for keeping the KB up to date?

Will the use of medical expert systems lead to a breakdown in the doctor-patient relationship, and perhaps in time lead to the elimination of human doctors? (Alternatively is there a place for treatment that involves no value judgements?)

Who is going to pay for the introduction of the system (cost of expert system and training of technicians)?

What level of training and what level of responsibility should be given to technicians?

Should 3WD and other medical expert systems be introduced?

Follow up

As a follow-up to the 3WD expert system role play, express your opinion on paper on each of the issues raised by the solicitor, but make your answers more general, referring to all expert systems.

The issues were:

1. Who owns the expertise?


2. Who is responsible if an error occurs and who is responsible for keeping the KB up to date?

3. Will the use of expert systems lead to a breakdown in human relationships, or is there a place for systems that involve no value judgements?

4. What level of training and what level of responsibility should be given to expert system technicians?

5. Should medical expert systems be introduced?

Artificial neural networks
Are conventional computers like the human brain? Consider the following:

the human brain works at a pace of milliseconds, computers at a pace of billionths of a second, yet we can recognise a face in a tenth of a second, a task that even the most powerful computers find difficult – why is this?

our brains are very fault tolerant: they can continue to function even with a minimum of correct data; for example we can recognise words wh-- s-m- p-rt- of th-m ar- m-ss--g; computers are not fault tolerant and a powerful program can crash because of a tiny bug

apart from inherited instincts, most of what we know has been learnt – computers have to be programmed

the brain is a machine that builds itself, computers are assembled from manufactured parts.

So it seems that human brains are fundamentally different to traditional von Neumann computers and in certain areas have advantages over them. This leads to the question – can a different sort of computer be developed that works more like our brains do?

Artificial neural networks (ANNs) are an attempt to make a computational structure that is similar in its operation to the human brain. To understand how ANNs work we must first gain some understanding of how our brains work.

Neurons
The human brain is a mass of specialised nerve cells called neurons.

The role of neurons is to process information by receiving and transmitting electrochemical signals. The dendrites of one neuron are near the ends of the axons of many other neurons. The gap across which the signal passes is called a synapse. This gap, or synapse, may be as small as 1/40 000 mm, while neural signals can travel at up to 120 m/s!

A neuron is equivalent to an electrochemical button. As one neuron "fires", i.e. sends an impulse along its axon, it releases chemicals (neurotransmitters or neuromodulators) that cross the synapse to nearby dendrites or soma. These detect the neurotransmitters/neuromodulators and, depending on the amount or type received, may cause their own neuron to fire. In turn these neurons cause others near them to react. Some inputs into the neuron are excitatory (the neuron fires more readily), while some are inhibitory (the neuron is less likely to fire).


The 100 000 000 000 neurons in the human brain are packed into a dense three-dimensional mass of cells, each having connections to between 10 and 1000 other cells. The connections between these cells, and the process of particular patterns of cell firings causing other cells to fire or not fire, are how thought, learning and memory take place. Mental activity can be seen to be related to patterns or waves of signals passing through the complex neural network. (Incidentally this mass of 100 billion cells in our brains uses less power than a 100W light bulb.)

A neuron with magnified synaptic gap

While each neuron by itself is insignificant, the process of large numbers combining to work together on a task leads to the power of the brain. The human brain can carry out 10^16 operations per second; supercomputers are only approaching 10^10 operations per second.

It is believed that learning is a process of “burning in” pathways in the neural networks.

As some synapses become larger and deliver a stronger signal to the receiving dendrite, others become smaller and deliver weaker signals. The more often a path is used the better it is recalled. By repeating an activity over and over the neural pathway used for it is strengthened. (Think of memorising a phone number by repetition.) There is also research that suggests that dendrites can grow spikes to meet synaptic knobs to improve connections. Alternatively these spikes can shrink with disuse. (This might equate to forgetting.)

Researchers are trying to replicate the brain’s style of “parallel distributed processing” of neurons in a field called connectionism. One aspect of this field involves networks of artificial neurons.

A neural "pathway"

Artificial neurons
The natural neuron is basically a processing unit communicating with other neurons; it is a binary device – it either fires or it does not. This is a relatively simple device to reproduce electrically. The artificial neuron, or node, consists of two (or more) inputs which are combined in some way so as to produce an output.

An artificial neural node

Usually the inputs are weighted, e.g. one input may be multiplied by 1.5, while another is multiplied by -2. Within the node these inputs are combined through a function to produce an output. A positive weight increases the chance of firing, a negative weight decreases the chance of firing. The resulting output (if any) is then used by other artificial neurons as their input (which in turn may be weighted). A combination of many of these artificial neurons forms the ANN. The components of a simple artificial neuron are:

• input connectors which receive signals as numbers from other neurons;
• an activator which combines the effects of input values by weighting each to produce the activation value; these weights can be adjusted;
• a transferor which reads the activation value and decides if the neuron should emit an output signal, and if so how strongly;
• an output connector which conveys the output of this neuron to successor neurons in the network.

Components of an artificial neuron
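
A single node of this kind takes only a few lines to sketch in code. The Python example below is illustrative (the weights are the ones used later in Activity 10.6): the activator forms the weighted sum of the inputs, and a simple step transferor emits 1 for an activation value of zero or more, otherwise 0.

# Sketch of a single artificial neuron (node).
def neuron_output(inputs, weights):
    # Activator: weight each input and add them to get the activation value.
    activation_value = sum(i * w for i, w in zip(inputs, weights))
    # Transferor: a basic step function decides whether the node fires.
    return 1 if activation_value >= 0 else 0

# Three input connectors with weights +1.5, +2.0 and -1.3, inputs 1, 0 and 1.
print(neuron_output([1, 0, 1], [1.5, 2.0, -1.3]))   # activation +0.2, output 1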

In an artificial neural network (ANN) these nodes are arranged in distinct layers.


A feed forward ANN with three layers (3-4-2)


The left-hand layer acts as input, i.e. these nodes accept input data and (without processing) transmit it to each node in the next layer. The right-hand layer is the output. The layers of nodes between input and output are described as hidden.

The above network is described as a fully connected, feed forward network. All nodes are connected to all other nodes (fully connected) and all output from one node goes onto the next layer only (feed forward). There are other designs where not all nodes are connected, or where output is fed back into previous layers, but we will not investigate these.

Training a neural network
Seemingly complicated problems can be solved with very few neurons in an ANN. Each individual neuron is weak, but because they work in teams, each sharing a bit of the job, they can solve problems that conventional computers cannot.

Neural networks are trained; they are not programmed like conventional computers. Training consists of repeated cycles of showing the network sample data and adjusting the weights until the desired output is achieved. When a network is first set up the values of the weights on each node are random. These weights decide the final outcome of how a neural network responds to a particular set of input data. Changing the weights will change the final output value.

To train the network it is presented with a sample input, the ANN processes it and produces an output. The output is then compared with what was wanted, a target output. If the network output and the target output disagree (most likely) the difference (or error) is fed back through the network to adjust the weights to bring the output closer to the desired response. The sample input is then presented once more and the network output again compared with the required target output. Weights on nodes that get the ANN closer to the target output are strengthened, those that do not are weakened. This process is called back propagation, and is used repeatedly until the output is sufficiently close to the “correct” response.

This process of input, processing, output, comparing output with target data, and adjusting weights by back propagation will be repeated tens of thousands of times until the level of error is very small. At this stage the network has been trained and the weights are fixed. The network can then be used with new, unseen, input patterns and the output accepted as being within that error range.
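
The train-compare-adjust cycle can be illustrated on a much smaller scale. The Python sketch below trains a single node on made-up data (the logical AND of two inputs) using the simple perceptron weight-update rule; it is not the full multi-layer back propagation described above, but it shows the same idea of presenting samples, comparing the output with a target, adjusting the weights, and repeating until the error reaches zero.

import random

# Made-up training data: teach the node the logical AND of its two inputs.
samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

random.seed(1)
weights = [random.uniform(-1, 1) for _ in range(2)]   # random starting weights
bias = random.uniform(-1, 1)
rate = 0.1                                            # size of each adjustment

def output(inputs):
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation >= 0 else 0                # step transferor

for cycle in range(100):                              # repeated training cycles
    total_error = 0
    for inputs, target in samples:
        error = target - output(inputs)               # compare output with target
        total_error += abs(error)
        for k in range(2):                            # adjust each weight
            weights[k] += rate * error * inputs[k]
        bias += rate * error
    if total_error == 0:                              # converged: stop training
        break

print([output(inputs) for inputs, _ in samples])      # [0, 0, 0, 1] once trained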

As an example, we might want an ANN to be able to recognise the difference between healthy blood cells and those with sickle-cell anaemia. (This is a condition in which red blood cells cannot carry oxygen effectively. Affected cells have a quarter-moon, or sickle, shape.) In this case the input would consist of a set of values read from a microscopic device. For training, the ANN would be presented with a series of images of healthy cells and images of diseased cells. After each presentation, if the network did not correctly recognise one from the other, the weights would be adjusted by back propagation.

The training process may have to be repeated thousands of times until the network could distinguish between healthy and diseased samples. When it can the weights are fixed. After this the ANN can be presented with a series of new blood slides to see if it can pick out the diseased samples. If training is successful the network can be used to take the place of human technicians freeing them for more specialised work.


As a network is being trained it produces correct answers more and more often. An error value below 0.01 (1%) indicates a good level of training. Further training can continue to produce a lower error level. The process of the error approaching zero is called convergence. If in training the error value begins to rise, the process is no longer converging. When this happens training must cease and the network must be returned to the weights of the previous lowest error value.

As it moves toward a preset target state, the ANN "learns" from its failures. Because the adjustment is driven by known target outputs, this style of training is a form of supervised learning; the human developer, however, has no direct control over the weight changes – the process is automatic.

Hidden layers
The final set of weights on each input on each neuron in an ANN is too complex to be understood; it simply must be accepted as working in achieving the set goal. While we can see the input and the output, the weights on the nodes in between are unknown. Because of this these inner layers are described as hidden.

In a trained ANN knowledge is stored in the total distribution of weights throughout the hidden layers and not in any single place. Unfortunately this has the disadvantage of having to take the network’s output on trust. This is of little import in that we are more interested in the ANN working than wondering how it works. (Compare this to an expert system where we can get the expert system to validate its conclusions.)

Human learning can be seen to proceed in a similar way to the training of an ANN. For example we might want to know how to do a subtraction sum, or how to hit a tennis forehand. Inputs from reading, teachers, experiences, etc., are processed by the neurons in the brain. If the correct output is not achieved we will continue repeating the input and practising processing it in slightly different ways until we get the sum right, or hit the tennis ball correctly. The more we practise the lower the error level.

The knowledge, skills and processes we have learnt this way are known as heuristics.

The role of ANNs
One important aspect of an ANN is that it will not give an exact response. Any output is subject to an error level which will indicate that the required response is more right than wrong. This links into an area known as fuzzy logic, in which there are no right answers, only answers that are less wrong than others.

ANNs work well in pattern recognition (an area in which traditional computers have had little success), such as face identification, screening pathology slides for malignant cells, and recognising military vehicles. This last application has received heavy military funding in the search for a way to automatically identify enemy tanks or planes.

ANNs also work well in non-algorithmic processing such as natural language processing, image processing, manufacturing, quality and process control, recognition of text and handwriting, and complex modelling.

The advantages provided by ANNs are:

speed – we can input a great deal of information at one time (parallel input), e.g. a picture may be input with each pixel delivered to a separate node


reliability – ANNs are very robust, they can still function at a high degree of accuracy even if parts have broken down; this is because all activities are mixed, and spread throughout the network; ANNs are described as being fault tolerant, or exhibiting “graceful degradation”, i.e. they do not fail suddenly

ANNs can handle incomplete or "fuzzy" information, especially information they have not been trained on or seen before

ANNs can “learn” from experience, i.e. weights can be adjusted to meet requirements more closely

generalising – ANNs can recognise not just what they have been taught, but also similar but new situations.

The disadvantages are that ANNs take a long time to train, that the conclusions reached by an ANN cannot be explained, and that the result has to be taken on trust.

As yet this technology is still in its infancy and has a long way to go to challenge conventional computing, but the successes achieved already point to a vital area of AI research.

Activity 10.6 – Added weight
At this stage it might be instructive to work through some calculations on a trained network to see how input values are processed to produce an output.

1. To start with we will look at one sample node.

This node has weights of +1.5, +2.0 and -1.3 on the three input lines.

Suppose input signals of 1, 0 and 1 were received on the input lines. This would produce a total of: 1 × (+1.5) + 0 × (+2.0) + 1 × (-1.3) = +0.2


Calculate the total of the weighted input on the above node for the following input signals:

a (1, 1, 1) b (0, 0, 1) c (1, 1, 0) d (0, 1, 0)

2. This output total is called the activation value (AV).

Once the AV has been calculated it is passed on to the transferor. Here a mathematical function determines the output of this node. The type of function depends on the design of the network. A basic step-transfer function may simply give an output of 1 for a positive AV, and a value of 0 for a negative AV. For an AV of 0, output is 1.


In the example above, which had an AV of +0.2, a step-transfer function would output 1 (because +0.2 is greater than 0).
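
If you want to check your working, the calculation can also be written as a short program. The following is a minimal sketch in Python (our choice of language here, not the text's), using the sample node's weights of +1.5, +2.0 and -1.3; the function names are our own:

    WEIGHTS = [1.5, 2.0, -1.3]   # weights on the three input lines

    def activation_value(inputs, weights=WEIGHTS):
        # weighted sum of the input signals (the AV)
        return sum(i * w for i, w in zip(inputs, weights))

    def step_transfer(av):
        # basic step-transfer function: 1 for an AV of zero or more, otherwise 0
        return 1 if av >= 0 else 0

    av = activation_value([1, 0, 1])
    print(round(av, 2), step_transfer(av))   # 0.2 1

Changing the list passed to activation_value lets you check your answers to the inputs in Q1.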

What output would a step-transfer function give from the AVs calculated in Q1?

3. The following graph shows a sigmoid transfer function. It is a variation on the step-transfer function, giving outputs in a range between 0 and 1. To find the output, plot the AV from the horizontal axis down to the curve and then across to the vertical axis:

Using this function the AV of +0.2 from our example would lead to an output of 0.55.
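
If we assume the curve is the standard logistic sigmoid, f(AV) = 1 / (1 + e^(-AV)) – an assumption consistent with the output of 0.55 read off for an AV of +0.2 – the same value can be computed directly rather than read from the graph:

    import math

    def sigmoid_transfer(av):
        # standard logistic sigmoid: 1 / (1 + e^(-av))
        return 1 / (1 + math.exp(-av))

    print(round(sigmoid_transfer(0.2), 2))   # 0.55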

Use the sigmoid transfer function to calculate the output from the AVs in Q1.

4. Now we will look at a simple network with two inputs, one hidden layer of two nodes, and one output node:

Inputs A and B feed hidden nodes X and Y, which feed output node Z. Weights: AX: -2.0, BX: +1.5, AY: +0.6, BY: +1.2, XZ: +3.2, YZ: -0.3

Assuming a step-transfer function we will calculate the output of the above network.

With an input of (1, 0):

XAV = 1 x -2.0 + 0 x +1.5 = -2.0 Xout = 0

YAV = 1 x +0.6 + 0 x +1.2 = +0.6 Yout = 1

These two now feed into Z:

ZAV = 0 x +3.2 + 1 x -0.3 = -0.3 Zout = 0

The resultant output is 0.
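
The same arithmetic can be checked with a short program. Here is a minimal sketch in Python, using the weights listed above and the step-transfer function (the helper names are our own, not the text's); the same function can be reused for the inputs in the questions that follow:

    def step(av):
        # step-transfer function: 1 for an AV of zero or more, otherwise 0
        return 1 if av >= 0 else 0

    def network(a, b):
        # hidden layer: nodes X and Y
        x_out = step(a * -2.0 + b * 1.5)
        y_out = step(a * 0.6 + b * 1.2)
        # output layer: node Z
        return step(x_out * 3.2 + y_out * -0.3)

    print(network(1, 0))   # 0, as in the worked example above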

For the network above calculate the output for the following inputs with a step-transfer function:


a (1, 1) b (0, 0) c (0, 1)

5. For the network above calculate the output using the sigmoid transfer function given above in Q3 and an input of (1, 0).

6. Calculate the output from the following network for an input of (1, 0, 1) using a step-transfer function:

Inputs A, B and C feed hidden nodes P, Q, R and S, which feed the output node Z. Weights: AP: 1.4, AQ: -2.2, AR: 1.3, AS: -0.5, BP: 1.5, BQ: -2.8, BR: -1.4, BS: 3.0, CP: 1.9, CQ: 2.5, CR: -2.2, CS: -0.4, PZ: 2.0, QZ: -0.4, RZ: 1.3, SZ: -1.0

Robotics

The Encyclopaedia Galactica defines a robot as a mechanical apparatus designed to do the work of a man. The marketing division of the Sirius Cybernetics Corporation defines a robot as ‘Your Plastic Pal Who’s Fun To Be With’

Douglas Adams – The Hitchhiker’s Guide to the Galaxy

While the thought of robots brings to mind figures such as those in Douglas Adams’ books, or Star Wars’ R2-D2 and C-3PO, current robots are not as romantic. Most are industrial machines designed to be placed in one spot and do a fairly limited range of tasks. These devices are not intelligent. A robot instructed to paint a panel will spray the area whether there is a panel there or not.

A robot is a device that is re-programmable and multi-functional so that it can do a variety of tasks.

To be a robot a device must also have some degree of autonomy. This means the following devices cannot be classified as true robots:

dishwasher – single task
remotely controlled vehicle – no autonomy
traffic lights – re-programmable, but only performs a single task
automatic doors – not re-programmable.

True robots can do a variety of tasks such as welding, painting, assembling parts, etc. This is achieved through changing the devices attached to them and through re-programming.

The program the robotic device follows is stored in a computer. It is entered either as a series of steps or by using a specialised robotic programming language such as Forth. A variation on these is to use lead-through programming. With this a skilled operator has the robotic arm or device strapped to his or her arm. The operator then performs a task such as spray painting while the computer records the motions. Once the actions are in memory the robot can then imitate the process millions of times without tiring.


Power source

Most robotic devices are electrically or hydraulically powered.

Electrically powered robots are used for general purpose activities. This form of device is cheap and simple to build. They are also easy to run and maintain. There are however disadvantages to this power source.

Electrical robotic devices have a low power to weight ratio, so that devices with any real power need to be very heavy. Electrical devices also tend to be limited to circular action only. Electric motors have a rotary motion and devices using them need special attachments to move an arm or part in a straight line. Finally this type of device has a limited starting and braking torque. When an electric motor starts it does not have much power to push. If the motor is stopped it cannot hold a steady position easily. Electric motors work through magnetic action, and so starting strength and a steadily held stopped position are limited by the strength of the magnets.

For heavy precise work robots are more often hydraulically powered. For these a compressor is used to place oil under pressure. Through hoses and the action of plungers the pressure in the oil can move the device or work attachments on it. Hydraulic robots can move linearly (in a straight line) and have a high starting or stopping torque, meaning they can be held in position firmly. They are also self lubricating. While hydraulic robots are expensive, pneumatically powered devices (using air instead of oil) have most of their advantages but are cheaper.

Degrees of freedom

Whatever the power source used, it is usually remote from the robotic device itself to give it freedom of movement.

Movement is classified as rotary or linear, and the number of directions it can act in are described as the degrees of freedom. As an example the human elbow has one degree of rotational freedom and no degrees of linear freedom.

Robotic arm (diagram showing rotary motion at the joints and linear motion at the jaws)


The above robotic arm can rotate about its base, on its stand, and at its two “elbow” joints. Its jaws can move together or apart in a linear, or straight line, direction. In all this gives it five degrees of freedom.

Mobile robots

Many mobile so-called robots today are not true robots as they are mostly guided by a person from a distance, e.g. a bomb disposal robot. In fact there are very few mobile robots apart from a group described as Autonomous Guided Vehicles (AGVs).

An AGV will usually follow a track of some kind such as a wire or a reflective surface. They operate in factories or other plants and are controlled by a central computer to move parts or components around. Some are “smart” enough to avoid collisions although there are problems with the detection of some obstacles (a wall is easy to detect, but a table is 95% space).

Experimental AGVs include cleaning robots and security guards.

Effectors and sensors

Most robots interact with their environment through end effectors and sensors. An end effector is a device that is fitted to a robotic arm to carry out a specific task such as paint spraying. A sensor is a device to detect what is around the robot or where its various parts are.

End effectors are sometimes called actuators. There are end effectors for specific tasks, including:

gripping
cutting
welding
drilling
sanding
routing
spraying
picking or placing
holding special objects such as glass, and so on.

There are even end effectors for highly specialised tasks such as brick laying, sheep shearing or chicken plucking.

A true robotic device should be able to take different end effectors and, with re-programming, be able to carry out different tasks.

A robot may be fitted with sensors to detect its environment. These might include sensors to detect the things it must work with so that they can be picked up or positioned correctly or to find the objects around it so it will not bump into them. There are also position sensors so that it can judge where its various parts are.

A robotic device is designed to work on its own without human intervention. The “brain” of the robot has to be able to find the things it must work with, must be aware of where its parts are so they can be moved to the correct place, and do so without colliding with external objects.

The sensors may take a range of forms. These include vision (camera), sonic (sound), radar, infra-red, pressure (touch), and even speech recognition.


The role of robotic devices

Robots can do the unpleasant, dangerous or boring jobs rather than having people do them. Tasks suitable for robots include:

working in extreme heat
spray painting in enclosed spaces
moving heavy, awkward objects
bomb disposal
handling radioactive, poisonous or infectious materials
repetitive, high concentration jobs
tasks requiring extreme precision.

Activity 10.7 – Androids

1. a What is a robot?

b Which of the following do not fit the definition of a robot (with reasons):

i TV remote control
ii smoke detector
iii cruise control on a car
iv traffic lights
v fridge temperature control
vi computer printer
vii garden watering system on a timer
viii dish washing machine
ix alarm clock

c Identify a device in everyday life that you think could be classed as a robot.

2. Identify the type of movement and degrees of freedom for each of these robotic devices:

a b (diagrams of two robotic devices)

3. What advantages do hydraulic motors have over electric motors for robots?

4. Give three examples of end effectors.

5. Suggest three situations where it would be better to use robots in place of people.


6. The University of W.A. conducted research into a sheep shearing robot called Shear Magic. Despite the ability to correct itself after mistakes, and artificial vision to measure a sheep’s size and shape, early versions could only take off 90% of wool in 15 min. After ten years the project was abandoned.

What do you think might have been the difficulties the developers were unable to overcome, but which human shearers accomplish relatively easily?

7. Find answers to the following:

a What is a nanometre?

b What is nanotechnology?

c What role has Richard Feynman played in the development of nanotechnology?

d Identify three actual or possible nanomachines.

e Suggest some applications of nanotechnology.

f How are nanomachines manufactured?

g What role has Eric Drexler played in the development of nanotechnology?

h Many people fear nanotechnology. Explain the basis of these fears. In your opinion are these fears valid?

8. The Sony Corporation has produced a very popular “Entertainment Robot” called Aibo. Despite costing around $3 000 these robotic dogs sell almost as quickly as they can be produced.

a The Aibo is promoted as a “new companion”. Give some reasons why these devices might be so popular.

b Aibo comes with voice recognition software. To what uses could this be placed?

c Aibo 2 also comes with sensors for temperature, infrared, acceleration, pressure and vibration. Suggest a situation or activity that use could be made of each of these sensors.

d Aibo’s various parts have a range of degrees of freedom. Use a diagram to show the range of movements that Aibo could make given the following data:

mouth 1 degree of freedom
head 3 degrees of freedom
legs 3 degrees of freedom x 4 legs
ears 1 degree of freedom x 2 ears
tail 2 degrees of freedom

e Sony has now discontinued the production of Aibo. Find out why.

9. Isaac Asimov, in the stories collected in I, Robot, proposed the Three Laws of Robotics:

1. A robot must never harm a human being, or through inaction, allow a human to come to harm.

2. A robot must always obey a human being, unless this is in conflict with the first law.

3. A robot must protect itself from harm, unless this is in conflict with the first two laws.

a Asimov envisioned a future with autonomous robots. Why do you think he suggested the need for these laws?

b What is meant by the term “unless this is in conflict with”? Why is it needed?

c The primary aim for an army in wartime is to kill. Is it possible for a military robot (even non combatant) to obey the laws? Why or why not?

10. NERO (Neuro-Evolving Robotic Operatives) is a game in which you train ANN-based robots to complete tasks.

Visit nerogame.org, play the game, and then report on the experience in terms of training, machine learning, and evolving AI.

Artificial intelligence

Having looked at some of the applications of AI it is now time to investigate the background of AI and to question whether it is possible to create a non-human intelligence.

Pattern matching programs

The earliest attempts at simulating intelligence were pattern matching programs such as ELIZA and PARRY.

ELIZA is the name of a computer program written between 1964 and 1966 by Dr Joseph Weizenbaum at the Massachusetts Institute of Technology. Weizenbaum was a computer scientist interested in the potential of computers to act in a human-like way.

Dr Weizenbaum wanted to simulate the actions of human counsellors who get patients to talk about themselves by echoing back the patients’ words. The program did not understand what was said to it but would respond to the user’s input, trying to obtain more information.

Here is a sample conversation:

USER: I’m worried about my brother

ELIZA: Tell me more about your brother.

USER: I feel he does not understand me.

ELIZA: Oh, why does he not understand you?

USER: We cannot talk any more.

ELIZA: Tell me why you cannot talk any more.

USER: We have lost contact.

ELIZA: Why do you feel you have lost contact? ... etc.
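
The mechanism behind a conversation like this can be sketched as simple keyword matching: find a pattern in the user's sentence and echo part of it back inside a stock template. The following is a minimal sketch in Python using a tiny hand-written rule table of our own; the real ELIZA script was considerably richer:

    import re

    # each rule pairs a keyword pattern with a response template
    RULES = [
        (r"i'?m worried about (.*)", "Tell me more about {0}."),
        (r"i feel (.*)",             "Why do you feel {0}?"),
        (r"we (.*)",                 "Tell me why you {0}."),
    ]

    def respond(sentence):
        # return the template response for the first matching pattern
        for pattern, template in RULES:
            match = re.match(pattern, sentence.lower().rstrip("."))
            if match:
                return template.format(*match.groups())
        return "Please go on."   # default when no pattern matches

    print(respond("I'm worried about my brother"))   # Tell me more about my brother.
    print(respond("We have lost contact."))          # Tell me why you have lost contact.

Notice that the program never needs to understand the sentence at all; it only shuffles the user's own words back into a template.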

Although Dr Weizenbaum did not expect ELIZA to be taken seriously he quickly found people would respond to the computer in situations where they were too embarrassed to talk to a person. It is said his secretary on first trying the program asked Weizenbaum to leave the room so she could keep on interacting with ELIZA in private!



Another program from the same time was PARRY. In this the computer played the role of a 28 year old paranoid gambler who was being pursued by a vengeful Las Vegas bookie. PARRY would carry on a conversation with the user. In response to certain keywords it would grow more suspicious and aggressive and would steer the conversation onto the Mafia or the police. Almost anything wrong would be blamed on one or the other. Once mentioned it was easy for the discussion to revolve around one of them, continually increasing the program’s “anger” and “suspicion”. In conversations with six psychologists via terminals, three of the psychologists thought PARRY was human.

Here is a sample conversation with PARRY:

USER: What do you do for a living?

PARRY: I work at Sears

USER: How much do you get paid?

PARRY: That is none of your business.

USER: You seem defensive. How do you feel?

PARRY: I am perfectly fine.

USER: Do you want to see a doctor?

PARRY: I don’t want to talk about that.

USER: Have you had any trouble with bookies?

PARRY: A bookie didn’t pay me off once.

USER: Oh, what happened?

PARRY: I had an argument with a bookie who did not pay off. Later on I remembered he had friends in the underworld and I thought he would use them to get even with me.

... etc.

Activity 10.8 – ELIZA and PARRY

1. a In computer terms what does AI stand for?

b What is the area of computer research AI is concerned with?

c Why is AI important in helping develop the HCI?

2. Why might a person prefer to talk to a computer about their problems rather than a person?

3. In what ways is a human counsellor better than a computer counsellor?

4. Suppose a computerised counsellor were developed and marketed. If the program gave poor advice leading to a deterioration in a patient's condition, who would be responsible and why; the patient who bought the program, the people who sold it, or the person who wrote the program?

5. Do you think human counsellors will ever be displaced by computer? Give reasons why, or why not, you feel this is a possibility.

6. If possible carry on a conversation with a version of ELIZA or PARRY and record your impressions.

Why do you think some people thought ELIZA (or PARRY) was a real person?


7. Dr Joseph Weizenbaum, the creator of ELIZA was amazed and disturbed when he discovered that many people reacted to his program as if it were a real counsellor. Few people stop to think about whether the use of a computer is right or wrong. Do you believe the use of computers for counselling is unethical, or do they have some advantages over human counsellors? Support your position.

The Turing test

ELIZA and PARRY were pattern matching programs that simply identified key words or phrases and responded to them. While there was no attempt at understanding, they did fool some people into thinking they were interacting with another person. But what if there was some understanding behind a conversation a human had with a computer?

Alan Turing was a British mathematician who did much to develop the logic of early computers before and during the Second World War. He wondered how far computers could develop and proposed a conversational test to establish when we could say that computers had reached the level of intelligence.

Turing proposed a situation where a human judge is presented with two terminals, one of which is connected to a computer, the other to a person in another room at another terminal. The judge does not know which is which. The judge carries out a conversation through the two terminals with the human and the computer, each of which tries to convince the judge that it is the human. If the judge cannot identify which is the person and which the computer then the computer has passed the test and for all practical purposes can be considered to have reached the level of artificial intelligence. This test is now known as the Turing test.

Turing test of computer intelligence

This test first appeared in an article Turing wrote in 1950 (Computing Machinery and Intelligence) and has since sparked much debate. Turing himself described it as a test to detect if a computer could think (the term AI was not coined until six years later).



Many have disputed whether this is a valid test of thought. Some have suggested the test is too difficult. If the roles were reversed and the human had to prove he or she was a computer they would fail miserably (any rapid, complicated calculation would do). For a computer to have to prove humanness might be too strong and inflexible a requirement. An alternative might be for the judge merely to detect some conscious presence underlying the replies as a test of intelligence.

This latter view is taken by a group described as functionalists (Marvin Minsky and others). They suggest that any computer operation involves some level of consciousness. The mere carrying out of a step of an algorithm is an atom of consciousness, and thousands of these steps together should therefore make up some level of consciousness. After all, they claim, human consciousness is a product of billions of neurons (brain cells) working together. (Minsky described the brain as a “computer made of meat”.)

John Searle, the American philosopher, opposes the functionalists (he describes their position as Strong AI) and challenged the Turing test with his Chinese Room thought experiment. In this he imagines himself placed in a room with all of the necessary instructions for manipulating small tiles with Chinese characters on them. A sequence of Chinese symbols representing a story, and then some questions, is fed into the room through a slot. By following the instructions for handling the symbols Searle said he would be able to answer the questions and feed out a yes or no symbol as an answer.

Searle’s point is that the mere carrying out of the instructions does not imply any understanding has taken place. While he can arrange, group and respond to the symbols he will not have understood a single word of the story.

Searle goes on to say that if he were replaced by a computer that has rules of grammar, a dictionary, etc., it could also reply to questions in Chinese. However, he claims, that even if the machine can fool a Chinese speaker, it does not understand Chinese.

Functionalists suggest it is only a matter of time before computers do everything our brains can do; Searle claims computers are not essentially different from any other mechanical device and will never reach this level. The debate continues and we will revisit it shortly.

Back to the Turing test. How could the judge determine which was the computer? The judge might try some nonsense questions to see if the computer could pick out the absurdity. Checking for a sense of humour by telling a joke is another possibility. Perhaps some questions could be asked that only a computer could answer (to avoid this the computer would have to pretend to be dumber than it is!).

Another method is to probe for understanding of the real world.

Roger Schank poses the following situation. A man is hungry and goes into a restaurant and orders a hamburger.

In one scenario the hamburger comes out burnt to a crisp and the man storms out without paying or leaving a tip.

In the second scenario, a good hamburger is delivered, the man is very pleased, pays the bill, tips and leaves.

The question in each case is, did the man eat the hamburger? To answer this question requires both a knowledge of the real world and an understanding of logical consequences. Perhaps questions like this might test Turing’s thinking computer.


Since Turing’s day computers have developed enormously. While few people would say we have developed intelligent computers, current computers could arguably be programmed to pass versions of the Turing test. This is an example of the so-called receding horizons effect – as technology gets closer to a goal, the criteria for judging achievement of the goal are shifted further away. We might no longer accept Turing’s test as a measure of AI, but if someone suggests another test perhaps that too may be achieved without being accepted as AI, as our expectations continue to grow.

Activity 10.9 – Thinking machines

1. a In your own words describe the scenario of the Turing test for machine intelligence.

b If a computer passes the Turing test what does this demonstrate? How?

c What questions would you ask if you were the human judge in the Turing test to determine which of the terminals in front of you was connected to a computer?

d Suggest an updated alternative to the Turing test that if achieved would indicate a computer had reached the level of AI.

2. Explain why functionalists believe one day we will be able to produce intelligent computers.

3. The AI philosopher John Searle proposed the Chinese Room scenario.

a Explain what the scenario involves.

b Searle suggests that the link between symbol and meaning will always be absent from machines and that mere symbol shuffling will never generate intelligence. Do you agree with his argument and why or why not?

c Searle further claims computers have nothing to teach us about thought. Do you agree?

4. (Artificial intelligence is)... the science of making machines do things that would require intelligence if they were done by men. Marvin Minsky

a Do you think Turing would have agreed with Minsky’s definition of intelligence?

b Which aspect of Minsky’s definition is contained in the Turing test?

5. Humans are intelligent but we do make mistakes – no one is infallible.

a If an “intelligent” computer never made mistakes how would it seem to humans?

b Why do you think we could not accept and work with an infallible “partner”?

6. The West German chess grandmaster Helmut Pfleger once played a simultaneous exhibition against a group of opponents, three of which, unknown to him, were computers. When told afterwards, he could not pick which were the computers, and one in particular amazed him with its style and level of play (it won).

Did these machines pass the Turing test, and should they be considered intelligent? Give reasons to support your answer.

7. Does the receding horizons effect mean we will never accept machine intelligence?


How we think

There are three groups of people who are attempting to reproduce human thought processes in computers:

humanists investigate how people solve problems and then attempt to simulate these processes on computer

logicists base their study on formal mathematical logic as being more predictable and easier to model

structuralists believe the best way to simulate the workings of the mind is to reproduce the information processing capabilities of the brain (this group includes connectionists developing ANNs).

Between them these groups have investigated human strategies for processing knowledge and, while improving our understanding of reasoning, are attempting to build their theories into working programs.

The form of logic most used in algorithmic computers (e.g. if ... then ... else ...) is classic (or Aristotelian) logic. This method is based on premises that follow rules to reach a conclusion, e.g. a dog is a mammal, mammals are animals, therefore a dog is an animal. The process of determining whether a conclusion is true or false, based on its premises, is known as inference or deduction. This form of logic is the basis of most current computers.
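
The dog example can be sketched as a few lines of Python. This is a toy "is-a" rule base of our own, not any real logic-programming system, but it shows how a conclusion is deduced by chaining premises:

    # each entry reads "X is a Y"
    RULES = {
        "dog": "mammal",      # a dog is a mammal
        "mammal": "animal",   # mammals are animals
    }

    def is_a(thing, category):
        # follow the is-a chain to see whether the conclusion can be deduced
        while thing in RULES:
            thing = RULES[thing]
            if thing == category:
                return True
        return thing == category

    print(is_a("dog", "animal"))   # True: dog -> mammal -> animal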

While it is the basis for most formal logics, this deductive logic does not always work well in the real world because of exceptions that constantly crop up to alter the basic assumptions (e.g. birds fly, a penguin is a bird, therefore penguins fly). Non-monotonic logics have been developed that are more flexible in that they keep track of which conclusions depend on which premises. When a contradiction is found the reasoning is traced back to determine which assumptions need changing or removing.

Inductive logic is the opposite of deductive in that it works from examples to form premises or concepts. By looking at many specific instances of a thing or event, common elements are abstracted that form the general rules. As an example, by looking at many different forms of dog the concept of “dogness” is gained. In this development three categories of attribute are considered: must-have (e.g. four legs, teeth, fur), must-not-have (e.g. plastic parts) and may-have (e.g. colour). Inductive logic, generalising on past successes and failures, is used in artificial neural network computers.

In between deductive and inductive is fuzzy logic. This form was originated by an engineer called Lotfi Zadeh in the mid-1960s. In fuzzy logic data is expressed in terms of degrees of truth, often read loosely as probabilities. As an example there might be a 10% chance that a 150 cm person is considered tall, and an 80% chance that a 180 cm person is considered tall. Combining this with the probability that the person is overweight, we can determine the chance that the person can fit into a small car. This form of logic is used in expert systems and in ANNs. It is also very effective for controlling machinery, and the Japanese especially have built fuzzy logic into washing machines, vacuum cleaners and even trains.
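
The idea can be sketched in a few lines of Python. The membership functions below are made up for illustration (they are not taken from the text), and fuzzy AND is taken here as the minimum of the two memberships:

    def tall(height_cm):
        # degree of membership in the fuzzy set "tall", from 0.0 to 1.0
        return min(max((height_cm - 150) / 40, 0.0), 1.0)   # 150 cm -> 0.0, 190 cm -> 1.0

    def overweight(bmi):
        # degree of membership in the fuzzy set "overweight"
        return min(max((bmi - 25) / 10, 0.0), 1.0)           # BMI 25 -> 0.0, BMI 35 -> 1.0

    def fits_small_car(height_cm, bmi):
        # a person who is tall AND overweight has a low degree of "fits a small car"
        return 1.0 - min(tall(height_cm), overweight(bmi))

    print(round(tall(180), 2))                 # 0.75
    print(round(fits_small_car(180, 30), 2))   # 0.5

The output is never a plain yes or no, only a degree of truth – which is exactly what makes this form of logic suited to ANNs and to control systems.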

There are other forms of logic used in computers.

Case based logic works on analogy or example (somewhat like a more rigid inductive logic). In this form problems are solved by finding a precedent. Case based logic assumes a large database of previous solutions and the ability to recognise similarities. This method is used for some expert systems but is limited to well documented problems that can be stated clearly and simply.

Brute force logic is simply to try every possible combination until one is found that fits. Computer chess is a good example. The computer has no idea of what chess is or of any of the strategies involved. What it has been programmed to recognise are the possible legal moves (about 30-35 in a given position) and the elements of a good position (e.g. control of the centre squares). The computer will then generate every possible combination of moves by itself and its opponent for up to 10 or 12 moves in advance and choose the best next move. (Try 35^12 on your calculator to see how much computation is required.)
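
To get a feel for the size of that search space, the calculation can be done in one line of Python (taking roughly 35 legal moves per position, looked ahead 12 moves, as the text suggests):

    positions = 35 ** 12           # about 35 legal moves, 12 moves ahead
    print(f"{positions:,}")        # 3,379,220,508,056,640,625 – about 3.4 x 10^18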

A final form being explored is frame based logic. In this an attempt is made to intuitively link related ideas into frames. For example the concept of car carries with it ideas of wheels, body, driving, motion, roads, and so on. Frames may be linked by scripts, e.g. the frame car plus the frame driver plus the frame coast can be linked into a script of going-on-holiday. While this is more a way to represent knowledge than a form of logic, it is used to draw inferences and conclusions about the real world.

One attempt based on frames is the Cyc project, in which Dr Doug Lenat and colleagues spent years entering ten million pieces of information into a computer. After ten years they claimed it had the world knowledge of a five year old. This may seem a small result for such a massive effort, but at five a human child is ready to start school and begin learning for herself.

Activity 10.10 – Think for yourself

1. How is the approach of structuralists different from that of humanists in attempting to determine how thought processes might be reproduced in a computer?

2. Using examples of each, show how inductive and deductive logic are opposite to each other.

3. I can fit into my overcoat. My overcoat can fit into a suitcase. Therefore I can fit into the suitcase.

a What form of logic is this?

b Why is the conclusion false?

c How could this fault be rectified?

4. Fuzzy logic works on combining probabilities to determine the chance of something being correct or true. Why might this form of logic be suitable for controlling the way autonomous machines operate in the real world?

5. a Identify each of the other three forms of logic outlined above and give an example of each.

b Suggest real world situations where each could be used to control a computer.


The mind-body problem

The brain is the principal organ that controls intelligent behaviour; it is the seat of the things that we associate with intelligence. These characteristics include calculating, communicating, explaining, imagining, inventing and creativity, justifying, learning, manipulating symbols, reasoning logically, recognising, relating, remembering, thinking, understanding and visualising.

All of these thoughts, ideas and memories however can be related to electro-chemical activity that takes place in the brain. They have a physical basis that is observable and measurable. By planting electrodes in a brain observers can monitor the flow of thoughts and ideas. They can measure things like awareness, excitement, joy and stress. On the other hand by using probes that generate minute electrical currents they can stimulate emotions, moods or memories. From these observations it appears that the brain is a biological machine made of neurons that runs on electro-chemical activity.

What, however, of the characteristics of “humanness” such as self-awareness, emotions, volition (free will) and cognition (the ability to process knowledge)? Are these linked to the physical processes that take place in the brain or are they part of something that is separate? Is there something unique that makes us conscious of ourselves as a human? Is there a part of us that is separate from the brain? For some there is. This entity is referred to as mind.

There is evidence that there is a mind that is separate from the body:

The body can act on the mind. An example of this is when drugs can cause a state of altered awareness. The mind of the drug user experiences reality in a different way.

The mind can act on the body. One example of this is a case of psychosomatic illness where a person believes they are ill and the body experiences all of the symptoms. Another is the everyday experience of being embarrassed causing us to blush.

The mind-body problem is this: is the mind a function of a physical object, the brain, or is it something more, something non-physical?

This is a problem that has been the basis of philosophical discussion for hundreds of years. What is mind and, even, does mind exist? How can a material object (the brain) evoke consciousness? How can consciousness (mind) influence the action of physical objects?

To demonstrate these concepts we will look at the main theories of mind-body. These can be seen to exist on a continuum, running from Western to Eastern philosophy:

Materialism (rationalists) – Dualism (interactionists) – Idealism (phenomenalists)

Materialism – This theory suggests that only the world available to our senses exists; if you cannot see or touch something then it is not real. By this view mind does not exist, only the physical body. The C.N.S. (brain and spinal cord) performs all the functions of thought and intelligent behaviour.

Idealism – The opposite theory, held in some Eastern traditions, is that matter does not exist at all, everything is a figment of mind. We imagine there is a world around us.


Dualism – This is a half-way position, most famously proposed by the French philosopher René Descartes, that brain and mind are two entities. The brain is material (physical) and the mind is not (spiritual). There is no clear distinction between these two, so that we cannot say where one ends and the other begins, or how intelligence and thoughts are shared between them.

AI researchers who believe machine understanding is possible work from a materialist or a functionalist approach. Functionalists disregard the discussion of origin and content of mind as irrelevant. They work from a basic assumption that mind works (functions) and from there seek to understand how it works by looking at its organisation, the way it is put together.

Your personal belief about the nature of mind will determine your expectations of the possibilities or limitations of AI. If you see self awareness, emotions, volition, etc. as simply attributes of a mechanistic brain then you will have to accept that eventually a computer may be developed so complex that it can match all of the processes of a human brain. If however you believe there is something inherently human about mind, then there is a chance that computers will never achieve the full level of consciousness necessary for true intelligence.

These considerations have led to some very deep questions:

where do thoughts take place?
what happens to the mind (soul?) after the death of the body?
is consciousness mind, or is mind consciousness?
does intelligence imply mind (and if computers achieve intelligence will they have minds)?

Activity 10.11 – Theories of mind

The following are topics for discussion in groups. Before you join a group, however, it might help if you place yourself on the Materialist/Idealist continuum. It is not necessary to belong entirely in one category; you can place yourself anywhere along the line you feel comfortable with.

Discuss the following:

1. Feelings, consciousness and inspirations are not physical; they cannot be weighed, looked at, measured etc. If they are not physical they are immaterial (spiritual). They are part of the mind, therefore mind is spiritual.

2. You, your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will are in fact no more than the behaviour of a vast assembly of nerve cells. Francis Crick – Nobel prize winner for the structure of DNA

To a materialist all mental states must be reducible to physical states, i.e. thinking, creativity, joy etc. must be simply chemical reactions or depolarising waves passing along the neural circuits in the C.N.S.



Do you agree with this? Does “I am experiencing joy” equate to “The neural currents in my C.N.S. are fluctuating at a set level”? The emotion of joy does parallel neural events but is the private mental experience itself merely a function of the neural events?

3. How would a materialist explain non-physical phenomena such as ESP (telepathic reception, clairvoyance) or psychic projection (telepathic transmission, telekinesis, astral travelling)? While the materialist might deny the possibility of these phenomena, some people claim there is strong evidence that they do exist.

4. The spirit is willing but the flesh is weak. Matthew 26:41

What is a conscience, the feeling we get when the body wants to do something, but the mind says it is wrong?

5. According to the functionalist view, in principle it will eventually be possible to build an organic computer out of bits of matter such that the computer will not only be intelligent, but it will be capable of experiencing a “mental picture” of its thought; it will not only be capable of certain malfunctions but could actually feel an associated pain; it would not only demonstrate outward signs of happiness but would actually feel happy.

a Do you think this is possible?

b What moral responsibility would it place on an owner who bought a computer that was conscious? Would such ownership amount to slavery?

6. Humans are classified biologically as animals that have evolved along with all other animals in the world. An alternative view is that humans are something more, they are “rational animals”. This suggests that humans are the only animals with minds.

a What do you think are the necessary conditions for an entity to have mind? Consider cognition (knowledge processing), emotion, volition (control of actions), self awareness and consciousness.

b Do you believe no animals other than humans have minds, or all animals have minds, or humans and just some other animals have minds?

c To what level does mind exist (e.g. human, mammal, reptile, fish, insect, worm, amoeba, rock, silicon chip)?

d What survival value has evolution found in producing consciousness?

7. The operationalist point of view (basically Turing’s concept) is that a computer thinks when it acts indistinguishably from the way a human acts when she or he is thinking. Does intelligence require consciousness to exist?

8. Mind does not necessarily equate to consciousness. The unconscious mind is a powerful part of our intelligence. It can influence us through dreams, hallucinations, obsessions, Freudian slips and sudden inspiration. Does the unconscious mind have a separate awareness to the conscious mind?

9. (The brain) ... resembles nothing so much as a bowl of cold porridge. Alan Turing, mathematician

Must mind only be associated with biological devices?

10. The father of dualism, René Descartes, proposed “I think therefore I am”. Find out what he meant by this.

The twentieth-century neurobiologist António Damásio has challenged Descartes’ dualism, citing the role of emotions and feelings in cognition. His counter statement is “I feel therefore I am”. What do you think he meant by this, especially in relation to Descartes’ point of view?

11. Opposition to the development of AI generally follows lines such as:

fear – they will take over the world
impossible – machines can not think
no examples – no one has done it yet
emotional – if intelligence is something mechanical then I am no better than a machine
ethical – it is wrong to develop non-human intelligence.

As a final question to conclude this discussion – what is your view on the possibility of machine intelligence? Be prepared to support this view with well thought out arguments.

Intelligence is in the eye of the beholder
