
Introduction to Artificial Intelligence
2nd semester 2016/2017

Chapter 2: Intelligent Agents
Mohamed B. Abubaker

Palestine Technical College – Deir El-Balah


Agents and Environments

• An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators

• A human agent has:
  • eyes, ears, and other organs for sensors
  • hands, legs, mouth, and other body parts for actuators

• A robotic agent has:
  • cameras and infrared range finders for sensors
  • various motors for actuators

• The term percept refers to the agent’s perceptual inputs at any given instant.


Agents and Environments (cont.)

• An agent’s behavior is described by the agent function, which maps any given percept sequence to an action.

• The agent function for an artificial agent is implemented by an agent program (the distinction is sketched in code below).
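To make the distinction concrete, here is a minimal Python sketch (the names and the dirt-checking rule are illustrative, not from the slides): the agent function maps a whole percept sequence to an action, while the agent program runs inside the agent, sees one percept at a time, and keeps the history itself:

def agent_function(percept_sequence):
    # Abstract mapping: entire percept sequence -> action.
    if percept_sequence and percept_sequence[-1] == "Dirty":
        return "Suck"
    return "Right"

def make_agent_program():
    # The program is fed one percept per step and maintains the
    # history internally, thereby realizing the agent function.
    history = []
    def program(percept):
        history.append(percept)
        return agent_function(history)
    return program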


Agents interact with environment through sensors and actuators


Vacuum-cleaner world


The Concept of Rationality

• A rational agent is one that does the right thing

• The right action is the one that will cause the agent to be most successful

• Performance measure: An objective criterion for success of an agent's behavior

• E.g., the performance measure of a vacuum-cleaner agent could be:
  • amount of dirt cleaned up, amount of time taken, amount of electricity consumed, amount of noise generated, etc.
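As a concrete (illustrative) version of such a measure, one common textbook choice awards one point for each clean square at each time step:

def performance(state_history):
    # state_history: list of world states over time; each state maps a
    # square name ('A', 'B') to True if that square is clean.
    return sum(sum(state.values()) for state in state_history)

# Two time steps over squares A and B -> score 3:
# performance([{'A': True, 'B': False}, {'A': True, 'B': True}])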


Rationality

• Rational Agent: For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

• Whether a given agent is rational depends on four things:
  • The performance measure
  • The agent’s prior knowledge of the environment
  • The actions that the agent can perform
  • The agent’s percept sequence to date


Omniscience, Learning, and Autonomy

• Rationality is distinct from omniscience (all-knowing with infinite knowledge)
  • Rationality ≠ Perfection
  • Rationality maximizes expected performance, while perfection would maximize actual performance.

• Information gathering
  • A rational agent is required not only to gather information, but also to learn as much as possible from what it perceives.

• Learning
  • The agent’s initial configuration could reflect some prior knowledge of the environment, but as the agent gains experience this may be modified and augmented.

• A rational agent should be autonomous
  • its behavior should be determined by its own experience
  • it should become effectively independent of its prior knowledge


The Nature of Environments

• In designing an agent, the first step must always be to specify the task environment as fully as possible

• Task Environment (PEAS):
  • Performance measure
  • Environment
  • Actuators
  • Sensors


PEAS description of the task environment for an automated taxi
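The table itself is not reproduced in this transcript; the standard Russell & Norvig breakdown for the automated taxi (assumed to match the slide’s table) can be written out as:

taxi_peas = {
    "Performance measure": ["safe", "fast", "legal", "comfortable trip",
                            "maximize profits"],
    "Environment": ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators": ["steering", "accelerator", "brake", "signal", "horn",
                  "display"],
    "Sensors": ["cameras", "sonar", "speedometer", "GPS", "odometer",
                "accelerometer", "engine sensors", "keyboard"],
}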


Properties of Task Environment

• Fully observable vs. Partially observable

• Single agent vs. multi-agent

• Deterministic vs. Stochastic

• Episodic vs. Sequential

• Static vs. Dynamic

• Discrete vs. Continuous

• Known vs. Unknown


Properties of Task Environment

• Fully observable vs. Partially observable
  • Fully observable:
    • An agent's sensors give it access to the complete state of the environment at each point in time.
    • The sensors detect all aspects that are relevant to the choice of action.
    • Convenient, because the agent need not maintain any internal state to keep track of the world.
  • Partially observable:
    • because of noisy and inaccurate sensors
    • or because parts of the state are simply missing from the sensor data


Properties of Task Environment

• Single agent vs. multi-agent
  • An agent solving a crossword puzzle by itself is in a single-agent environment.
  • Multi-agent environments
    • Cooperative
    • Competitive
    • Communication
  • Examples: chess, taxi driving, soccer


Properties of Task Environment

• Deterministic vs. Stochastic
  • Deterministic
    • The next state of the environment is completely determined by the current state and the action executed by the agent.
    • Crossword puzzle, chess
  • Otherwise, it is a stochastic environment.
    • Taxi driving, dice


Properties of Task Environment

• Episodic vs. Sequential
  • Episodic
    • The agent's experience is divided into atomic "episodes".
    • Each episode consists of the agent perceiving and then performing a single action.
    • The choice of action in each episode depends only on the episode itself.
    • Assembly line
  • Sequential
    • The current decision could affect all future decisions.
    • Chess, taxi driving


Properties of Task Environment

• Static vs. Dynamic
  • Dynamic
    • The environment can change while an agent is deliberating (deciding on an action).
  • Otherwise, it is a static environment.
  • The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does.


Properties of Task Environment

• Discrete vs. Continuous
  • The distinction applies to the state of the environment, to the way time is handled, and to the percepts and actions of the agent.
  • Discrete: a limited number of distinct states, and a discrete set of percepts and actions.
  • Taxi driving is a continuous-state and continuous-time problem.

• Known vs. Unknown
  • Refers not to the environment itself but to the agent’s (or designer’s) state of knowledge about the “laws of physics” of the environment.


The Structure of Agents

• So far, we have described agents by their behavior:
  • the action that is performed after any given sequence of percepts

• The job of AI is to design:
  • an agent program that implements the agent function
  • this program will run on some sort of computing device with physical sensors and actuators

• Agent = architecture + program

• The difference between the agent program and the agent function:
  • the agent program takes the current percept as input
  • the agent function takes the entire percept history


Table lookup Agent
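The figure for this slide is missing from the transcript; it presumably shows the textbook table-driven agent. A Python sketch of that program, with an illustrative fragment of a vacuum-world table:

def make_table_driven_agent(table):
    # Look up the entire percept sequence in a table. Correct in
    # principle, but the table is astronomically large for any
    # realistic environment, which is why this design is impractical.
    percepts = []
    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))
    return program

# Illustrative table fragment for the two-square vacuum world:
vacuum_table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}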


Agent program for a vacuum-cleaner agent
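Again the figure is not in the transcript; the standard reflex vacuum agent program (after Russell & Norvig) looks like this in Python:

def reflex_vacuum_agent(percept):
    # Decide from the current percept (location, status) alone,
    # ignoring the percept history entirely.
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"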


Basic kinds of Agent Programs

• Simple reflex agents

• Model-based reflex agents

• Goal-based agents

• Utility-based agents

• Learning agents


Simple reflex agents

• The simplest kind of agent

• These agents select actions on the basis of the current percept, ignoring the rest of the percept history

• Simple reflex agents have the admirable property of being simple, but they turn out to be of limited intelligence
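A generic sketch of such an agent, assuming the condition-action rules are supplied as (predicate, action) pairs (this framing is an assumption; the slide deck’s own figure is not in the transcript):

def make_simple_reflex_agent(rules, interpret_input):
    # rules: list of (condition, action) pairs, where condition is a
    # predicate over the interpreted state of the current percept.
    def program(percept):
        state = interpret_input(percept)   # no history is kept
        for condition, action in rules:
            if condition(state):
                return action
        return None                        # no rule matched
    return program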


Model-based reflex agents

• The most effective way to handle partial observability is for the agent to keep track of the part of the world it can’t see now.

• The agent should maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state
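A sketch of the resulting program structure; the update function, which encodes the agent's model of how the world evolves and how its own actions affect it, is assumed to be supplied by the designer:

def make_model_based_reflex_agent(update_state, rules, initial_state):
    state = initial_state
    last_action = None
    def program(percept):
        nonlocal state, last_action
        # Fold the new percept and the previous action into the
        # internal state, using the model of the world.
        state = update_state(state, last_action, percept)
        for condition, action in rules:
            if condition(state):
                last_action = action
                return action
        return None
    return program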


Goal-based agents

• Knowing something about the current state of the environment is not always enough to decide what to do

• the agent needs some sort of goal information that describes situations that are desirable
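In the simplest (one-step) case, the agent uses a model of action effects to pick an action whose predicted result satisfies the goal; a minimal sketch, assuming result and goal_test are provided by the designer:

def goal_based_action(state, actions, result, goal_test):
    # result(state, action): the agent's model of what an action does.
    # Real goal-based agents usually search over whole action
    # sequences; this is the one-step special case.
    for action in actions:
        if goal_test(result(state, action)):
            return action
    return None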


Utility-based agents

• Goals alone are not enough to generate high-quality behavior in most environments
  • goals give only a crude binary distinction between "goal achieved" and "not achieved"
  • a utility function maps a state onto a real number that measures how desirable the state is, allowing the agent to compare different ways of achieving goals
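A one-step sketch of the resulting decision rule, again assuming a model result and a real-valued utility over states are supplied:

def utility_based_action(state, actions, result, utility):
    # Choose the action whose predicted outcome scores highest under
    # the utility function (ties broken arbitrarily by max).
    return max(actions, key=lambda a: utility(result(state, a)))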


Learning agents
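The figure for this slide is not in the transcript; it presumably shows the standard four-component decomposition of a learning agent (after Russell & Norvig), which can be outlined structurally as:

class LearningAgent:
    # performance_element: selects external actions (any of the agent
    #   designs above)
    # critic: tells the learning element how well the agent is doing
    #   relative to a fixed performance standard
    # learning_element: uses the critic's feedback to improve the
    #   performance element
    # problem_generator: suggests exploratory actions that may lead to
    #   new, informative experiences
    def __init__(self, performance_element, critic,
                 learning_element, problem_generator):
        self.performance_element = performance_element
        self.critic = critic
        self.learning_element = learning_element
        self.problem_generator = problem_generator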


END
