
Simple Maze-Solving Robots: Solving Search in Real Time

On-line and off-line search

Off-line search:
• Robot knows the start and goal locations (their coordinates)
• Robot has a map

On-line search:
• Robot does not know the start and goal locations; it knows a description of the goal and can recognize it when seen
• Robot creates a map as it explores

Goals of this lecture

• Illustrate real-time search in a maze by a simple mobile robot

• Investigate the capabilities of the NXT robot.

• Can we use Mindstorms NXT for serious research in Search?

• Explore development options

Problem Outline

• Robot is placed in a “grid” of same-sized squares

– (Due to obscure and annoying technical limitations, the robot always starts at the “southwest” corner of the maze, facing “north”)

• Each square can be blocked on 0-4 sides (we just used note cards!)

• Maze is rectangularly bounded

• One square is a “goal” square (we indicate this by covering the floor of the goal square in white note cards)

• The robot has to get to the goal square

Robot Design

• Using NXT you can quickly build all kinds of robot prototypes

• Uses basic “driving base” from NXT building guide, plus two light sensors (pointed downwards) and one ultrasonic distance sensor (pointed forwards)

• The light sensors are used to detect the goal square, and the distance sensor is used to detect walls

Robot Design, cont’d

[Photos of the robot: the two light sensors point down at the floor; the ultrasonic sensor points forward]

Search Algorithm

• Robot does not know the map.

• Simple Depth-First Search

• Robot scans each cell for walls and constructs a DFS tree rooted at the START cell

• As the DFS tree is constructed, it indicates which cells have been explored and provides paths for backtracking

• The DFS halts when the GOAL cell is found

Maze Structure

[Diagram: example maze grid with the START and GOAL cells marked]

DFS Tree Example

[Diagram: the DFS tree overlaid on the same maze, rooted at the START cell]

DFS Tree Data Structure

• Two-dimensional array: Cell maze[MAX_HEIGHT][MAX_WIDTH]

typedef struct {
    bool isExplored;              // initially false
    Direction parentDirection;    // initially NO_DIRECTION
    WallStatus wallStatus[4];     // initially all UNKNOWN
} Cell;

• Actually implemented as parallel arrays due to RobotC limitations
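A minimal sketch of what the parallel-array workaround might look like in RobotC-style C; the enum values, array bounds, and the flattening of the per-cell wall-status dimension are assumptions, not the original code:

    // Parallel-array sketch (hypothetical names and sizes, not the original source)
    typedef enum { NORTH, EAST, SOUTH, WEST, NO_DIRECTION } Direction;
    typedef enum { UNKNOWN, OPEN, BLOCKED } WallStatus;

    #define MAX_HEIGHT 8
    #define MAX_WIDTH  8

    // One 2-D array per field, instead of one array of structs
    bool      isExplored[MAX_HEIGHT][MAX_WIDTH];        // all false at start
    Direction parentDirection[MAX_HEIGHT][MAX_WIDTH];   // all NO_DIRECTION at start

    // Arrays are limited to two dimensions, so the wall status for each cell
    // is flattened into the second index: column = x * 4 + direction
    WallStatus wallStatus[MAX_HEIGHT][MAX_WIDTH * 4];   // all UNKNOWN at start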

DFS Algorithm

while (true) {
    if robot is at GOAL cell
        victoryDance();
    if there is an unexplored, unobstructed neighbor
        Mark parent of neighbor as current cell;
        Proceed to the neighbor;
    else if robot is not in START cell
        Backtrack;
    else
        return;   // No GOAL cell exists, so we exit
}
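Fleshed out a little, the loop might look like this in RobotC-style C. The helper names (atGoal, findOpenNeighbor, advanceTo, backtrackStep, atStart, victoryDance) are hypothetical wrappers around the movement/sensing primitives described later, not the original source:

    // Hypothetical helpers, assumed to wrap the movement/sensing primitives:
    //   atGoal()           - light sensors see the white goal tile
    //   findOpenNeighbor() - direction of an unexplored, unobstructed neighbor,
    //                        or NO_DIRECTION if there is none
    //   advanceTo(d)       - turn toward d, move one square, and record the old
    //                        cell as the new cell's parent
    //   backtrackStep()    - move one square along the current cell's parent pointer
    //   atStart()          - true when the robot is back in the START cell
    void runDFS() {
        while (true) {
            if (atGoal()) {
                victoryDance();
                return;                          // the search halts at the GOAL cell
            }
            Direction d = findOpenNeighbor();
            if (d != NO_DIRECTION) {
                advanceTo(d);                    // explore deeper
            } else if (!atStart()) {
                backtrackStep();                 // dead end: retrace the red arrow
            } else {
                return;                          // nothing left to explore: no reachable GOAL
            }
        }
    }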

Simple example of robot traversing an unknown labyrinth to get to the goal

Simple example

• Example 3x3 maze

[Diagram: the 3x3 maze with the GOAL cell marked]

• We start out at (0,0) – the “southwest” corner of the maze

• Location of goal is unknown

• Check for a wall – the way forward is blocked

• So we turn right

• Check for a wall – no wall in front of us

• So we go forward; the red arrow indicates that (0,0) is (1,0)’s predecessor.

• We sense a wall

• Turn right

• We sense a wall here too, so we’re gonna have to look north.

• Turn left…

• Turn left again; now we’re facing north

• The way forward is clear…

• …so we go forward.
  – “When you come to a fork in the road, take it.” – Yogi Berra, on depth-first search

• We sense a wall – can’t go forward…

• …so we’ll turn right.

• This way is clear…

• …so we go forward.

• Blocked.

• How about this way?

• Clear!

• Whoops, wall here.

• We already know that the way to the right is blocked, so we try turning left instead.

• Wall here too!
• Now there are no unexplored neighboring squares that we can get to.
• So, we backtrack! (Retrace the red arrow)

• We turn to face the red arrow…

• …and go forward.
• Now we’ve backtracked to a square that might have an unexplored neighbor. Let’s check!

• Ah-ha!

• Onward!

• Drat!

• There’s gotta be a way out of here…

• Not this way!

• Two 90-degree turns to face west…

• No wall here!

• So we move forward and…

• What luck! Here’s the goal.

• Final step: Execute victory dance.

Movement and Sensing

• The search algorithm above requires five basic movement/sensing operations:
  – “Move forward” to the square we’re facing
  – “Turn left” 90 degrees
  – “Turn right” 90 degrees
  – “Sense wall” in front of us
  – “Sense goal” in the current square

Movement and Sensing, cont’d

• Sensing turns out not to be such a big problem
  – If the ultrasonic sensor returns less than a certain distance, there’s a wall in front of us; otherwise there’s not
  – Goal sensing is similar (if the floor is “bright enough”, we’re at the goal)
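A minimal sketch of the two sensing checks, assuming RobotC-style sensor access; the sensor names and threshold values are placeholders to be tuned for the actual robot and lighting:

    // Hypothetical sensor names (as configured on the NXT) and threshold values
    const int WALL_DISTANCE_CM = 20;   // sonar reading below this => wall ahead
    const int GOAL_BRIGHTNESS  = 55;   // light reading above this => white goal tile

    bool senseWallAhead() {
        return SensorValue[sonar] < WALL_DISTANCE_CM;
    }

    bool senseGoalHere() {
        // Require both downward-pointing light sensors to see a bright floor
        return SensorValue[leftLight]  > GOAL_BRIGHTNESS
            && SensorValue[rightLight] > GOAL_BRIGHTNESS;
    }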

Movement and Sensing, cont’d

• The motion operations are a major challenge, however

• Imagine trying to drive a car, straight ahead, exactly ten feet, with your eyes closed. That’s more or less what “move forward” is supposed to do – at least ideally.

• In the current implementation, we just make our best estimate by turning the wheels a fixed number of degrees, and make no attempt to correct for error (a rough sketch of this primitive follows).
  – We’ll talk about other options later
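A sketch of the dead-reckoning “move forward” primitive, assuming RobotC motor and encoder access; the motor names, power level, and encoder target are placeholders, not calibrated values, and there is deliberately no error correction:

    // Open-loop move: drive both wheels a fixed encoder count, then stop.
    const int FORWARD_DEGREES = 720;        // placeholder, not a measured value

    void moveForwardOneSquare() {
        nMotorEncoder[motorB] = 0;          // reset both wheel encoders
        nMotorEncoder[motorC] = 0;
        motor[motorB] = 40;                 // drive straight at moderate power
        motor[motorC] = 40;
        while (nMotorEncoder[motorB] < FORWARD_DEGREES) {
            // open loop: nothing here corrects for wheel slip or drift
        }
        motor[motorB] = 0;                  // stop
        motor[motorC] = 0;
    }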

Language Options

• There are several languages and programming environments available for the NXT system:
  – NXT-G
  – Microsoft Robotics Studio
  – RobotC
  – etc…

NXT-G

• Lego provides the graphical “NXT-G” software, based on LabVIEW, which we’ve seen before

NXT-G, cont’d

• NXT-G is designed to be easy for beginning programmers to use

• We found it rather limiting
  – Placing blocks/wires on the diagram takes longer than typing

• Furthermore, NXT-G lacks support for arrays, which is problematic for our application

RobotC

• Simple “C-like” language for programming NXT (and other platforms) developed at CMU

• Compiles to bytecode that is executed on a VM

• More-or-less complete support for NXT sensors, motors

RobotC, cont’d

• Limited subset of C
  – All variables allocated statically (so no recursion)
  – Somewhat limited type system
    • For example, arrays are limited to two dimensions, and you can’t have arrays of structs as far as we can figure
  – Maximum of eight procedures and 256 variables

Error Correction

• So, as you may have noticed, it doesn’t work perfectly.

• Ideally, the robot should always turn exactly 90 degrees and should always be exactly centered inside the square.

• As we said, the “movement primitives” – go forward, turn left, turn right – are not perfectly precise.
  – Any “slips” or problems with traction will throw everything off.

• Error tends to compound

Error Correction, cont’d

• To some extent, error is inevitable; the robot doesn’t really have “vision” per se.

• However, if we fudged the environment a little bit, it would probably be possible to correct for much of the error.

Error Correction, cont’d

• One possibility: Mark the floor of each tile with lines that can be picked up by the light sensors.

• If placed correctly, the “alignment markers” could help the robot both to center itself along the X/Y axes, and to make sure it turns exactly 90 degrees.

Error Correction, cont’d

• Another possibility: Use the ultrasonic sensor to make sure the robot doesn’t run into walls, even if it “thinks” it should still be moving forwards.

Sources

Peter Dempsey, Pericles Kariotis, Adam Procter