TRANSCRIPT
8/8/2019 UNI 1 Comm Assig Final Distributed AI Robotics
December 16, 2009. Sergio Vilches Expósito, Group 29
Distributed Artificial Intelligence in Cooperative Mobile Robots: Swarm Robotics
Robot groups or colonies can exhibit an enormous variety and richness of behaviors that cannot
be observed in single systems. Thus a group of small, cheap, and simple autonomous robots with
very limited cognitive capabilities can execute sophisticated tasks that would be impossible or
very difficult for independent, non-cooperating robots to accomplish.
In this paper I will describe how these simple machines can be used in real life, for example in
mapping unexplored terrain, foraging, box pushing and clustering, technical inspection, and
structure formation: any activity in which a large number of agents is needed to cover a great
surface, or in which the shape and characteristics of the robot collective must change in real time.
To do this, we will first examine the concept of Artificial Intelligence (AI) and then relate it to
Multi-Agent Systems (MAS), building the background needed to understand the problems that
arise in developing these systems and the different approaches that can be followed to solve
them.
First of all, it is necessary to define the targets of this research: physical agents. P. Maes
describes agents as computational systems that try to fulfill a set of goals in a complex, dynamic
environment (1995, 135). In practice, this translates to autonomous robots with local views
(each one can see only part of the environment), working in a decentralized system (there is no
leader that guides the robots), and with social abilities (interaction with the environment and with
other robots).
With their on-board sensors, these robots can make physical measurements (light, temperature,
chemicals), detect objects and, most importantly, determine their position relative to other
robots.
The next step is clear: use these sensing and intercommunication abilities to establish robot
ecosystems, that is, to program a social algorithm into each agent so that they can carry out tasks
in groups. A very interesting way to do this, as Cao, Fukunaga and Kahng point out, is to emulate
natural societies such as ant colonies, which provide striking proof that systems composed of simple
agents can accomplish sophisticated tasks in the real world (1997, 21).
In these cases, each agent follows very simple rules and is highly reactive. So, in order to
execute a task, the main problem is how to distribute it among the different agents (this is a major
problem in DAI, studied as Distributed Problem Solving).
In order to allocate tasks, agents communicate through weighted request matrices, which
are based on a protocol known as Challenge-Response-Contract. It can be understood as a dialogue
across the whole system: first, a "Who can?" question is broadcast. Only the relevant components
respond: "I can, at this price" (the price depends on how available that robot is to accomplish that
task). Finally, a contract is set up, usually in several further short communication steps between the two sides.
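The "Who can? / I can, at this price" dialogue can be sketched in a few lines of code. This is a minimal, contract-net-style illustration only: the names (`Robot`, `announce_task`, `busy_load`) and the pricing rule are assumptions for this sketch, not taken from any specific swarm-robotics system.

```python
# Minimal sketch of the Challenge-Response-Contract dialogue.
# Assumption: a robot's "price" grows with how busy it already is.

class Robot:
    def __init__(self, name, busy_load):
        self.name = name
        self.busy_load = busy_load  # 0.0 = idle, 1.0 = fully occupied

    def bid(self, task):
        """Respond "I can, at this price"; stay silent if unavailable."""
        if self.busy_load >= 1.0:
            return None  # not available: no response to the challenge
        return task["effort"] * (1.0 + self.busy_load)

def announce_task(task, robots):
    """Broadcast the "Who can?" question and award a contract to the
    lowest bidder (the final negotiation steps are omitted here)."""
    bids = [(r.bid(task), r) for r in robots]
    bids = [(price, r) for price, r in bids if price is not None]
    if not bids:
        return None
    price, winner = min(bids, key=lambda b: b[0])
    return {"task": task["name"], "robot": winner.name, "price": price}

robots = [Robot("r1", 0.9), Robot("r2", 0.2), Robot("r3", 1.0)]
contract = announce_task({"name": "push-box", "effort": 10.0}, robots)
print(contract)  # the least busy available robot, r2, wins the contract
```

Note that r3, being fully occupied, never answers the challenge at all, which keeps communication proportional to the number of relevant components rather than the whole swarm.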
This idea, which appears very simple, is tremendously complex to study. Even in simple
environments, it is almost impossible to determine the behavioral repertoire and concrete
activities of a Multi-Agent System a priori, as these emerge from internal interactions among
agents and are therefore very hard to model.
So, if the reaction of each agent to a given stimulus cannot be determined, how can the system be
programmed? Here appears the key idea of Artificial Intelligence: learning.
If we give each robot the ability to adapt and learn as an individual (isolated learning) and as a
group (interactive learning), the performance of the system will improve automatically over time,
and the robots will be able to cope with dynamic changes.
AI learning has been a major object of study for the past 50 years, but its application to Multi-
Agent robotics has been developed mainly by Liu and Wu. In their research, they use evolutionary
algorithms to achieve interactive learning. This process, inspired by Darwinian evolution, works by
using digital genes.
Let us propose an example to explain this concept. In a test environment we place a rectangular
box, a target, and three robots. The objective of the three agents is to move the box from its
starting point to the target, collaborating to get the best performance. If we tried the same task
with only one robot, we would see that pushing a rectangular box is not easy at all: whenever the
agent pushes at a point offset from the box's center of gravity, the box spins instead of moving in
the right direction. So the job of the three robots is to rearrange themselves in order to push
the box evenly.
Drawing 1: Problems of box-pushing using one robot.
Their first tries are random, so the box may not move in the right direction. The robots notice
this and try to guess why. They make a random change and check whether the movement of the
box has improved. If it has, they keep that arrangement but continue to make minor changes. Of
course, the genetic algorithm is much more complex, but its details are outside the scope of this
essay. What we should keep in mind is that, by trial and error, the system converges to an
efficient configuration and tries to improve it in each generation. Illustration 1 shows the effect
of this evolutionary algorithm, developed by Liu and Wu (2001, 161).
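The "make a random change, keep it if the box moves better" loop above can be sketched as a tiny evolutionary step. This is a toy illustration under stated assumptions, not Liu and Wu's actual algorithm: the genes are the robots' pushing angles, and the invented fitness function penalizes both a misaligned net force and a weak one.

```python
# Toy (1+1)-style evolutionary loop: mutate the pushing angles,
# keep the mutant only if the box would move better.
import math
import random

def net_push_error(angles, target_angle=0.0):
    """Assumed fitness: deviation of the combined push from the target
    direction, plus a penalty when the robots cancel each other out."""
    fx = sum(math.cos(a) for a in angles)
    fy = sum(math.sin(a) for a in angles)
    heading_err = abs(math.atan2(fy, fx) - target_angle)
    weak_push = 1.0 / max(math.hypot(fx, fy), 1e-9)
    return heading_err + weak_push

def evolve(generations=200, n_robots=3, seed=1):
    rng = random.Random(seed)
    genes = [rng.uniform(-math.pi, math.pi) for _ in range(n_robots)]
    best = net_push_error(genes)
    for _ in range(generations):
        trial = [a + rng.gauss(0.0, 0.3) for a in genes]  # random minor change
        score = net_push_error(trial)
        if score < best:            # improvement: keep the new arrangement
            genes, best = trial, score
    return genes, best

genes, best = evolve()
# After many generations the pushing directions drift toward the target.
```

The key property mirrored here is that fitness never gets worse between generations: a change is only kept when it improves the box's motion, exactly the trial-and-error behavior described above.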
Another example of interactive learning can be observed in a second experiment by the same
authors. This time, the aim is to create a map of an unknown environment using a swarm of six
microbots. In this situation, communication between agents is required in order to scan as much
terrain as possible in a given time. The robots therefore need to agree among themselves on
which part of the landscape each of them will scan.
Illustration 2 shows three maps. The first represents the environment that Liu and Wu used in
their experiment. The second is a real map created by merging the data collected by each of the
six robots using a simple predefined motion strategy; notice that some figures are not well
defined and that parts of the environment remain unexplored. In the last picture, the robots are
programmed with an evolutionary strategy, so, thanks to their common agreements, they can
scan a wider zone and therefore make fewer errors in the map creation.
Up to this point we have explored the techniques, theory, and practical uses of Multi-Agent
Robotic Systems. Note, however, that this is a topic under active research, and that the
technology of the agents (robots) is being
Illustration 1: Box-pushing trajectories created by three group robots. The solid line corresponds to the trajectory of the box, whereas the others correspond to the movement traces of the three robots (*). At the beginning, the net pushing force of the robots results in a rather randomized motion of the box. After some generations of selection, a niche is found, representing a globally near-optimal collective motion strategy. (© 1999 IEEE).
Illustration 2: Maps created by 6 autonomous robots. (© 1999 IEEE).