
Proceedings of the 2005 IEEE International Conference on Mechatronics

July 10-12, 2005, Taipei, Taiwan

Development of Intelligent Vision Fusion Based Autonomous Soccer Robot

1 Chung-Hsien Kuo, Member, IEEE, 2 Chun-Ming Yang, and 3 Fang-Chung Yang, Student Member, IEEE

1 Graduate Institute of Medical Mechatronics, Chang Gung University, Tao-Yuan 333, Taiwan, R.O.C., e-mail: chkuo@mail.cgu.edu.tw

2,3 Graduate Institute of Mechanical Engineering, Chang Gung University, Tao-Yuan 333, Taiwan, R.O.C., e-mail: {sula.middle}@dcalab.cgu.edu.tw

Abstract - This paper presents the hands-on implementation of the "Formosa middle size soccer robot", which is developed based on an intelligent vision fusion sensing architecture. The proposed autonomous soccer robot consists of the mechanical platform, motion control module, "MapCam360" omni-directional vision module, front vision module, image processing and recognition module, target object positioning and real coordinate reconstruction, robot path planning, subgoal definition, competition strategies, and obstacle avoidance. In particular, to increase the robustness of the vision sensing, the catadioptric omni-directional and front vision modules are dynamically switched according to the ball recognition results of the omni-directional camera. The aforementioned hardware and software modules are all integrated, and the control modes are fused based on the relationships among the positions of the enemy robot, ball, goal, and field boundaries. To increase mobility and attacking capability, we design dribbling and kicking mechanisms. In addition, the PC-based soccer robot controller is developed on an embedded Linux platform, and the motion control uses Mitsubishi AC servomotors and controllers to achieve high reliability. Finally, the physical robot was finished and tested, and we validated the functional performance of the "Formosa middle size soccer robot" by participating in the "2004 Taiwan annual middle size soccer robot competition".

Index Terms - Image servo tracking, image processing and recognition, omni-directional vision, intelligent vision fusion, middle size soccer robot.

I. INTRODUCTION

With the increasing development of artificial intelligence (AI) and robotics, the robot soccer competition game was proposed to emulate the human soccer game. The robot soccer tournament is formulated as a real-time and practical competition environment with real-world problems. In general, the soccer robot integrates techniques from robotics, mechanics, control circuit design, motion controller design, image acquisition and processing, autonomous robot behavior modeling and programming, intelligent decision-making, etc. The robot soccer tournament is therefore not only an interesting research topic, but also a challenging research problem, and the robot soccer game can be viewed as a dynamic and unpredictable collaborative system. The RoboCup [12] and the Federation of International Robot-soccer Association (FIRA) [9] are two major organizations that organize robot soccer competitions.

0-7803-8998-0/05/$20.00 ©2005 IEEE

Several studies have discussed vision for mobile robots. S.S. Lin et al. [6] proposed a novel omni-directional stereo system to capture 3D depth in real time without additional restrictions on the operating environment. The proposed system achieved high resolution, and it offered the resolution, data flow, precision and speed needed to meet omni-directional stereo robot sensor demands. T. Schmitt et al. [7] developed and analyzed a probabilistic, vision-based state estimation method for individual autonomous robots. The proposed approach enabled a team of mobile robots to estimate their joint positions in a known environment, and the robots were capable of tracking the positions of autonomously moving objects.

M. Dietl et al. [2] developed a new method to track multiple and single objects using noisy and unreliable sensor data in terms of Kalman filtering and Markov localization approaches. The results were compared with a simple averaging method to emphasize their contributions. P. Bonnin et al. [1] proposed a general methodology to introduce behavioral and visual a priori knowledge and applied the approach in their work for the RoboCup quadruped legged league. Finally, J. Jia et al. [4] proposed a modular and object-oriented approach for constructing an autonomous mobile robot system. This system was developed modularly based on the Windows operating system, and the soccer robots used the omni-vision module for navigation purposes.

The middle size soccer robot is developed as a fully autonomous mobile robot, and it has to complete the robot soccer game under practical constraints only by "itself". That is, all the tasks, including image processing, object recognition, decision-making, path planning and robot driving, must be done on the robot. Therefore, in addition to developing the robot driving mechanics, the image acquisition and software computing modules must be developed as the kernels of the middle size soccer robot. In this paper, the middle size soccer robot is implemented with the following hardware and software modules.

1. Mechanical and driving modules: the mechanical design plays an important part for the soccer robot. In this work, we construct a strong mechanical structure as the robot platform. In addition, industry-proven AC servo driving modules are used to drive the soccer robot in a reliable


manner. Finally, we design the dribbling and kicking mechanisms on the robot to increase the attacking functionality of the soccer robot.

2. Image acquisition modules: image acquisition is the major method for detecting objects on the competition ground. To capture the overall image of the competition ground, omni-directional vision modules are generally used. However, due to the serious image distortion of objects at far distances, a front vision module is additionally used to assist the image recognition of the omni-directional vision module.

3. Software computing modules: the software computing modules are responsible for the intelligent vision fusion of the omni-directional and front vision modules, object (e.g., ball, goal, enemy, field boundary) recognition, strategy making, path planning and motion driving. These software modules are implemented in the C language and executed on an embedded Linux based computer. By using the embedded Linux based platform, the embedded software can be deployed on a compact flash (CF) memory card instead of a hard disk drive, so that system reliability is significantly improved during motion and unpredictable collisions of the robot.

Finally, this paper is organized as follows. Section II introduces the design and implementation of the mechanical platform. Section III illustrates the image processing and recognition for the intelligent vision fusion based image acquisition architecture. Section IV describes the competition strategies and subgoal definition. The implementations and practical experiments are discussed in Section V. Finally, the conclusions and future works are presented in Section VI.

II. MECHANICAL PLATFORM AND DRIVING DESIGN

The mechanical platform is designed based on a two-wheel driving mechanism. The industry-proven AC servomotors are mounted symmetrically on both sides of the robot. The dimensions and weight are designed according to the specifications of the competition rules. In order to achieve motion stability, three idle wheels are employed.

In this work, the mechanical platform is designed using the Pro/E software [11]. All the mechanical components were produced in the machine shop of our university. The Pro/E mechanical platform model is shown in Fig. 1. On the other hand, we also develop the dribbling and kicking mechanisms to increase the attacking capability of the proposed soccer robot. The dribbling mechanism assists the robot in keeping the ball along the desired path, and the kicking mechanism is capable of increasing the impulse when shooting the ball. Fig. 2 shows the computer model of the dribbling mechanism. The dribbling mechanism is designed using four rotating bars to increase the force for keeping the ball.

Meanwhile, sponges are coated on the top two rotating bars to enhance the friction for keeping the ball. Additionally, the kicking mechanism is designed using a pneumatic and

spring driving mechanism. The pneumatic force is used to retract the kicking bar to its initial state. When the kicking command is issued, the pneumatic force is released and the compressed spring pushes the kicking bar so that the dribbled ball is shot immediately. The shooting force is determined by the spring constant and the mass of the kicking bar. Fig. 3 shows the computer model of the kicking mechanism. The photo of the physically assembled dribbling and kicking mechanism is shown in Fig. 4.

Fig. 1 Mechanical platform design of Formosa middle size soccer robot

Fig.2 Dribbling mechanism design of Formosa middle size soccer robot

The driving of the Formosa middle size soccer robot uses the Mitsubishi AC servomotor solution [10]. The motor is the "HC-KFS43" model, and the motor driver is the "MR-J2S-40A" model, as shown in Fig. 5. The driving signal of the AC servomotor uses an analog voltage signal (range: 0-10 Volts) to control the motor angular velocities. By applying different angular velocities to the two driving wheels, robot motions with different linear and angular velocities can be derived, as shown in (1).

\[
\begin{aligned}
V_x(t) &= \frac{R\cos\theta\,\left(\omega_r(t)+\omega_l(t)\right)}{2} \\
V_y(t) &= \frac{R\sin\theta\,\left(\omega_r(t)+\omega_l(t)\right)}{2} \\
\dot{\theta}(t) &=
\begin{cases}
R\left(\omega_r(t)-\omega_l(t)\right)/D, & \omega_r(t)\,\omega_l(t) > 0 \\
R\left(\omega_r(t)+\omega_l(t)\right)/D, & \omega_r(t)\,\omega_l(t) < 0
\end{cases}
\end{aligned}
\tag{1}
\]

In (1), θ is the robot angle with respect to the horizontal line; V_x(t) is the horizontal linear velocity of the robot; V_y(t) is the vertical linear velocity of the robot; R is the radius of the driving wheels; ω_r(t) is the angular velocity of the right driving wheel; ω_l(t) is the angular velocity of the left driving wheel; and D is the distance between the two driving wheels. The motion driving command is transmitted via an in-lab designed serial communication interface board, and a dual-channel digital-to-analog module is used to control the AC


servomotor drivers, as shown in Fig. 6.

Fig. 3 Kicking mechanism design of Formosa middle size soccer robot

Fig.4 Photo of assembled dribbling and kicking mechanism

Fig. 5 Photo of AC servomotor and driving modules

Fig.6 Motion driving serial communication interface board

III. IMAGE PROCESSING AND RECOGNITION

A. Y-U-V Based Object Recognition

The robustness of the object recognition for specific color patterns is crucial for the soccer robots. Due to the varying luminance of the competition ground and the distortion of the omni-directional camera, it is impractical to use the typical RGB color format to recognize the objects

with specified colors. In general, the YUV color format is more feasible to analyze the images with lighting variations.

For the YUV image format [5], three components are

separated into three sub-images or planes. The YUV format differs from RGB: RGB carries the image information in three full color channels, whereas YUV deals with one brightness or luminance channel (Y) and two color

or chrominance channels. When YCrCb data is packed as YUV, the Cr component is packed as U, and the Cb component is packed as V. The Y plane is configured as one byte per pixel. The Cr and Cb planes are configured as half the width and half the height of the Y plane (or image). Each Cr or Cb value therefore carries the information of four neighboring pixels (a two-by-two square of the image); for instance, Cr0 belongs to Y'00, Y'01, Y'10, and Y'11. Note that the YUV and RGB formats can be converted into each other.
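As an illustration of the YUV-to-RGB conversion mentioned above, a per-pixel sketch using the common BT.601 full-range formulas is shown below; these coefficients are the standard ones, not values taken from the paper.

```c
#include <stdint.h>

static uint8_t clamp_u8(int v)
{
    return v < 0 ? 0 : (v > 255 ? 255 : (uint8_t)v);
}

/* Convert one YCbCr (YUV) pixel to RGB using the widely used BT.601
   full-range conversion (standard formulas, assumed here). */
void yuv_to_rgb(uint8_t y, uint8_t cb, uint8_t cr,
                uint8_t *r, uint8_t *g, uint8_t *b)
{
    int c = y, d = cb - 128, e = cr - 128;
    *r = clamp_u8((int)(c + 1.402 * e + 0.5));
    *g = clamp_u8((int)(c - 0.344136 * d - 0.714136 * e + 0.5));
    *b = clamp_u8((int)(c + 1.772 * d + 0.5));
}
```

Because the chrominance planes are subsampled two-by-two, each (Cb, Cr) pair is reused for the four Y samples of its square when converting a whole frame.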

The color patterns of the investigated objects are recognized using band-pass thresholds. Due to the different luminance conditions of the competition ground, the band-pass thresholds for all investigated color patterns and objects are determined before the competition. To reduce the effort of defining the YUV band-pass thresholds, simple formulas are used. Initially, ten sample pixels in YUV format for each investigated object are acquired and recorded as sequences of (Y_ij, U_ij, V_ij), where i is the index of the investigated object, such as ball, enemy, goal, and field boundary marker, and j is the sample number of the pixel (j = 1 to 10). Note that the relative positions of the investigated objects are crucial to determine the YUV band-pass thresholds. In general, the extreme cases are not acceptable.

Once the YUV data of the sample pixels for each investigated object is acquired, the mean values of the sample data are calculated as (YM_i, UM_i, VM_i). The upper and lower bounds of the YUV band-pass thresholds can then be determined experimentally by adding a tolerance value. The tolerance value depends on the luminance conditions of the competition ground, and it must consider the robustness issues. Because the vision fusion combines the omni-directional and front vision approaches, the YUV band-pass thresholds are determined individually for each camera.
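A minimal C sketch of this thresholding scheme, assuming ten samples per object and placeholder tolerance values:

```c
#define NUM_SAMPLES 10

typedef struct { int y, u, v; } Yuv;
typedef struct { Yuv lower, upper; } BandPass;

/* Average NUM_SAMPLES YUV samples of one object and widen the mean
   by a per-component tolerance, as in equations (2)-(4). */
BandPass make_bandpass(const Yuv s[NUM_SAMPLES], Yuv tol)
{
    Yuv m = {0, 0, 0};
    for (int j = 0; j < NUM_SAMPLES; ++j) {
        m.y += s[j].y; m.u += s[j].u; m.v += s[j].v;
    }
    m.y /= NUM_SAMPLES; m.u /= NUM_SAMPLES; m.v /= NUM_SAMPLES;  /* mean  */
    BandPass bp;
    bp.upper = (Yuv){ m.y + tol.y, m.u + tol.u, m.v + tol.v };   /* upper */
    bp.lower = (Yuv){ m.y - tol.y, m.u - tol.u, m.v - tol.v };   /* lower */
    return bp;
}

/* A pixel matches the object when every component lies in the band. */
int in_band(Yuv p, BandPass bp)
{
    return p.y >= bp.lower.y && p.y <= bp.upper.y &&
           p.u >= bp.lower.u && p.u <= bp.upper.u &&
           p.v >= bp.lower.v && p.v <= bp.upper.v;
}
```

The tolerance values here stand in for the experimentally tuned YE_i, UE_i and VE_i; the paper does not report the concrete numbers used in competition.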

The upper bound of the YUV band-pass thresholds is denoted (YU_i, UU_i, VU_i), and the lower bound is denoted (YL_i, UL_i, VL_i). The formulas to determine the Y component band-pass thresholds are given in (2)-(4), where i is again the index of the investigated object, YM_i is the mean value of component Y for object i, and YE_i is the tolerance value of component Y for object i. The U and V components can be determined in the same manner.

\[ YM_i = \frac{1}{10}\sum_{j=1}^{10} Y_{ij} \tag{2} \]
\[ YU_i = YM_i + YE_i \tag{3} \]
\[ YL_i = YM_i - YE_i \tag{4} \]

B. Intelligent Vision Fusion Architecture

Vision is the most important sensor for the soccer robot to detect the investigated objects with specified color patterns in the environment. Due to the large competition ground of the middle size robot soccer game, the robot can hardly detect all objects in the competition field. A traditional front vision module can only investigate the


objects in front of the CCD within a particular viewing angle. However, it cannot observe objects outside that viewing angle.

In general, the omni-directional vision solution is used for the middle size soccer robot. The omni-directional camera can create a view with 360 degrees around the observer (e.g., the soccer robot). The photo of the omni-directional camera used in this work (the MapCam360 model from the EeRise Corporation [8]) is shown in Fig. 7(a), and an omni-directional image example is shown in Fig. 7(b). Although the omni-directional camera can capture a 360-degree image around the robot, the serious image distortion at far locations (greater than 200 cm in this work) makes objects there hard for a human to identify and difficult for the computer program to recognize when the camera height is limited (80 cm in this work).

Fig. 7 Photo of the omni-directional camera (a) and an omni-directional image example (b)

Based on different image properties of the traditional front vision and the omni-directional vision cameras, we found their properties are complementary. Therefore, we design the intelligent vision fusion architecture to investigate all the objects in the competition ground completely. The intelligent vision fusion architecture is shown in Fig. 8. Due to the ball recognition being the most important task for the soccer robot, the ball recognition procedure is used to illustrate the proposed intelligent vision fusion architecture. In this architecture, the images are fused in terms of the strengths of individual camera properties to compensate their weaknesses.

Fig. 8 Intelligent vision fusion architecture

In this architecture, the omni-directional vision acts as the primary vision sensor, and the front vision acts as the auxiliary vision sensor. The omni-directional camera is mounted on the top of the soccer robot at a height of 78 cm (not exceeding the 80 cm limitation). The front vision camera is mounted at the front of the robot, with a viewing angle determined experimentally, so as to investigate the ball clearly at possibly far distances in the competition ground.

The image fusion architecture initially captures the images from the omni-directional camera, and the ball (or target object) is recognized. If the ball is not recognized, then the ball may be either masked by the enemy robot or far away from the robot. The masking problem is not discussed in the image recognition architecture. If the ball cannot be recognized from the omni-directional image, then the vision fusion controller switches to the front vision camera automatically. At this moment, the robot rotates itself to find the ball, since the ball may not be in the front direction of the robot. Once the ball is recognized, the robot stops rotating and moves straight toward the ball until the ball is recognized by the omni-directional vision. Based on the intelligent vision fusion architecture, the image recognition effort is significantly reduced.
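The switching behavior described above can be sketched as a small state machine. The state names and the boolean recognition flags below are hypothetical, since the paper does not give code-level details of the controller:

```c
/* Minimal sketch of the vision fusion controller: the omni-directional
   camera is primary; when it loses the ball the controller switches to
   the front camera and rotates to search. State and flag names are
   illustrative placeholders. */
typedef enum {
    OMNI_TRACK,     /* ball tracked in the omni-directional image        */
    FRONT_SEARCH,   /* rotating in place, searching with the front camera */
    FRONT_APPROACH  /* driving straight at the ball seen by the front camera */
} FusionState;

FusionState fusion_step(FusionState s, int ball_in_omni, int ball_in_front)
{
    switch (s) {
    case OMNI_TRACK:
        /* ball lost by the omni camera -> switch to front vision */
        return ball_in_omni ? OMNI_TRACK : FRONT_SEARCH;
    case FRONT_SEARCH:
        /* rotate until the front camera sees the ball */
        return ball_in_front ? FRONT_APPROACH : FRONT_SEARCH;
    case FRONT_APPROACH:
        /* move straight until omni vision reacquires the ball */
        return ball_in_omni ? OMNI_TRACK : FRONT_APPROACH;
    }
    return s;
}
```

Calling `fusion_step` once per frame with the two recognition results reproduces the switching cycle: omni tracking, front search, front approach, and back to omni tracking.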

IV. COMPETITION STRATEGY AND SUBGOAL DEFINITION

A. Soccer Competition Strategy

Since the proposed soccer robot is capable of dribbling

and kicking, the competition strategies are categorized into the "with dribbling" and "without dribbling" scenarios. The competition strategy is shown in Fig. 9. In either the "with dribbling" or the "without dribbling" mode, the ball must be recognized from the omni-directional camera rather than the front vision camera. If the ball is far away from the robot (i.e., it cannot be recognized by the omni-directional camera), then

the proposed vision fusion approach is executed to move the robot until the ball is recognized from the omni-directional camera. In this manner, the soccer competition strategy can be further executed.

Fig. 9 Soccer robot competition strategy

For the "with dribbling" mode, the attack procedure is very simple because the distortion effects of the omni-directional image can be ignored. Fig. 10 shows the


attack procedure of the "with dribbling" mode. In this case,

the robot is at position "Pos A" with direction "Dir A". The

ball is at position "Pos Ball". Initially, the robot rotates itself from "Dir A" to "Dir B" so that the ball is in the front direction of the robot, and then the robot moves straight toward the ball. When the ball is near the robot (around 20 cm), the dribbling mechanism is automatically operated. Finally, the robot can

continuously keep the ball at position "Pos B". At this moment, the robot rotates itself again from the current direction to direction "Dir C" so that the goal target is in the front direction of the robot. Then, the robot moves straight toward the goal

target until the robot is in front of the goal at a predefined

distance ("Pos C"). Note that this position is determined in

terms of comparing the goal area pixels. Consequently, the kicking mechanism is automatically operated by controlling the pneumatic solenoid valve to shoot the ball.


Fig. 10 Attack procedure with dribbling function

For the "without dribbling" mode, the attack procedure is

more complicated because the distortion effects of the omni-directional image must be considered. Fig. 11 shows the attack procedure of the "without dribbling" mode. In this case, the robot is at position "Pos A" with direction "Dir A". The ball is at position "Pos Ball". Because it is not easy to keep the ball when the robot rotates about a point without the dribbling function, the soccer robot simply pushes

the ball in the desired direction toward the target. For this

reason, the "without dribbling" mode defines a subgoal for the ball-shooting purpose. The subgoal is defined in the

opposite direction of the ball-to-goal direction at a specified distance (D0), as indicated by "Pos B" in Fig. 11. In this manner, the positions of the subgoal, ball and goal target become

collinear.

Fig. 11 Attack procedure without dribbling function

Due to the huge distortion of the omni-directional images,

the planar relative coordinates of the ball and goal target are

reconstructed to calculate the subgoal, as shown in Fig. 11. At the same time, the planar coordinate of the subgoal is converted to the omni-directional image coordinate for display. Note that the coordinate conversions between the

omni-directional image coordinates and the planar coordinates use curve fitting approaches [3]. Based on practical experiments, the parameters of the conversion equations converged after training on 20 data points. The equations are shown in (5) and (6).

D_p = 0.000577 D_i^3 - 0.16192 D_i^2 + 15.9966 D_i - 487.564   (5)
D_i = 9.03×10^-6 D_p^3 - 0.00609 D_p^2 + 1.48637 D_p + 22.1253   (6)

where D_p indicates the real planar distance (in cm) and D_i indicates the pixel distance in the omni-directional image. When the robot reaches "Pos B", it rotates itself to direction "Dir C". The robot then moves along "Dir C" to touch the ball, and further pushes the ball toward the goal target. Consequently, the ball reaches the goal. Finally, it is noted that the previously described competition strategies are executed based on the one vs. one

robot soccer competition. The team competition strategy can be further extended to address practical issues. In addition, obstacles can be avoided by changing the subgoal dynamically based on pre-defined rules.
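The fitted conversion curves (5) and (6) are straightforward to evaluate; a sketch using the coefficients reported in the text:

```c
#include <math.h>

/* Cubic conversion curves between omni-image pixel distance Di and
   planar distance Dp (cm), with the fitted coefficients from the text. */
double pixel_to_planar(double di)   /* equation (5) */
{
    return 0.000577 * di * di * di - 0.16192 * di * di
         + 15.9966 * di - 487.564;
}

double planar_to_pixel(double dp)   /* equation (6) */
{
    return 9.03e-6 * dp * dp * dp - 0.00609 * dp * dp
         + 1.48637 * dp + 22.1253;
}
```

Note that the fit is only meaningful over the calibrated range of the 20 training points; extrapolating a cubic beyond that range can produce unusable values.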

V. IMPLEMENTATIONS AND PRACTICAL EXPERIMENTATIONS

The "Formosa middle size soccer robot" is physically implemented. In order to increase the system reliability, the embedded Linux platform is used. The proposed system is developed on the EeRise PanoServer 3000 machine [8]. The

EeRise PanoServer 3000 is a PC based computer with Intel Pentium 4 2.4GHz processor. The operation system is the

Linux 2.4.20. In addition to supporting the embedded platform, the PanoServer 3000 is configured with four individual vision input channels and eight general-purpose input/output (GPIO) nodes to support the soccer robot functions. The software modules are developed using the C language. The photo of the completely assembled soccer robot is shown in Fig. 12.

The functions and approaches proposed in this paper were tested and verified in our laboratory. Note that the lighting condition was poor and inconsistent during the experiments. The experimental results are presented in this paper. Fig. 13 shows the vision fusion with the ball far away from the robot (left-hand side), and the automatically switched front vision camera (right-hand side). The ball in the front vision camera image is very clear and easily recognized. In addition, the ball and goal recognitions were also verified, and two cases are shown in Fig. 14.

Meanwhile, the curve fittings for the conversions between the omni-directional and planar coordinates are also practically verified using equations (5) and (6). Fig. 15 shows comparisons of the reconstructed planar distances from the omni-directional images and the real measured distances. The reconstruction error is around 19.14 cm over 20 test data points,


and this is acceptable for the middle size soccer robot. The reconstructed planar coordinates can be further used to define the subgoals for the "without dribbling" mode, as shown in Fig. 16. Note that the subgoals are marked using black cross ("+") marks, and they are plotted using the converted planar coordinates as indicated in equation (6).

Fig. 12 Assembled Formosa middle size soccer robot

Fig. 13 Omni-directional and front images

Fig. 14 Ball and goal recognitions

Fig. 15 Comparisons of the reconstructed planar distance and the real measured distances

VI. CONCLUSIONS AND FUTURE WORKS

In this paper, the "Formosa middle size soccer robot" is introduced, and the hands-on implementations, including the robot mechanical platform with dribbling and kicking


functions, the intelligent vision fusion architecture, the curve fitting based conversions between the omni-directional and planar coordinates, and the competition strategies for the "with dribbling" and "without dribbling" modes, were developed and integrated. In particular, by introducing the intelligent vision fusion approach, the image recognition effort is reduced and the image recognition system becomes more robust. Finally, we also validated the functional performance of the "Formosa middle size soccer robot" by participating in the "2004 Taiwan annual middle size soccer robot competition".

Fig. 16 Subgoals for four cases

ACKNOWLEDGEMENTS

This work was supported by the Ministry of Education, Taiwan, R.O.C., under the 2004 Student Hand-On Competition Project.

REFERENCES

[1] P. Bonnin, O. Stasse, V. Hugel, P. Blazevic, "How to Introduce a Priori Visual and Behavioral Knowledge for Autonomous and Mobile Robots to Operate in Known Environments," IEEE International Conference on Emerging Technologies and Factory Automation, Vol. 2, pp. 409-418, 2001.

[2] M. Dietl, J.-S. Gutmann, B. Nebel, "Cooperative Sensing in Dynamic Environments," IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. 3, pp. 1706-1713, 2001.

[3] C.F. Gerald, P.O. Wheatley, Applied Numerical Analysis, Addison-Wesley, 1999.

[4] J. Jia, W. Chen, Y. Xi, "Design and Implementation of an Open Autonomous Mobile Robot System," IEEE International Conference on Robotics and Automation, Vol. 2, pp. 1726-1731, 2004.

[5] A.D. Kulkarni, Artificial Neural Networks for Image Understanding, Van Nostrand Reinhold, 1994.

[6] S.S. Lin, R. Bajcsy, "High Resolution Catadioptric Omni-Directional Stereo Sensor for Robot Vision," IEEE International Conference on Robotics and Automation, Vol. 2, pp. 1694-1699, 2003.

[7] T. Schmitt, R. Hanek, M. Beetz, S. Buck, B. Radig, "Cooperative Probabilistic State Estimation for Vision-Based Autonomous Mobile Robots," IEEE Transactions on Robotics and Automation, Vol. 18, Issue 5, pp. 670-684, 2002.

[8] URL: http://www.eerise.com.tw/
[9] URL: http://www.fira.net/
[10] URL: http://www.mitsubishi.com/
[11] URL: http://www.proe.com/
[12] URL: http://www.robocup.org/