
Omni-vision based autonomous mobile robotic platform

Zuoliang Cao*a, Jun Hu*a, Jin Cao**b, Ernest L. Hall***c

aTianjin University of Technology, Tianjin, China; bFatwire Corporation, New York, USA; cUniversity of Cincinnati, Cincinnati, USA

ABSTRACT

As a laboratory demonstration platform, the TUT-I mobile robot provides various experimentation modules to demonstrate the technologies involved in remote control, computer programming, and teach-and-playback operations. Typically, the teach-and-playback operation has proved to be an effective solution, especially in structured environments. The path generated in the teach mode and the path correction by real-time path error detection in the playback mode are demonstrated. A vision-based image database is generated as the representation of the given path in the teaching procedure. An online image positioning algorithm is performed for path following. Advanced sensory capability is employed to provide environment perception. A unique omnidirectional vision (omni-vision) system is used for localization and navigation. The omnidirectional vision system involves an extremely wide-angle lens, whose dynamic omni-vision image is processed in real time to provide the widest possible view during movement. Beacon guidance is realized by observing the locations of points derived from overhead features such as predefined light arrays in a building. The navigation approach is based upon the omni-vision characteristics. A group of ultrasonic sensors is employed for obstacle avoidance.

Keywords: Mobile robot, omnidirectional vision, navigation, teach-and-playback, image database.

1. INTRODUCTION

The robotics laboratory is a major research division of Tianjin University of Technology (TUT). It is equipped with an industrial robot work cell and a mobile robotic platform with a series of lab-demo system modules. The TUT mobile robotic platform, as a wheel-based demo system, was developed to adapt to current education and research needs. The emphasis of design concepts is placed on the following considerations.

(1) Mobile robotic devices promise a wide range of applications for industrial automation in which stationary robots cannot produce satisfactory results because of limited working space. Such applications include material transfer and tool handling in factories, and automatically piloted carts for delivering loads such as mail around office buildings and for service in hospitals. Most advanced manufacturing systems use some kind of computerized transportation system.

*[email protected]; phone 86 22 23688585; fax 86 22 23362948; http://www.tjut.edu.cn; Tianjin University of Technology, Tianjin, 300191, China. **[email protected]; phone 1 516 3289473; fax 1 516 739-5069; http://www.Fatwire.com; Fatwire Co., 330 Old Country Road, Suite 207, Mineola, New York, USA 11501. ***[email protected]; phone 1 513 5562730; fax 1 513 5563390; http://www.uc.edu; Univ. of Cincinnati, Cincinnati, OH 45242, USA.

(2) The facilities of the laboratory support education in Robotics and Mechatronics, which includes a series of primary and typical robotics courses such as kinematics, dynamics, motion control, robot programming, and machine intelligence.

(3) Laboratory experiments are major resources for studying and researching future technologies. Particular application demonstrations are necessary to establish basic concepts and approaches in new development areas and to meet the need for research experience and capability.

Navigation technique is one of the most common and important factors in robotics research. The use of a fixed guidance path is currently the most common and reliable technique. Such a method uses a signal-carrying wire buried in the floor with an on-board inductive coil, or a reflective stripe painted on the floor with an on-board photoelectric device. In the laboratory, a magnetic strip is laid on the floor and the vehicle is equipped with a detecting sensor that directs the vehicle to follow it. A more recent and preferred approach employs laser beam reflection or digital imaging units [1, 2] to navigate a free-ranging vehicle.

It is feasible and desirable to use sensor-driven, computer-controlled and software-programmed automatic systems for AGV development. A series of special sensors may be employed for mobile robots. Various methods have been tested for mobile robot control using ultrasonic [3, 4], infrared, optical and laser emitting units. Data fusion [5], fuzzy logic [6, 7], neural network [5] and machine intelligence technologies [8-10] have been developed.

Machine vision capability may have a significant effect on many robotics applications. An intelligent machine, such as an autonomous mobile robot, must be equipped with a vision system to collect visual information that is used to adapt to its environment. Because of the limited field of view, current imaging systems restrict the performance of robotic systems. The development of omnidirectional vision systems appears to offer more advantages. The omnidirectional vision navigation program [11, 12] is an example.

The mobile robotic platform, as a lab-demo system, features a hybrid (in the sense of a variety of actuators and sensors), reconfigurable, easily accessible and changeable controller, along with some advanced features. Both laboratory hardware modules and advanced computer software packages provide an appropriate environment, supplemented by a graphics simulation system. The two laboratory phases form an experimentation base from the fundamentals of robot operation through integration with new control technologies. The advanced experiments are divided into two phases as follows:

Phase one: robotic control, navigation, obstacle avoidance, sensory fusion.
Phase two: computer vision, image processing, path tracking and planning, machine intelligence.


The TUT-I mobile robotic platform can be operated in four modes: human remote control, programmed control, teach-and-playback, and automatic path planning in structured environments. Typically, the main function is the teach-and-playback operation for indoor transportation. Autonomous mobile robots can be considered automated path-planning, free-ranging vehicles. However, the cooperation of autonomous capability with a supervising operator appears to be the engineering compromise that provides a framework for the application of mobile robots, especially in structured environments. The operation mode of teaching a path and playing it back is an effective solution not only for manipulator arms but also for vehicles. The vehicle records the beacon data progressively in an on-board computer memory during a manually driven teaching-mode trip along a desired course. On subsequent unmanned trips, the vehicle directs itself along the chosen course by observing the beacons and comparing the data. The steering correction allows the vehicle to follow the taught course automatically. Path generation in the teaching mode and path correction by real-time path error detection in the playback mode are demonstrated by the TUT-I robotic system. The vision-based image database is generated as the representation of the given path in the teaching procedure. An image-processing algorithm is performed for path following. The path tracking involves an online positioning method.

Advanced sensory capability is employed to provide environment perception. A unique omni-vision system is used for localization and navigation. The omnidirectional vision system involves an extremely wide-angle lens with a CCD camera, which has the feature that a dynamic omni-vision image can be processed in real time. Beacon guidance is realized by observing the locations of points derived from overhead features such as predefined light arrays in a building. The navigation approach is based upon the omni-vision characteristics. A group of ultrasonic sensors is employed to avoid obstacles. The other sensors are utilized for system reliability. A multi-sensor data fusion technique is developed for collision-free trajectory piloting.

The TUT-I mobile platform can be decomposed into five distinct subsystems: locomotion with control and man-machine interface, sensors for obstacle avoidance, an omni-vision module with image processor, a navigator with central controller, and power source units. The paper provides a concise series of descriptions of path planning, obstacle avoidance, control strategies, navigation, vision systems, ranging systems and various application modules.

2. CONFIGURATION OF THE OMNIDIRECTIONAL VISION NAVIGATION UNITS

The platform comprises an unmanned vehicle with two driven wheels and two free-swiveling front wheels. The chassis contains the necessary elements to power, propel and steer the vehicle and other on-board equipment. Around the chassis, a number of ultrasonic ranging sensors are mounted. The ultrasonic devices sense the presence of objects in the vehicle's path and prevent collisions. As a further precaution, safety switches stop the vehicle if it contacts anything.

Omnidirectional vision means that an entire hemispherical field of view is seen simultaneously. This is realized by means of an extremely wide-angle optical imaging device, called a fisheye lens, with a CCD camera. Omnidirectional vision guidance is a new and unique navigation technique. Omni-vision appears to have definite significance in navigation applications for various autonomous guided vehicles. The omni-vision automated guiding system, referenced to overhead lights, consists of the following five components, shown in Fig. 1:

(1) A fisheye lens and a CCD camera with an automated electronic shutter. The electronic shutter control system uses a single-chip microcomputer and is suitable for the TK-60 CCD camera. When the illumination changes, the system still enables the camera to produce a good output image.
(2) A camera stand with five degrees of freedom.
(3) Beacon tracker: an image-processing computer for real-time data acquisition of the targets.
(4) Omni-image distortion corrector.
(5) Navigator, which includes three function modules: path generator, path error detector and corrector.

The system performs path planning and tracking in the teach-and-playback mode by referring to overhead lights. It guides itself by referring to overhead visual targets that are universally found in buildings. A group of predefined overhead lights is usually selected as landmarks, so it is not necessary to install any special features. Since two points are sufficient to determine the vehicle's position and orientation, the beacon group should consist of at least two lights as guiding targets in each image frame. However, the algorithms may be adjusted to handle any specified number of targets greater than two, which would produce a more accurate and robust system. Even if a target is lost from time to time, the vehicle would still have at least the minimum number of targets for guidance.

The tracker is a real-time digital image processor. Although multiple targets can be multiplexed through a single target tracker, a multi-target tracker is used to eliminate the need for multiplexing software. Up to four image windows, or gates, can be used to track the apparent movement of the overhead lights simultaneously. The beacon tracker is the major on-board hardware. It outputs the coordinates of a pair of lights that are shown on the monitor screen, with a tracking gate placed around each of them.

3. TEACHING AND PLAYBACK OPERATION MODE BASED ON IMAGE DATABASE


For industrial manipulators, the teach-and-playback operation mode, through a teach pendant handled by an operator, is a traditional technique. For an automated guided vehicle it appears to be an unusual method, but it is still an effective solution. The vision-based image database is employed as the desired path generator. In the teaching mode, the operator manually causes the vehicle to move forward along the desired path at a selected speed. The selected targets with their gates move downward on the monitor screen as the vehicle passes underneath the overhead lights. The data are recorded in the reference database, as described below. When a target is nearly out of the field of view, the frame is terminated and a new pair of targets is selected. The process is repeated. This combination of vision frames and position tracking continues until the desired path has been generated and the reference data for that track are recorded in memory. In the subsequent playback mode, the vehicle automatically maintains itself on the desired path by comparing the analogous record with the vehicle's actual path of movement, and steering corrections are made to bring the path errors (Ex, the lateral error; Ey, the error along the path; and Ea, the angular orientation error) to zero, thereby keeping the vehicle on the intended course. As the value of Ey diminishes and approaches zero, the next reference frame is called up. The cycle is repeated. This procedure causes the vehicle to follow the desired path.
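To make the playback sequencing concrete, the following Python sketch outlines one plausible structure for the loop just described. It is an illustration only: the three callables stand in for the beacon tracker, the path error detector (Sections 3 and 4) and the motion controller (Section 5), and the Ey tolerance is an arbitrary placeholder.

def playback(reference_frames, read_tracker, compute_path_errors, steer,
             ey_threshold=0.05):
    # Follow the taught path by stepping through the recorded reference
    # frames and driving the path errors toward zero (sketch only).
    for frame in reference_frames:
        while True:
            observed = read_tracker()                          # current beacon coordinates
            ex, ey, ea = compute_path_errors(observed, frame)  # Ex, Ey, Ea
            steer(ex, ea)                                      # steering correction (Section 5)
            if abs(ey) < ey_threshold:                         # Ey approaches zero
                break                                          # call up the next reference frame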

A database with the tree structure shown in Fig. 2 is built up to record the data stream. The vision-based image database is a three-layer data structure. The data structure is a group of array pointers, and each array element points to a node in the next layer. The array index is the serial number of the sample, representing frames, fields and records respectively. The method creates the distinctive property that the desired path is defined by an image database.
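To illustrate the three-layer organization, a minimal Python sketch of the structure is given below. The paper does not enumerate the stored fields, so the record contents shown here (the image coordinates of the two tracked targets) are an assumption.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Record:
    # One sample: assumed to hold the tracked coordinates of the beacon pair.
    target1: Tuple[float, float]
    target2: Tuple[float, float]

@dataclass
class Field:
    # Second layer: the records sampled within one field.
    records: List[Record] = field(default_factory=list)

@dataclass
class Frame:
    # Top layer: one reference frame, valid until its targets leave the view.
    fields: List[Field] = field(default_factory=list)

# The taught path is an ordered list of frames; the array index at each layer
# is the serial number of the sample, as described above.
taught_path: List[Frame] = []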

There are three coordinate systems: the beacon coordinate system, the vehicle coordinate system and the image coordinate system. On the basis of coordinate conversion, the path errors can easily be calculated when the vehicle departs from the desired path, as detected from the recorded locations of geometric points derived from the various elements of the existing pattern. At this point it is possible to compare the observed target coordinates (Tx, Ty) with the reference target coordinates (Rx, Ry) and determine the angle error Ea, the lateral position error Ex and the longitudinal position error Ey.
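A minimal sketch of this comparison for a single pair of beacons follows. The paper does not give the explicit formula, so the geometry below (orientation from the line joining the two lights, position from the displacement of their midpoint) is an assumed reconstruction, with all coordinates expressed in the vehicle frame.

import math

def path_errors(p1, p2, r1, r2):
    # p1, p2: observed target coordinates (Tx, Ty) of the two lights.
    # r1, r2: reference target coordinates (Rx, Ry) recorded in teach mode.
    # Angular error Ea: difference in orientation of the beacon pair.
    ang_obs = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    ang_ref = math.atan2(r2[1] - r1[1], r2[0] - r1[0])
    ea = math.atan2(math.sin(ang_ref - ang_obs), math.cos(ang_ref - ang_obs))

    # Position errors Ex (lateral) and Ey (along the path): midpoint displacement.
    ex = (p1[0] + p2[0]) / 2.0 - (r1[0] + r2[0]) / 2.0
    ey = (p1[1] + p2[1]) / 2.0 - (r1[1] + r2[1]) / 2.0
    return ex, ey, ea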

4. LENS DISTORTION AND PIXEL CORRECTION

For vision-based measurement, in order to obtain the real-world position of the vehicle, lens distortion and camera chip pixel distortion must be corrected. The target image coordinates supplied by the video tracker require correction for two inherent errors before they can provide accurate input for guiding the vehicle: the lens distortion error and the pixel distortion error.


The wide-angle lens used in this system has considerable distortion, and consequently the image coordinates of the overhead lights, as reported by the tracker, are not proportional to the true real-world coordinates of those lights. The lens distortion causes the image range Ir to vary as a function of the zenith angle B; depending on the lens used, the function may be linear, parabolic or trigonometric. The distortion is determined experimentally by laboratory measurements from which the function is obtained. For the lens used here, the distortion correction follows the linear formula shown in Fig. 3:

Ir = KB

where K is a constant derived from the distortion measurement. The correction factor for a specific lens distortion is provided by the lens correction software.

The pixels in the CCD chip have different scales in the X and Y directions. This causes the target coordinates to be scaled differently on the X and Y axes. It is necessary to divide the Y coordinates by a factor R, which is the ratio of pixel length to height. This brings the X and Y coordinates to the same scale. The following equations give this conversion:

dx = xi - xc
dy = (yi - yc) / R

where (xc, yc) are the coordinates of the camera center. From these the image range Ir is known. The image coordinates (dx, dy) on the focal plane are converted into the target coordinates (Tx, Ty) at the known height H, as shown in Fig. 3:

Tr = H tan B

Since B = Ir / K, this becomes

Tr = H tan (Ir / K)

where Tr is the target range in the real-world coordinate system. Since the azimuth angle C to a target point is invariant between the image and real-world coordinate systems, the target world coordinates (Tx, Ty) can be calculated as:

Tx = dx Tr / Ir
Ty = dy Tr / Ir

In order to transfer the image coordinates from an origin at the camera centerline to the vehicle centerline, calibration of the three coordinate systems discussed above is necessary to determine the center point of the coordinates.
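Putting the corrections of this section together, the following Python sketch converts one tracker image coordinate into target coordinates in the plane of the overhead lights. The calibration values K, R, H and the camera-center coordinates (xc, yc) are the measured quantities described above; the function name and its structure are illustrative.

import math

def image_to_target(xi, yi, xc, yc, R, K, H):
    # Pixel aspect-ratio correction (R = ratio of pixel length to height).
    dx = xi - xc
    dy = (yi - yc) / R

    # Image range from the corrected camera center.
    ir = math.hypot(dx, dy)
    if ir == 0.0:
        return 0.0, 0.0                  # target directly overhead

    # Linear fisheye model: zenith angle B = Ir / K, ground range Tr = H tan B.
    b = ir / K
    tr = H * math.tan(b)

    # The azimuth angle is invariant, so scale the image offset by Tr / Ir.
    tx = dx * tr / ir
    ty = dy * tr / ir
    return tx, ty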

5. THE MOTION CONTROL MODULES

A programmable two-axis motor controller is used. A non-programmable model may be chosen, in which case the necessary programming modules can be incorporated into the vehicle computer. The controller must selectively control the velocity or position of two motors and utilize appropriate feedback, such as from optical encoders and tachogenerators coupled to either the motors or the wheels, which constitutes a two-degree-of-freedom closed-loop servo system.

The vehicle has two self-aligning front wheels and two driving rear wheels. Velocity-mode control is used when the vision system is guiding the vehicle. In this mode, steering is accomplished by the two driving motors turning at different speeds according to the inputs VL and VR (left and right wheel velocities) from the computer. The wheel encoders provide local feedback for the motor controller to maintain the velocity profile programmed during the teaching mode.

By speeding up one wheel and slowing the other by an equal amount dV, the motor control strategy steers the vehicle back to its desired path. Since the sampling frequency is high enough, the system can be considered a continuous feedback system. In practice, a conventional PID compensator can be designed to achieve the desired performance specifications. A simple control formula is used as follows:

dV = k1 Ex + k2 Ea

where Ex and Ea are the outputs of the path error measurement circuit. The gains k1 and k2 are constants which can be calculated mathematically from the significant parameters of the vehicle dynamics and kinematics, or determined experimentally.

VL = Vm + dV
VR = Vm - dV

where Vm is the velocity at the centerline of the vehicle. The sign of dV determines the turning direction and its magnitude determines the turn radius. The control formula brings the vehicle back onto course in the shortest possible time without overshooting. The block diagram in Fig. 4 represents the closed-loop transfer function; for the output Ex it involves PD compensation.

In Fig. 4, D is the diameter of the rear drive wheels, w is the distance between the two rear wheels, kn is a constant related to the motor output, and V0 is the velocity of the mass center of the vehicle. The right diagram is a simplified unity-feedback system derived from the left diagram. A pair of optimal values of k1 and k2 can be selected, through calculation or experiments in practice, to give the desired performance of the system.
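A minimal Python sketch of this steering law follows; the gain values and the velocity clamp are illustrative placeholders rather than the experimentally tuned constants discussed above.

def wheel_speeds(ex, ea, vm, k1=0.8, k2=1.5, v_max=1.0):
    # Differential steering: dV = k1*Ex + k2*Ea, VL = Vm + dV, VR = Vm - dV.
    dv = k1 * ex + k2 * ea
    vl = max(-v_max, min(v_max, vm + dv))   # left wheel velocity command
    vr = max(-v_max, min(v_max, vm - dv))   # right wheel velocity command
    return vl, vr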


Fig. 5: TUT-I Omni-vision based autonomous mobile robotic platform



6. CONCLUSION

Photographs of the TUT-I omni-vision based autonomous mobile robotic platform are shown in Fig. 5. The technical points of the omni-vision based autonomous mobile robotic platform include the following properties:

(1) Omni-vision provides an entire scene using a fisheye lens and appears useful in a variety of robotics applications. An overall view is always required for safe and reliable operation of a vehicle. While the conventional method of camera scanning appears generically deficient, dynamic omni-vision is considered a definite advantage, particularly for mobile navigation.

(2) The preferred approach is for an unmanned vehicle to guide itself by referring to overhead visual targets such as predefined overhead lights, since they are universally found in structured environments and are not easily blocked by floor obstacles. The guidance system does not require the installation of any special equipment in the work area. The point-matrix pattern of the beacons, used as an environment map, is very simple to process and understand.

(3) The teach-and-playback operation mode seems appropriate not only for robotic manipulators but also for vehicles. The vision-based image database, as a teaching path record or a desired path generator, creates a unique technique to expand robot capability.

ACKNOWLEDGMENTS

The authors gratefully acknowledge the support of the K. C. Wong Education Foundation, Hong Kong.

REFERENCES

1. Xiaoqun Liao, Jin Cao, Ming Cao, Tayib Samu, and Ernest Hall, "Computer vision system for an autonomous mobile robot," Proc. SPIE Intelligent Robots and Computer Vision Conference, Boston, November 1998.

2. E. L. Hall, "Fundamental principles of robot vision," Handbook of Pattern Recognition and Image Processing: Computer Vision, Academic Press, New York, pp. 543-575, 1994.

3. Qing-hao Meng, Yicai Sun, and Zuoliang Cao, "Adaptive extended Kalman filter (AEKF)-based mobile robot localization using sonar," Robotica, Vol. 18, pp. 459-473, 2000.

4. Gordon Kao and Penny Probert, "Feature extraction from a broadband sonar sensor for mapping structured environments efficiently," The International Journal of Robotics Research, Vol. 19, No. 10, pp. 895-913, 2000.

5. Minglu Zhang, Shangxian Peng, and Zuoliang Cao, "The artificial neural network and fuzzy logic used for the obstacle avoidance of a mobile robot," China Mechanical Engineering, Vol. 18, pp. 21-24, 1997.

6. Hong Xu and Zuoliang Cao, "A three-dimension-fuzzy wall-following controller for a mobile robot," ROBOT, Vol. 18, pp. 548-551, 1996.

7. T. I. Samu, N. Kelkar, and E. L. Hall, "Fuzzy logic system for three dimensional line following for a mobile robot," Proc. of the Adaptive Distributed Parallel Computing Symposium, Dayton, OH, pp. 137-148, 1996.

8. Zuoliang Cao, "Region filling operations with random obstacle avoidance for mobile robot," Journal of Robotic Systems, 5(2), pp. 87-102, 1988.

9. Zvi Shiller, "Online suboptimal obstacle avoidance," The International Journal of Robotics Research, Vol. 19, No. 5, pp. 480-497, 2000.

10. Alain Lambert and Nadine Le Fort-Piat, "Safe task planning integrating uncertainties and local maps federations," The International Journal of Robotics Research, Vol. 19, No. 6, pp. 597-611, 2000.

11. Liming Zhang and Zuoliang Cao, "Teach-playback based beacon guidance for autonomic guided vehicles," Journal of Tianjin Institute of Technology, Vol. 12, No. 1, pp. 28-31, 1996.

12. Liming Zhang and Zuoliang Cao, "Mobile path generating and tracking for beacon guidance," 2nd Asian Conference on Robotics, 1994.
