
Automation in Construction 18 (2009) 929–941

    Contents lists available at ScienceDirect

    Automation in Construction

journal homepage: www.elsevier.com/locate/autcon

    Bridge inspection robot system with machine vision

Je-Keun Oh a, Giho Jang a, Semin Oh a, Jeong Ho Lee a, Byung-Ju Yi a, Young Shik Moon a, Jong Seh Lee b, Youngjin Choi a,⁎

a Division of Electrical Engineering and Computer Science, Hanyang University, Ansan, 426-791, Republic of Korea
b Division of Construction and Transportation Engineering, Hanyang University, Ansan, 426-791, Republic of Korea

⁎ Corresponding author. Tel.: +82 31 400 5232; fax: +82 31 436 8156.
E-mail addresses: [email protected] (J.-K. Oh), [email protected] (G. Jang), [email protected] (S. Oh), [email protected] (J.H. Lee), [email protected] (B.-J. Yi), [email protected] (Y.S. Moon), [email protected] (J.S. Lee), [email protected] (Y. Choi).

0926-5805/$ – see front matter © 2009 Elsevier B.V. All rights reserved.
doi:10.1016/j.autcon.2009.04.003

Article info

Article history: Accepted 23 April 2009

Keywords: Bridge inspection; Robot system; Machine vision; System integration

Abstract

A robotic system for inspecting the safety status of bridges is proposed in this paper. Currently, most bridge inspections are done manually by counting the number of cracks, measuring their lengths and widths and taking pictures of them. The quality and reliability of diagnosis reports on bridges therefore depend greatly on the diligence and training of inspection workers. The proposed robotic inspection system consists of three parts: a specially designed car, a robot mechanism and control system for mobility, and a machine vision system for automatic detection of cracks. Ultimately, this robot system has been developed to gather accurate data in order to record the biennial changes in a bridge's safety condition as well as to check the safety status of bridges. We also demonstrate the effectiveness of the suggested crack detecting and tracing algorithms through experiments on a real bridge crack inspection.

    © 2009 Elsevier B.V. All rights reserved.

    1. Introduction

Bridge inspection is a critical responsibility. Failure to properly inspect bridges has resulted in abrupt bridge collapses; for instance, the Songsu Bridge collapse in Korea in 1994 and the Minneapolis bridge collapse in the US in 2007. The number of bridges has gradually increased in Korea, resulting in an increased need for inspections, as shown in Table 1. Currently, about 12,000 bridges must be inspected every other year for safety tests according to Korean law [1]. Also, the maintenance and repair cost of bridges has increased rapidly every year, growing 200-fold in the 10 years since 1995. Thus it is increasingly necessary to find a more efficient and economical method to maintain bridges through precise diagnosis.

Although robot technologies have evolved in a variety of industrial areas, robot application technologies for the safety diagnosis and maintenance of real bridges have lagged behind. Until recently, bridge inspection and maintenance have been conducted manually by trained inspection workers working outdoors [2,3]. The inspection workers check the safety status beneath the bridge by counting the number of cracks, measuring the maximum widths and lengths of the crack lines and taking pictures of them. Thus the accuracy and quality of the diagnosis report become subjective, and results differ according to the diligence of the inspection workers. Also, since the bridge inspection is performed outdoors, especially beneath the bridge, the safety of the inspection workers is a concern. Fig. 1 shows inspection workers standing on a temporary scaffolding in order to inspect the safety status of a real bridge [4–6].

Industrial accidents may occur during bridge inspection, as shown in Fig. 1. Besides cost reduction, improvement of the work environment has become one of the main considerations in bridge inspection. To help solve these problems, a bridge inspection robot system equipped with machine vision is proposed in this paper. An inspection robot is most useful when it can dispatch sensors or manipulators into inaccessible or hazardous areas, thereby making the inspection workers safer. As similar applications, an underwater inspection robot system for bridge piers has already been developed in [7]; other types of inspection robot have also been suggested, such as a climbing robot composed of a wheel mechanism with vacuum suction in [8–10], a pipe inspection robot for magnetic crack detection in iron pipes in [11] and a mobile robot for bridge girder inspection in [12].

For visual inspections, we have developed not only a robotic motion control system but also a machine vision system. During the last decades, machine vision has evolved into various fields embracing a wide range of applications including surveillance, automated inspection and vehicle guidance [13,14]. The machine vision system is able to extract useful information about a scene from its two-dimensional


Table 1. The number of bridges in Korea [1].

Year          1970   1980    1990    2000
# of bridges  9000   12,000  13,000  15,000

    Fig. 2. Overview of total mechanism for bridge inspection.


projections. The machine vision system takes images as inputs and produces other types of outputs, such as crack lengths, crack widths and an outlined sketch of the bridge status. By using the machine vision system, the accuracy of crack assessment is guaranteed and a variety of information for bridge maintenance is provided through the results of bridge inspection. In contrast to most existing crack detection methods [19–21], which find and display detected cracks, this paper introduces batch processes including the crack detecting/tracing algorithms and the conversion of the corresponding crack panorama images into CAD files for a bridge management system. In practice, crack progress speed should be estimated through biennial reports on the bridge inspection, so the suggested batch processes are essential for assessing the safety of a bridge. In this paper, we aim at raising the consistency of inspection results, improving the exactness and reliability of the biennial diagnosis report on the bridge and reducing industrial accidents. Also, the suggested bridge inspection robot system is not fully autonomous but semi-autonomous, requiring a human in the role of supervisor or operator. With these research objectives, we suggest a robotic system for detecting and tracing cracks with a machine vision system and a variety of sensors.

This paper is organized as follows: Section 2 describes the proposed robot mechanism and control system; Section 3 suggests the image processing algorithms for detecting and tracing cracks; Section 4 explains an integration method for the total system; Section 5 shows the experimental results; and Section 6 draws the conclusion.

    2. Robot mechanism and control system

    2.1. Overview

The total mechanical system for the bridge inspection is composed of a specially designed car and an inspection robot mounted at the end-point of the car, as shown in Fig. 2. The specially designed car has a multi-linkage mechanism with seven DOFs (degrees of freedom) driven by hydraulic actuators. This multi-linkage mechanism was designed to dispatch the inspection robot to the lower surface of the bridge. The inspection robot mounted on the end-point of the multi-linkage system was designed as a three-DOF mechanism equipped with electric actuators (motors), a machine vision system and various sensors. The machine vision system mounted on the end-point of the inspection robot detects and locates the cracks beneath the bridge. Since the main objective of the multi-linkage system is to dispatch the inspection robot to the position to be inspected, it has a wide workspace (about 8×4 m² in the sagittal plane with 180° rotation) as shown in Fig. 3.

Fig. 1. Inspection workers standing on temporary scaffolding [5].

Also, since the last link of the multi-linkage can be stretched from the initial 4 m to a maximum of 12 m, we must consider the weight of the inspection robot as a tip mass in order to reduce the vibration and deflection that may arise in a cantilever-beam configuration. First, as shown in Fig. 4, the maximum deflection of a simple cantilever beam with a tip mass is obtained by superposing the deflections due to the tip mass (P [kg]) and the uniformly distributed self-load (w [kg/m]) [29] as follows:

y_max = PL^3 / (3EI) + wL^4 / (8EI)    (1)

where E is the modulus of elasticity (for steel, E = 2.15×10^10 kg/m^2), L is the length (here L = 12 m), and I is the area moment of inertia defined by I = (1/12)(BD^3 − bd^3); the self-load distribution is defined by w = Aρ = (BD − bd)ρ, with cross-sectional area A and steel density ρ = 7860 kg/m^3. Second, the natural frequency equation for a cantilever beam is expressed by

ω_n = (β_n L)^2 √(EI / (ρAL^4))    (2)

and the natural frequencies computed from the data given above are summarized in Table 2.

Fig. 3. Workspace of the multi-linkage system of the specially designed car.

Fig. 4. Deflection of the last link of the multi-linkage mechanism.

After solving the deflection Eq. (1) and the natural frequency Eq. (2) with the geometric design data of Fig. 4, we obtained a self-load deflection of 0.09364 m and a first-mode natural frequency of 0.6465 Hz. If the weight of the inspection robot as a tip mass is designed to be less than 30 kg, the tip-mass deflection adds at most 0.0176 m to the self-load deflection of 0.09364 m. We therefore determined that the inspection robot system should weigh less than 30 kg. Thus we designed the last link under the assumption of a maximum tip mass of 30 kg, so that the static deflection of the multi-linkage system remains about 0.11064 m and the first resonant mode appears within a controllable low frequency, here about 0.6465 Hz. In addition, according to this design specification, we designed the inspection robot to be as light as possible, about 20 kg.
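As a sanity check, Eqs. (1) and (2) can be evaluated numerically. The sketch below uses E, L, ρ, P and β₁L from the paper; the cross-section dimensions are not given in the text, so the values of I and A are back-calculated from the reported results purely for illustration:

```python
import math

# Numerical check of Eqs. (1) and (2) for the stretched last link.
# E, L, rho, P and beta_1*L come from the paper; I and A are NOT given in
# the text -- the values below are back-calculated from the reported results
# (0.09364 m, 0.0176 m, 0.6465 Hz), so they only illustrate the formulas.
E = 2.15e10    # modulus of elasticity of steel [kg/m^2]
L = 12.0       # stretched length of the last link [m]
rho = 7860.0   # steel density [kg/m^3]
P = 30.0       # maximum tip mass (inspection robot) [kg]
I = 4.567e-5   # assumed area moment of inertia [m^4]
A = 4.515e-3   # assumed cross-sectional area [m^2]

w = rho * A                       # self-load distribution [kg/m]
y_tip = P * L**3 / (3 * E * I)    # Eq. (1), tip-mass term
y_self = w * L**4 / (8 * E * I)   # Eq. (1), self-load term

def natural_frequency_hz(beta_n_L):
    """Eq. (2): omega_n = (beta_n*L)^2 * sqrt(E*I / (rho*A*L^4)), in Hz."""
    omega_n = beta_n_L**2 * math.sqrt(E * I / (rho * A * L**4))
    return omega_n / (2 * math.pi)

f1 = natural_frequency_hz(1.875104069)   # first cantilever mode (Table 2)
print(f"{y_self:.4f} m, {y_tip:.4f} m, {f1:.4f} Hz")  # ~0.094 m, ~0.018 m, ~0.646 Hz
```

With these assumed section properties the script reproduces the paper's deflection and first-mode frequency to within rounding.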

The multi-linkage system of the specially designed car can be folded or unfolded by both manual and automatic methods. The manual method uses control sticks to operate the multi-linkage system, similarly to a conventional excavator, and the real joint motion data of the multi-linkage system are acquired through the SCI communication channel. The acquired data are then utilized in automatic mode to repeat the same unfolding procedures; the automatic method uses digital motion commands sent through the communication channel to control the motions of the multi-linkage system. As shown in Fig. 5, the multi-linkage system is operated by the hydraulic actuator of each of the seven links. According to the operation order, the first joint actuator lifts the multi-linkage from the head of the specially designed car. The second, third and fourth joint actuators rotate the multi-linkage to the side of the bridge. The fifth prismatic joint actuator adjusts the distance between the bridge and the inspection robot so that the inspection robot can detect cracks easily, because each bridge has its own thickness. The sixth rotary joint actuator and seventh prismatic joint actuator allow the robot to inspect beneath the bridge over as wide an area as possible. The workspace shown in Fig. 3 is determined by the working distances of the fifth and seventh prismatic joints and the working rotation of the sixth rotary joint. Also, in order to solve the forward kinematics of the seven-DOF multi-linkage system, the second joint should be treated first and the first joint second, because the second joint is attached to the base platform before the first joint. In addition, simple proportional controllers were utilized for the hydraulic actuator systems. The mechanism details and control methods of the inspection robot mounted on the end-point of the multi-linkage system are presented in the following section.

    2.2. Mechanism and control of inspection robot

The inspection robot consists of rotary joint mechanisms for pan/tilt motions of the machine vision system and a prismatic (up/down) joint

Table 2. Natural frequencies of cantilever beam.

Mode      β_n L          ω_n [rad/s]  Frequency [Hz]
1st mode  1.875104069    4.06230631   0.64653613
2nd mode  4.694091133    25.4581491   4.05179028
3rd mode  7.8754757438   71.6600966   11.4050586

mechanism for movement in the gravitational direction, as shown in Fig. 6. The pan and tilt mechanisms are equipped with a gyro sensor, a laser sensor and a camera (machine vision) system. The up/down movement is achieved through a two-stage mechanism driven together by a tendon-driven method as shown in Fig. 7. The two-stage mechanism was devised to bring the machine vision system as close as possible to the bridge's lower surface through its further extension; here, the maximum upstroke is about 1 m. This large upstroke is for capturing precise crack images beneath the bridge, ultimately to calculate both the lengths and widths of the cracks.

The environmental conditions under real bridges are neither constant nor controllable; for example, whenever a car passes over a bridge, dynamic shocks are transmitted as disturbances to the multi-linkage system and cause structural vibrations at its natural frequencies. The dominant resonant frequency of the multi-linkage system is designed to be lower than 1 Hz (about 0.6465 Hz), as explained in the previous section; nevertheless, the vibration may cause the captured images to be blurred. In order to overcome these disadvantages brought by unknown disturbances, we attach both a laser sensor and a gyro sensor to the end-point of the inspection robot, for measuring the exact distance between the machine vision system and the bridge's lower surface and for acquiring the absolute orientations of the camera system, respectively, as shown in Fig. 6. The laser sensor is utilized to keep a constant distance between the camera and the bridge's lower surface by using the gravitational-direction mechanism against unknown disturbances. Besides preventing blurred images, the machine vision system must keep constant orientations expressed in absolute coordinates, for it should capture the crack images in a front view to calculate their lengths and widths through the corresponding pixel counting. The gyro sensor is utilized to keep constant roll (pan) and pitch (tilt) orientations by using the pan/tilt mechanisms. To keep constant absolute orientations, the pan/tilt motions should be controlled as fast as possible. For this, we selected 6400 rpm DC motors and harmonic transmissions with a 100:1 gear ratio for the pan/tilt motions as listed in Table 3, so each directional actuator has a speed capability of 1.07 rev/s.

As mentioned, we have designed the up/down mechanism with two stages for capturing precise crack images by bringing the machine vision close to the target point as shown in Fig. 7. The two-stage mechanism is efficient in reducing the weight and the number of actuators of the inspection robot. Thus we could lift the machine vision system up to about 1 m with just one BLDC motor and a tendon-driven winding system. For this, we selected a 5450 rpm BLDC motor, a gear transmission with a 20:1 ratio and a tendon winding system with a pulley diameter of 0.034 m for the gravitational-direction motion, as shown in Table 3. Ultimately, the up/down actuator has a speed capability of 0.97 m/s. Fig. 7 shows the motion principle of the two-stage mechanism operated by one BLDC motor. The mid plate has three fixed pulleys. The motor winds or releases the reel (motor pulley) with a tendon to lift the mid plate up or down. As the mid plate is lifted up or down by the driving motor, the two side pulleys fixed at

Fig. 5. Movements of multi-linkage system according to operation order.


the mid plate lift the camera pan/tilt mechanism up or down at twice the velocity simultaneously. Also, when the driving motor winds or releases the driving tendon, the safety wire is released or wound oppositely at the same time in order to prevent detachment of the pan/tilt mechanism from the two-stage mechanism.
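The speed capabilities quoted above (1.07 rev/s for pan/tilt, 0.97 m/s for up/down) follow from simple gear and pulley arithmetic, including the velocity doubling of the two-stage mechanism; a quick check:

```python
import math

# Pan/tilt: 6400 rpm motor through a 100:1 harmonic transmission.
pan_rev_per_s = 6400 / 100 / 60

# Up/down: 5450 rpm BLDC motor, 20:1 gearbox, 0.034 m pulley diameter;
# the two-stage tendon mechanism doubles the mid-plate speed at the camera.
reel_rev_per_s = 5450 / 20 / 60
mid_plate_speed = reel_rev_per_s * math.pi * 0.034   # tendon take-up [m/s]
camera_speed = 2 * mid_plate_speed                   # two-stage doubling [m/s]

print(f"pan/tilt: {pan_rev_per_s:.2f} rev/s")  # ~1.07 rev/s
print(f"up/down:  {camera_speed:.2f} m/s")     # ~0.97 m/s
```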

Next, the control algorithms for the pan/tilt motions and the up/down motion are described. The proposed control algorithms are shown in Fig. 8; the nonlinear PD (proportional plus derivative) controller is used for the fast pan/tilt motions and the conventional

Fig. 6. Bridge inspection robot mechanism.

PID (proportional plus integral plus derivative) controller for the precise up/down motion [15,16], respectively.

Firstly, let us define the linear error of the pan motion in Fig. 8 in the following form:

e_pan = q_d,pan − q_pan,    (3)

where q_d,pan is the desired pan angle and q_pan the real pan angle measured by the gyro sensor.

Fig. 7. Motion principle of the two-stage mechanism.

Fig. 8. Block diagram of the motion control system.

Since we require fast pan/tilt motions to keep the absolute orientations of the machine vision system constant against the unknown disturbances, let us introduce a new nonlinear error using Eq. (3) in the following form:

x_pan = e_pan |e_pan|,    (4)

where the nonlinear error has two properties: it is larger than the linear error for a large error and smaller for a small error. We then apply the PD controller to the nonlinear error of Eq. (4). This is referred to as a nonlinear PD controller in this paper. If the linear error is large, the nonlinear PD controller produces a fast pan motion to recover the desired pan angle; if the linear error is small, it produces a slow motion that prevents the camera image from being blurred by fast camera movement. Using this nonlinear PD controller, we can obtain more effective camera images than with just the linear PD controller. The same control scheme is also applied to the tilt motion controller.
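A minimal sketch of this nonlinear PD law follows; the gains, time step and discrete derivative are our own assumptions for illustration, not values from the paper:

```python
# Sketch of the nonlinear PD law described above: the PD controller acts on
# x = e*|e| instead of e, so errors larger than one unit are corrected
# aggressively while small errors produce gentle, blur-avoiding motion.
# Gains kp, kd and time step dt are hypothetical, not from the paper.

def nonlinear_pd(q_desired, q_measured, x_prev, kp=2.0, kd=0.1, dt=0.01):
    """One control step; returns (command, x) where x = e|e| (Eq. (4))."""
    e = q_desired - q_measured            # Eq. (3): linear pan error
    x = e * abs(e)                        # Eq. (4): nonlinear error
    u = kp * x + kd * (x - x_prev) / dt   # PD action on the nonlinear error
    return u, x

# The nonlinear term grows faster than the linear one for |e| > 1
# and shrinks faster for |e| < 1:
u_big, x_big = nonlinear_pd(2.0, 0.0, x_prev=0.0)
u_small, x_small = nonlinear_pd(0.1, 0.0, x_prev=0.0)
print(u_big, u_small)
```

Note how an error of 2 maps to a nonlinear error of 4, while an error of 0.1 maps to only 0.01, which is exactly the fast/slow behavior described above.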

Secondly, the linear (conventional) PID controller is utilized for the gravitational-direction (up/down) motion as shown in Fig. 8. Here, the integral control action was added to compensate for the gravitational force. A constant bounded disturbance such as a gravitational force/torque can be overcome by using the integral control action, as reported in [17]. The objective of the PID controller is to keep a constant distance between the bridge's lower surface and the machine vision system, ultimately in order to capture precise crack images against unknown disturbances. In order to realize the suggested control system, we have developed a motion control board using the DSP F2812 (manufactured by TI Co.) as a microprocessor that processes data at 150 MHz. This microprocessor has various useful peripherals such as CAN, SCI and ADC. In addition, a server computer is needed to operate the entire motion control system. The inspection worker is able to watch the motion status of the inspection robot and to operate it through the server computer, since the server computer is tightly connected to the F2812 motion control board through CAN communication. The server computer sends motion commands to the motion control board and receives motion data from it. In addition, the server computer controls the motions of the multi-linkage system of the specially designed car through SCI communication. Using the suggested

Table 3. Actuator specifications used in the bridge inspection robot.

                    Pan         Tilt        Up/down
Motor (Faulhaber)   4490H-024   4490H-024   2642W-024
Speed               6400 rpm    6400 rpm    5450 rpm
Power               23.2 W      23.2 W      207 W
Gear ratio          100:1       100:1       20:1

motion control system, the machine vision system is able to capture clean crack images and calculate the length and width of the corresponding cracks. These are described in the following section.

    3. Machine vision system

The purpose of the machine vision system is to detect cracks on the bridge's lower surface automatically from the captured images. As a matter of fact, there are many kinds of damage depending on the bridge type, for example cracks, corrosion, subsidence and fatigue. Among these, crack information is one of the most important factors in deciding on bridge repairs [18]. The utilized machine vision system is composed of a charge-coupled device (CCD) camera, a digital video recorder (DVR) board and a vision processing program on the server computer. In order to determine the specifications of the machine vision system, we have considered its weight, the supplied electric power, how it communicates with the server and its cable type. Also, the focal length of the CCD camera is remotely controlled through RS485 communication. Therefore, the machine vision system has full mobility, composed of the seven-DOF multi-linkage system for macro motion, the three-DOF inspection robot system for micro motion and a one-DOF focal length control system operated through the server computer.

The entire sequence of image processing algorithms used to find cracks in the captured images is shown in Fig. 9. After capturing the images, pre-processing for image enhancement through the removal of artifacts is implemented using median filters. Then the crack detection and tracing algorithms calculate both the lengths and widths of the corresponding cracks. Post-processing for a synthesized panorama image through image stitching is implemented; finally, the synthesized panorama images including the long crack information are converted into the data format required for the database of the bridge management system. Ultimately, the database is updated biennially to determine the crack progress speed. Currently, the entire image processing takes 200 ms; in other words, the machine vision system captures images at 5 frames/s. More detailed explanations are given in the following sections.

    3.1. Crack detection and tracing

A few crack detection methods exist in [19–21]; most existing methods simply display the detected cracks. However, for assessing the safety of a bridge, the crack progress speed should be estimated through the biennial report on the bridge inspection. For this, information about crack lengths and widths should be gathered and accumulated every other year. Normally, crack detection faces problems such as irregularities in crack shapes and sizes, variously soiled and painted surfaces, and irregular illumination conditions owing to the support beams between the spans beneath the bridge. These may cause serious problems in realizing the

Fig. 9. Entire image processing algorithms.

    Fig. 11. Crack tracing algorithm.


automatic crack detection. To overcome these problems, we propose the crack detection algorithm and the crack tracing algorithm separately, as a two-step method for automatic crack image processing.

As the first step (crack detection) of the automatic crack image processing, we perform the following three procedures for extracting candidates for real cracks from the captured image. First, we obtain a smoothed image from the original image by using a median filter [21] and then subtract the smoothed (median-filtered) image from the original image as shown in Fig. 10. Through these procedures, we find the candidates for real cracks. Another purpose of these processes is to maintain a uniform brightness throughout the image and to detect cracks effectively in shadowed environments. Secondly, we remove artifacts from the images of bridge surfaces using a filter that removes isolated candidate points for cracks. Through this process, we are able to reduce the candidate cracks and unnecessary search time. Thirdly, we apply morphological operations [22,23] such as dilation and thinning to the images to guarantee the connection between crack segments, in which the number of iterations is determined according to the distribution of crack candidates. Through these procedures, the connected cracks are acquired from the original image.
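The first two procedures above can be sketched in plain NumPy; the window size, threshold, sign convention of the subtraction and the isolated-pixel rule are our own illustration values, not the paper's implementation (which also applies dilation and thinning):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Sketch of the first detection step: estimate the background with a median
# filter, subtract, threshold, and drop isolated candidate pixels. Cracks
# are darker than the background, so background - original is positive on
# crack pixels; window size k, threshold and neighbor rule are assumptions.

def crack_candidates(img, k=5, thresh=20, min_neighbors=1):
    pad = k // 2
    padded = np.pad(img.astype(np.int16), pad, mode="edge")
    background = np.median(sliding_window_view(padded, (k, k)), axis=(2, 3))
    residual = background - img          # positive where pixels are darker
    mask = residual > thresh             # candidate crack pixels
    # Remove isolated candidates: keep pixels with enough 8-neighbors.
    m = np.pad(mask, 1)
    neighbors = sum(np.roll(np.roll(m, dy, 0), dx, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))[1:-1, 1:-1]
    return mask & (neighbors >= min_neighbors)

# Synthetic image: bright background, one dark crack line, one noise pixel.
img = np.full((20, 20), 200, dtype=np.uint8)
img[10, 2:18] = 50      # crack line (16 pixels)
img[3, 3] = 50          # isolated dark noise pixel
mask = crack_candidates(img)
```

On this synthetic input, the crack line survives while the isolated noise pixel is filtered out, which is exactly the purpose of the second procedure above.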

As the second step (crack tracing) of the automatic crack image processing, we divide the image with connected cracks into several regions and select a seed point in each region. The seed point selected in each region is the point with the maximum probability of being a crack. From a seed point as a starting point, the cracks are traced bi-directionally. For each seed point, we examine the intensities of the 8-neighbor pixels to determine the directions of the next two pixels with minimum intensities. After determining the two directions from a seed point, the crack tracing algorithm is applied as shown in Fig. 11(a). Here, we determine the next pixel as the one with the lowest intensity among the 8-neighbor pixels, excluding the previously chosen one, using the following form:

P_n = min(intensity of P_i), for i = 1, 2, …, 8.    (5)

In this process, the selected direction is updated to D_n, that is, from P_n−1 to P_n. To avoid local minima, the range of directions is restricted to the 8-neighbor directions excluding the previously chosen one.
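A minimal sketch of the tracing rule of Eq. (5), stepping to the darkest 8-neighbor while excluding the previously visited pixel; the stop threshold and single-direction trace here are simplified assumptions (the paper seeds the trace and runs it bi-directionally):

```python
import numpy as np

# Sketch of Eq. (5): from the current pixel, step to the 8-neighbor with the
# lowest intensity, never stepping back onto the previous pixel. The stop
# threshold (100) and step limit are our own assumptions for illustration.

NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]

def trace(img, seed, prev, max_steps=100):
    path = [seed]
    cur = seed
    for _ in range(max_steps):
        candidates = []
        for dy, dx in NEIGHBORS:
            nxt = (cur[0] + dy, cur[1] + dx)
            if nxt == prev:                   # exclude the previous pixel
                continue
            if 0 <= nxt[0] < img.shape[0] and 0 <= nxt[1] < img.shape[1]:
                candidates.append((img[nxt], nxt))
        if not candidates:
            break
        intensity, nxt = min(candidates)      # Eq. (5): darkest neighbor
        if intensity > 100:                   # assumed "no longer crack" stop
            break
        prev, cur = cur, nxt
        path.append(cur)
    return path

# Dark diagonal crack on a bright background:
img = np.full((10, 10), 200, dtype=np.uint8)
for i in range(10):
    img[i, i] = 40
path = trace(img, seed=(1, 1), prev=(0, 0))
```

Starting at (1, 1) with (0, 0) marked as previously visited, the trace follows the dark diagonal to the image corner and stops when only bright neighbors remain.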

    Fig. 10. Difference between original and median filtered images.

While tracing the crack, we measure both the width and the length of the crack at the same time. Fig. 11(b) shows an example of the gray-level profile along a line orthogonal to the progress direction. Since the crack has a lower intensity than the background, the crack width is defined as the distance between the two inflection points where the second-order derivative is zero [24]. In order to improve the measurement accuracy, the

Fig. 12. Panorama image by stitching images and its converted dxf format for the BMS database.

Fig. 13. Example of supervised manipulations.


    width of the crack is calculated considering the gradient between thecrack and the background. Thus the boundary pixels including thepartial crack information are obtained as a real number, not an integer,from the gradient information of intensity, for example, 0.1 pixels. Since

    Fig.14. Restoration example for the blurred crack image, where (a) Blurred crack image, (b) R

    the width of the crack is calculated using the number of discrete pixelsplus boundary real number pixels, its physical quantity is calculated bythe multiplication of the corresponding (real number) pixels and thepixel resolutionper image. Afterfinishing the bi-directional tracing from

    estoration result of (a), (c) Crack detection result of (a), (d) Crack detection result of (b).

  • Fig. 15. Server computer as a total manager for both robot motion control/monitoring and vision capturing/processing.

    Fig. 16. Task planning example for bridge inspection.

    936 J.-K. Oh et al. / Automation in Construction 18 (2009) 929–941

    the seed point, both directions of the cracks are merged for theaccomplishment of one crack line on the image. In the following section,each crack image is sequentially stitched for one panorama image;ultimately, for thedatabase on the complete crack information for bridgemanagement system (BMS).
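The sub-pixel width measurement described above can be illustrated as follows. The half-level boundary criterion used here is a stand-in for the paper's inflection-point/gradient rule, and the 0.15 mm pixel resolution is the value quoted in the experiments of Section 5:

```python
import numpy as np

def crack_width_mm(profile, pixel_mm=0.15, level=None):
    """Sub-pixel crack width from a gray-level profile taken
    orthogonally to the tracing direction. The boundary is placed
    between the last background pixel and the first crack pixel by
    linear interpolation of the intensity gradient, so the width
    comes out as a real number of pixels (e.g. 3.4), not an integer.
    `level` defaults to halfway between profile max and min, an
    illustrative choice rather than the paper's exact criterion."""
    p = np.asarray(profile, dtype=float)
    if level is None:
        level = (p.max() + p.min()) / 2.0
    idx = np.flatnonzero(p < level)          # pixels inside the crack
    if idx.size == 0:
        return 0.0
    i0, i1 = idx[0], idx[-1]
    # Fractional offsets on each side from the intensity gradient.
    left = (p[i0 - 1] - level) / (p[i0 - 1] - p[i0]) if i0 > 0 else 0.0
    right = (p[i1 + 1] - level) / (p[i1 + 1] - p[i1]) if i1 + 1 < p.size else 0.0
    return ((i1 - i0) + left + right) * pixel_mm
```

For example, a two-pixel-wide dark band at 0.15 mm/pixel yields a physical width of 0.30 mm.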

    3.2. Post-processing for BMS database

It is efficient for inspection workers to view the crack lines at a glance through the panorama image obtained by stitching adjacent images, as shown in Fig. 12. The stitching is performed sequentially by matching the same feature points in each image. The panorama images, including the information of the entire crack lines, are then converted into the file format required by the bridge management system (BMS), here the dxf format compatible with CAD files. The structure of the dxf file format was carefully investigated to parse the syntax of each component, in order to write the information of detected cracks into the dxf format. Fig. 12 shows the result of the converted dxf file for the database.
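The dxf conversion can be pictured as emitting one LINE entity per traced crack segment. A minimal sketch follows; the CRACKS layer name and the polyline-as-segments representation are our assumptions, while the paper only states that the dxf syntax was parsed and written component by component:

```python
def cracks_to_dxf(cracks):
    """Serialize crack polylines as LINE entities in a minimal
    DXF (R12-style) ENTITIES section; each entity is a sequence of
    group-code/value pairs (0 LINE, 8 layer, 10/20 start, 11/21 end)."""
    rows = ["0", "SECTION", "2", "ENTITIES"]
    for poly in cracks:                       # each crack: list of (x, y)
        for (x1, y1), (x2, y2) in zip(poly, poly[1:]):
            rows += ["0", "LINE", "8", "CRACKS",
                     "10", f"{x1}", "20", f"{y1}",
                     "11", f"{x2}", "21", f"{y2}"]
    rows += ["0", "ENDSEC", "0", "EOF"]
    return "\n".join(rows) + "\n"
```

The resulting string can be written to a .dxf file and opened in a CAD viewer alongside the bridge drawings.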

In practice, it is very difficult to detect cracks automatically from noisy images of a real bridge surface, and the automatic machine vision system may lose track of cracks. Thus the suggested machine vision system offers utility functions that support supervised manipulations by an inspection worker, including the capability to add lines and polygons: if the inspection worker wants to add missed defects such as cracks, water leakages and scars to the result, he/she can draw the shapes of the defects using lines and polygons, as shown in Fig. 13.

While inspecting the bridge, camera shaking may occur and the captured images may be blurred by sudden wind and unknown disturbances, even though the inspection robot system has the suggested closed-loop control system for keeping the distance and orientations of the camera system constant. In such cases, the crack detection result from the blurred images does not guarantee the accuracy of the crack information; moreover, it is hard to retake a picture at the same position. Thus the image de-blurring technique in [25] is applied to improve the detection result: the machine vision system restores the blurred images using this process. Fig. 14 shows the results of restoration and detection for a blurred crack image. The inspection worker obtains not only good visibility but also reliable crack detection information from the restored image.
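Given a motion-blur PSF estimated as in [25], a standard Wiener filter is one way to carry out the restoration step. This is a minimal sketch; the noise-to-signal constant `k` is chosen arbitrarily, and the paper does not specify its exact restoration formula:

```python
import numpy as np

def wiener_deblur(blurred, psf, k=0.01):
    """Wiener-filter restoration of a blurred image, assuming the
    blur PSF has already been estimated (e.g. by the power-spectrum
    method of [25]); `k` is the noise-to-signal constant."""
    h, w = blurred.shape
    psf_pad = np.zeros((h, w))
    psf_pad[:psf.shape[0], :psf.shape[1]] = psf / psf.sum()
    # Center the PSF so the restoration is not spatially shifted.
    psf_pad = np.roll(psf_pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                      axis=(0, 1))
    H = np.fft.fft2(psf_pad)                      # blur transfer function
    G = np.fft.fft2(blurred)                      # observed spectrum
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G     # Wiener estimate
    return np.real(np.fft.ifft2(F))
```

With a correct PSF and a small `k`, the restored image recovers crack edges sharply enough for the detection step to run on it.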

    4. System integration and task planning

    4.1. Total system integration

The multi-linkage system, the inspection robot, and the machine vision system suggested in the previous sections should be integrated into the specially designed car. Also, the server computer, as a total manager, has to control all the functions in real time, as shown in Fig. 15. It is important that the server computer keep the entire motion control system and the machine vision synchronized for real-time operation. First, the server computer sends the first command signal to the seven-DOF multi-linkage system to unfold it, and then receives the motion status data through SCI communication. Secondly, the server computer sends the second command signal to the three-DOF inspection robot, and then receives the motion status data whenever it sends command signals, for communication handshaking, through CAN communication. Thirdly, the server computer sends the third command signal, for the focal length and image capturing, to the machine vision system through RS485 communication, and then receives the image data (640×480) through the BNC cable. Finally, the total manager program in the server computer includes two application programs: one for the motion control systems and the other for the machine vision system. Through the total manager program, the inspection worker can thus operate the entire motion control system, watch the crack images of the lower surface of a real bridge, and build the database for the biennial inspection report from the driver's seat.

Fig. 17. Final appearance of inspection robot after its mock-up.

The suggested bridge inspection robot system has both an automatic inspection mode and a manual inspection mode, both supervised by a human. Though we have tried to minimize the role of humans in the automatic inspection mode, from image capturing to the bridge management system database, human intervention is still partially required, for instance for emergency stops and task planning. In addition, in the manual inspection mode a single human can tele-operate the entire robot system, including the machine vision system, through the manager program of the server computer. To begin with, the seven-DOF multi-linkage system is manually unfolded beneath the bridge while the manual operation sequences and quantities are stored in the memory of the server computer; the unfolding can then be repeated automatically for the same bridge by replaying the stored sequences and quantities. Secondly, after unfolding the multi-linkage system under the bridge, the last link of the multi-linkage system is automatically extended at constant velocity, and the inspection robot moves so as to keep the orientation of the machine vision system and its distance from the bridge constant, capturing the crack images without blur. The captured images are processed using the suggested automatic crack detection algorithm and finally saved both as CAD files and as the corresponding panorama images for the BMS database. Though these inspection processes are normally performed in the automatic control mode supervised by an inspection worker, if the inspection worker wants to inspect the bridge firsthand with his/her own eyes, manual inspection with all functions is also possible through tele-operation.

Fig. 18. 1st experiment: keeping the constant camera orientations.

Fig. 19. Result of 1st experiment: set-point regulation control performance.

    4.2. Task planning

Since each bridge has its own shape, task planning is required before performing the bridge inspection. The objective of task planning for bridge inspection is to capture clean original images. For this, the distance between the bridge lower surface and the machine vision system should be kept constant, to guarantee a constant perspective image; also, the orientation of the machine vision system should be kept orthogonal to the bridge surface, to guarantee a constant pixel resolution per image. As one example of task planning, we deal with the bridge shape shown in Fig. 16. After analyzing the drawings of the bridge to be inspected, the positions of the feature points are determined in advance as the change points of the bridge shape. In the real experiments, as the last link of the multi-linkage system is extended with a planned constant velocity, these feature points are detected from the distance variation measured by the laser sensor. Once a feature point is detected, the planned (desired) pan/tilt angles of the machine vision system and the desired distance between the bridge lower surface and the machine vision system are set in the control system shown in Fig. 8. The corresponding controllers then achieve the planned (desired) motions using the gyro sensor and laser sensor, as shown in Fig. 16, ultimately capturing clean, unblurred images with constant pixel resolution per image.

Fig. 20. 2nd experiment: snapshots of first real bridge inspection experiment.

Also, since the maximum range of up/down motion is at most 1 m, the shapes of some bridges may exceed this range. In that case, we adjust the focal length in advance to sustain the constant pixel resolution. It is very important for the task planning (including motions and focal length) to minimize operator intrusion into the automatic inspection mode. In the following section, we present experimental results to show the validity of the suggested bridge inspection robot system.
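The feature-point detection from the laser distance stream, described above, can be sketched as a simple change-point test on consecutive samples; the 0.1 m jump threshold and the sampling scheme are illustrative assumptions, not values from the paper:

```python
import numpy as np

def detect_feature_points(distances, jump=0.1):
    """Flag feature points (shape-change points of the bridge lower
    surface) where the laser-measured distance jumps between
    consecutive samples by more than `jump` meters; returns the
    sample indices at which new pan/tilt and distance set-points
    should be loaded into the control system."""
    d = np.asarray(distances, dtype=float)
    return np.flatnonzero(np.abs(np.diff(d)) > jump) + 1
```

In the robot, each returned index would trigger loading the pre-planned set-points for the next bridge segment.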

    5. Experimental results

For field tests, we have designed a mock-up for the inspection robot as shown in Fig. 17. The total weight of the inspection robot including the mock-up is about 25 kg.

As a first experiment, the control performance for keeping the constant orientations of the machine vision system under external disturbances is shown in Fig. 18. The camera orientations are recovered quickly using the nonlinear PD controller suggested in Fig. 8. As aforementioned, for large orientation errors, large control inputs (fast control motions) are generated by the squaring effect of the errors, and for small orientation errors, small control inputs (slow control motions) are generated so that the images are captured without blur. As shown in Fig. 18, the pan and tilt mechanisms and control system were able to keep the absolute orientations constant under arbitrary motions of the base platform. The residual error at steady state is about 0.05 deg for each control system, though the transient error can be larger depending on the circumstances. Fig. 19 shows a simple experimental result of the control performance test. For the given desired set-points, the suggested nonlinear PD controller shows good set-point regulation performance while capturing the images, with a settling time generally smaller than 1 s.

Fig. 21. 2nd experiment: snapshot of second real bridge inspection experiment.

Table 4
Performance comparison of the four methods.

Method    Proposed method   Sobel   Canny   Fujita's method
Accuracy  94.1%             87.3%   89.3%   92.2%

Table 5
Measurement accuracy of the crack information.

Object                 Detection rate (%)   Width: maximum error (mm)   Width: average error (mm)
One span (80 images)   96.7                 0.045                       0.023

As a second experiment, we tested the suggested robotic system beneath real bridges, as shown in Figs. 20 and 21. Specifically, we ran field tests at two bridges: one is a small bridge at Hanyang University, always available without governmental permission, shown in Fig. 20, and the other is the real Gu-Haengju Bridge in Gyeonggi province, inspected with government permission, shown in Fig. 21. The small test bridge has four supporting beams beneath the deck, as shown in Fig. 24. Here we checked three items in this environment. First, we confirmed that the motion control system for movement in the gravitational direction (up/down) could sustain the specified distance between the camera system and the bridge lower surface using the laser sensor, according to the task planning. Secondly, we confirmed that the pan/tilt motion control system could change its orientations to detect cracks at the supporting beams. Thirdly, we confirmed that the machine vision system could detect cracks in real time, with a processing time of 200 ms, as shown in Fig. 20.

As a third experiment, we evaluated the proposed automatic crack detection method of the machine vision system on noisy images. Many problems, such as irregular illumination and shaded or soiled surfaces, arise when capturing real images of the bridge lower surface, and these problems may lead to false detections; the accuracy of crack detection in noisy images is therefore an important factor in evaluating the performance of the bridge inspection robot system. In order to show the effectiveness of the suggested method, we compared the proposed method with three others: Fujita's, Sobel's and Canny's methods, as presented in [21,26,27]. The Sobel operator [26] is often used in image processing, particularly for edge detection; it calculates the gradient of the image intensity at each point, giving the direction of the largest intensity increase from light to dark and the rate of change in that direction. The Canny edge detector [27] has been considered an ideal edge detection algorithm for images corrupted by white noise. Both the Sobel and Canny operators are among the most useful methods for computing digital gradients in practice [28]. In the experiments, we used 100 noisy images with irregular illumination, various shading conditions and blemishes on the concrete bridge surface. The resolution of the digitized images is 640×480 pixels, and 1 pixel corresponds to 0.15 mm. We note again that the boundary pixels containing partial crack information are obtained as real numbers, not integers, from the gradient information of intensity. Our quantitative target for the detected crack size is 0.2 mm in width. Fig. 22 shows the results of crack detection: (a) and (b) are the original image and the result of manual tracing, respectively; (c) is the result of the proposed method; and (d), (e) and (f) are the results of Fujita's, Sobel's and Canny's methods, respectively. In order to compare the four methods, we evaluated the similarity between the result of manual tracing and the result of each method. Although all the methods eliminated the shaded regions well and performed well in detecting the major cracks, in (d) the relatively small cracks were not found and all detected cracks were displayed thicker than their real widths. Also, as shown in (e) and (f), all edges in the image, including shadows, were detected as cracks. The proposed method, however, displayed more exact crack widths and detected everything from large cracks down to relatively small ones. The experimental results of Fig. 22 show that cracks over 0.2 mm in width were successfully detected by the proposed method. Table 4 compares the accuracy of the four methods, obtained by summing the crack lines detected by each method and comparing them with the manual tracing; the accuracy of the proposed method is improved by about 2%–7% over the other methods.

Fig. 22. Results of 3rd experiment: comparisons of crack detection.

Fig. 23. 4th experiment: real-time crack detection and image stitching.

As a fourth experiment, we show the experimental results for the real-time automatic crack detection method and the image stitching. Here, we selected and investigated one span of a real bridge under good environmental conditions. To begin with, a precise manual inspection of the lower surface of the selected span was performed by an educated inspection worker, since the exact crack information had to be known in advance to assess the performance of the suggested automatic crack detection method. The inspection worker detected 184 cracks over 0.2 mm in maximum width. We were then ready to evaluate the performance of the suggested machine vision system by comparing its automatic detection results with these already known results. The machine vision system captured 80 overlapped images of the span; the distance between the bridge lower surface and the machine vision system was about 2.3 m. In the experiment, the machine vision system detected 178 of the 184 known cracks, and the average width error is about 0.023 mm. Table 5 shows the measurement accuracy for the crack information; the detection rate is sufficient and the crack information is reliable.

Also, Fig. 23 shows the results of crack detection and image stitching into a panorama image. During the experiment, we could confirm the images of detected cracks, crack widths, crack lengths and the result of image stitching in real time through the vision control window program. In addition, a 3-dimensional perspective view of the inspected bridge could be generated after finishing the inspection, using the images stored in the database of the server computer, as shown in Fig. 24. Fig. 24(a) is a captured image of the bridge and Fig. 24(b) is the synthesized 3D view composed of all the captured images. This function helps the inspection worker confirm the overall status of the bridge through the database.

Fig. 24. 4th experiment: synthesized 3D perspective view of a bridge.

    6. Conclusion

A robotic system for bridge inspection has been developed for practical use, with both an automatic inspection mode and a manual inspection (tele-operation) mode, both supervised by a human. In addition, this robotic system has been developed to batch-process the writing of bridge safety diagnosis reports, from image capturing using robot motion control through to the bridge management system. The proposed bridge inspection system is composed of three main parts: a specially designed car, a robot mechanism and control system for mobility, and a machine vision system for the automatic detection of crack lines. Through experiments on real bridges, we have shown that the suggested robot motion control method performs well and that the proposed crack detecting and tracing algorithms are effective in searching for crack lines and other measurable physical properties of bridge structures.

    Acknowledgements

This work was supported in part by the Bridge Inspection Robot Development Interface Program of the Ministry of Construction and Transportation, in part by a Korea Science and Engineering Foundation (KOSEF) grant funded by the Korea government (MEST) (R01-2008-000-20631), in part by the Ministry of Knowledge Economy (MKE) and Korea Industrial Technology Foundation (KOTEF) through the Human Resource Training Project for Strategic Technology, in part by the Gyeonggi Regional Research Center Program, and in part by the research fund of HYU (HYU-2009-T), Republic of Korea.

    References

[1] Bridge Inspection Robot Development Interface (BIRDI), Development of high-tech automatic robot system for bridge inspection and monitoring, Technical report for Ministry of Construction and Transportation (in Korean), 2007.

    [2] Federal Highway Administration (FHWA), Bridge Inspections Training Manual,July 1991.

[3] Bridge Maintenance Training Manual, US Federal Highway Administration, FHWA-HI-94-034, prepared by Wilbur Smith Associates, 1992.

[4] Shibata Tsutomu, Shibata Atsushi, Summary report of research and study on robot systems for maintenance of highways and bridges, Robot, vol. 118, JARA, Tokyo, Japan, Sep. 1997, pp. 41–51.

[5] Product Catalog, Paxton-Mitchell Snooper® Series 140, http://www.paxton-mitchell.com.

[6] J.-K. Oh, A.-Y. Lee, S.M. Oh, Y. Choi, B.-J. Yi, H.W. Yang, Design and control of bridge inspection robot system, IEEE Int. Conf. on Mechatronics and Automation, Aug. 2007, pp. 3634–3639.

[7] J.E. DeVault, Robotic system for underwater inspection of bridge piers, IEEE Instrumentation and Measurement Magazine 33 (Sep. 2000) 32–37.

[8] F. Xu, X. Wang, Design and experiments on a new wheel-based cable climbing robot, IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Jul. 2008, pp. 418–423.

[9] L. Briones, P. Bustamante, M.A. Serna, Wall-climbing robot for inspection in nuclear power plants, IEEE International Conference on Robotics and Automation, 1994, pp. 1409–1414.

[10] G.L. Rosa, M. Messina, G. Muscato, A low-cost lightweight climbing robot for the inspection of vertical surfaces, Mechatronics 12 (1) (2002) 77–96.

[11] H. Schempf, E. Mutschler, V. Goltsberg, W. Crowley, GRISLEE: gas main repair and inspection system for live entry environments, The International Journal of Robotics Research 22 (7–8) (Jul. 2003) 603–616.

[12] D. Huston, B. Esser, J. Miller, X. Wang, Robotic and mobile sensor systems for structural health monitoring, Proc. of the 4th International Workshop on Structural Health Monitoring, Stanford University, Sep. 2003.

[13] S.K. Sinha, P.W. Fieguth, Automated detection of cracks in buried concrete pipe images, Automation in Construction 15 (1) (2006) 58–72.

[14] E.R. Davies, Machine Vision: Theory, Algorithms, Practicalities, 3rd ed., Morgan Kaufmann, 2005, ISBN 0122060938.

[15] B.C. Kuo, F. Golnaraghi, Automatic Control Systems, 8th ed., Wiley, 2002.

[16] K. Ogata, Modern Control Engineering, 4th ed., Prentice Hall, 2002.

[17] Y. Choi, W.K. Chung, PID Trajectory Tracking Control for Mechanical Systems, Springer, Lecture Notes in Control and Information Sciences (LNCIS No. 298), 2004.

[18] H.-G. Sohn, Y.-M. Lim, K.-H. Yun, G.-H. Kim, Monitoring crack changes in concrete structures, Computer-Aided Civil and Infrastructure Engineering 20 (1) (Nov. 2004) 52–61.

[19] P. Tung, Y. Hwang, M. Wu, The development of a manipulator imaging system for bridge crack inspection, Automation in Construction 11 (6) (Oct. 2002) 717–729.

[20] S. Yu, J. Jang, C. Han, Auto inspection system using a mobile robot for detecting concrete cracks in a tunnel, Automation in Construction 16 (May 2007) 255–261.

[21] Y. Fujita, Y. Mitani, Y. Hamamoto, A method for crack detection on a concrete structure, ICPR, Hong Kong, Aug. 2006, pp. 901–904.

[22] T. Yamaguchi, S. Hashimoto, Image processing based on percolation model, IEICE Transactions on Information and Systems E89-D (7) (Jul. 2006) 2044–2052.

[23] S. Iyer, S.-K. Sinha, Segmentation of pipe images for crack detection in buried sewers, Computer-Aided Civil and Infrastructure Engineering 21 (6) (Aug. 2006) 395–410.

[24] L.-C. Chen, Y.-C. Shao, H.-H. Jan, C.-W. Huang, Y.-M. Tien, Measuring system for cracks in concrete using multi-temporal images, Journal of Surveying Engineering 132 (2) (May 2006) 77–82.

[25] M.-E. Moghadam, M. Jamzad, Linear motion blur parameter estimation in noisy images using fuzzy sets and power spectrum, EURASIP Journal on Advances in Signal Processing 2007 (2007) 1–8.

[26] I. Sobel, G. Feldman, A 3×3 isotropic gradient operator for image processing, presented at the Stanford Artificial Intelligence Project, 1968.

[27] J. Canny, A computational approach to edge detection, IEEE Transactions on Pattern Analysis and Machine Intelligence 8 (1986) 679–698.

[28] I. Abdel-Qader, O. Abudayyeh, M.E. Kelly, Analysis of edge-detection techniques for crack identification in bridges, Journal of Computing in Civil Engineering 17 (3) (2003) 255–263.

    [29] S.H. Crandall, N.C. Dahl, T.J. Lardner, An introduction to the mechanics of solids,2nd ed., McGraw Hill, 1999.

