
Localisation and Tracking of an Airport’s

Approach Lighting System

Shyama Prosad Chowdhury1, Karen Rafferty1, and Amit Kumar Das2

1 School of EEECS, Queen's University Belfast, UK
2 CST Dept., Bengal Engineering and Science University, Shibpur, India

[email protected], [email protected], [email protected]

Abstract. In this paper, we develop novel methods for extracting and tracking regions of interest from a given set of images. In particular, it is our aim to extract information about luminaires making up an airport landing lighting pattern in order to assess their performance. Initially, to localise the luminaires we utilise sub-pixel information to accurately locate the luminaire edges. Once the luminaires are located within the data, they are then tracked. We propose a new tracking algorithm based on control points and building blocks. Tests performed on a set of 422 images taken during an approach to an airport in Northern Ireland have shown that, when combined, the localisation and tracking techniques are very effective compared to standard techniques (KLT and SIFT) as well as to a model based matching technique for this application.

Keywords: Photometrics, vibration, luminaire localisation, tracking.

1 Introduction

The landing lighting at an airport is used to provide visual cues to a pilot regarding the position and direction of the runway when approaching that airport. It is important that all the luminaires in the pattern perform according to the standards set by the aviation governing bodies as described in [8]. To date, no physical system exists which can assess the performance of the complete lighting pattern. We propose using a remote camera based assessment technique to solve this problem. A camera is placed inside an aircraft and used to record images of the luminaires during an approach. These images can then be post processed to determine a performance metric for the lighting pattern.

In this paper, we aim to provide a solution to the problem of localising and tracking luminaires within the image data. Section 3 describes the localisation method for the luminaires using sub-pixel analysis. Performance assessment of the luminaires is highly dependent on the accurate tracking of each luminaire. Due to vibration in the aircraft, images may become blurred, which makes tracking difficult. A novel vibration correction method is described in section 4. Finally, the method of tracking each luminaire in the image sequence is outlined in section 5. Note, this paper does not aim to document the techniques for performance assessment using the extracted information.

L. Bolc et al. (Eds.): ICCVG 2010, Part I, LNCS 6374, pp. 19–26, 2010. © Springer-Verlag Berlin Heidelberg 2010


2 Related Research

Segmentation is mainly conducted using the shape, texture or colour information of an object. Saha et al. [9] propose a convex hull based technique to segment prominent objects from a scene. They assume that there is one prominent object in the scene; however, in this research there are multiple objects of interest per image. Nhat et al. proposed a different segmentation technique in [3] using a combinatorial graph. This works well for segmenting overlapped objects where prominent edge information of the objects is present except in the overlapped region. In this research there is no such prominent edge information on overlapping objects.

Because of the large number of luminaires in the lighting pattern and the distance between the pattern and the camera, all the luminaires may appear in a small region of the image and it is very difficult to uniquely identify each luminaire. Any misinterpretation has a high negative impact on the accuracy of the performance assessment [1]. Thus, predicting the position of luminaires in the image based on computer vision [5] or random sample consensus (RANSAC) based tracking is not well suited here. Niblock et al. used model based tracking [6] to identify luminaires in the collected images. One of the limitations of this work is that the accuracy of the tracking is dependent on image quality, and it is very difficult to isolate luminaires which are very close together in the image data. Existing tracking techniques, the Kanade–Lucas–Tomasi (KLT) and scale invariant feature transform (SIFT) techniques, also show very poor success rates [4]. KLT performs at ∼60% accuracy whereas SIFT shows only a ∼20% success rate, where

Success rate (%) = (number of correctly identified luminaires / number of luminaires in the image) × 100 . (1)

This illustrates the need to develop other tracking techniques for this application.
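As a concrete reading of Eq. 1, the per-frame metric can be computed as follows (a minimal sketch; the function and argument names are ours):

```python
def success_rate(correct: int, total: int) -> float:
    """Per-frame tracking success rate of Eq. 1, in percent."""
    if total == 0:
        return 0.0  # no luminaires visible in this frame
    return 100.0 * correct / total
```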

3 Sub Pixel Luminaire Analysis

Widely used and primitive techniques for localising bright objects in a monochrome image normally utilise one or more threshold values [2], [7], [10]. When such techniques were applied to the collected images, runway markings, the sky and luminaires were all identified. Therefore we developed a localisation method based on sub-pixel data. This is now discussed.

i) Iso-illuminated Ridge (IIR) Image Construction: Each pixel in an image stores a number which relates to the colour of that pixel. For a single byte camera, each pixel can take a value ranging from 0 (black) to 255 (pure white). To date, segmentation techniques have utilised this single pixel value. However, each byte of information is made up of 8 single bits, each of which holds information regarding that pixel. The luminaire is circular in nature and normally has a high intensity at its centre which decreases towards the perimeter.


Fig. 1. (a) Changes in the intensity for different bit movements (b) Original image of the lighting pattern (c) 7-IIRL image

A cross section of the intensity profile along the diameter of a luminaire results in a bell shaped curve (Fig. 1(a)). In a byte, the 8 bits (0 to 7) have different positional weightage factors. The bit at location 7 is termed the most significant bit (MSB), and the bit at location 0 the least significant bit (LSB). By omitting the MSB, the highest value of that byte decreases to 127. To keep the highest value in the same range, the value is multiplied by a factor of 2. Similarly, after omitting the two MSBs, the value is multiplied by a factor of 2². The changes in the bell shaped curve after omission of the MSB and the two MSBs are demonstrated in Fig. 1(a), which shows a significant edge profile for the luminaire, making it easier to detect. In general, α-IIR signifies the omission of all the bits in positions equal to or higher than α, followed by a multiplication by 2^(8−α). Let the function b_I(x) give the value of the xth bit of the intensity value I; then in the α-IIR image, the modified value of I will be

I_α = Σ_{x=0}^{α−1} b_I(x) · 2^{x+1} . (2)
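In implementation terms, the α-IIR construction amounts to masking off the top 8 − α bits of each pixel and shifting the surviving bits back up to the full byte range (the multiplication by 2^(8−α) described above). A NumPy sketch under that reading (the function name is ours):

```python
import numpy as np

def iir_image(img: np.ndarray, alpha: int) -> np.ndarray:
    """alpha-IIR image: drop all bits at positions >= alpha, then
    rescale the surviving alpha low bits by 2**(8 - alpha)."""
    mask = (1 << alpha) - 1                 # keeps bits 0 .. alpha-1
    kept = img.astype(np.uint16) & mask     # omit the top 8-alpha bits
    return (kept << (8 - alpha)).astype(np.uint8)
```

For α = 7 this omits the MSB and doubles the remainder; for α = 6 it omits the two MSBs and multiplies by 4, reproducing the curves of Fig. 1(a).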

ii) Iso-illuminated Ridge Line (IIRL) Marking: When the ridges are constructed, a discontinuity in a ridge can be associated with the edge of a luminaire. Fig. 1(b) shows an image taken from the aircraft during landing and

Fig. 2. (a) Part of the pattern in the 7-IIRL image (b) Part of the pattern in the 6-IIRL image


Fig. 1(c) illustrates the results when 7-IIRL is applied to that image, with a "close up" given in Fig. 2(a). For comparison, the results of 6-IIRL for the same section are shown in Fig. 2(b). By using either 7-IIRL or 6-IIRL, only the luminaires within the image are localised. This is a very positive result. Indeed, this technique would be applicable in other research domains as a reliable means of detecting light sources in image data.

4 Skew Correction and Region of Interest

Having identified each luminaire with an IIRL, it is now necessary to correlate each luminaire with a physical description of the lighting pattern in order to uniquely identify it. However, because of aircraft movement, images of the lighting are frequently rotated in the 2D image plane of the camera. In an ideal situation, the centreline and crossbars (Fig. 3(a)) should appear in the vertical and horizontal directions of the image plane. Typically, however, they appear skewed within the image. It is necessary to remove the skew to simplify luminaire tracking. To do this, we propose a two step technique.

Fig. 3. (a) Major skew correction (b) Composite structure after closing (c) Detected crossbars after opening (d) Control blocks formation using the lighting pattern

4.1 Major Skew Correction (MSC)

In order to quickly identify any major skew within the image of the lighting, it is necessary to define control points (CP) in the pattern. The top luminaire in the centreline is automatically chosen as CP-1. From CP-1, two straight lines are formed, one on each of the left and right sides of the pattern. Initially the two lines are projected horizontally in the image. The left line is rotated anticlockwise until an intersection occurs between it and the IIRL. A similar process is carried out for the right line, except it is rotated in the clockwise direction. Most likely this intersection will occur at the top crossbar (see Fig. 3(a)); however, this cannot be guaranteed in the case of missing luminaires. In that situation, the other end point is obtained using the nearest intersected point. Let the final touch points on the same crossbar be P^l and P^r. For any point x, the function MVP(x) finds the middle point of the vertical cross section of the component. The angle function AN(x, y) finds the angle between two points x and y. Thus the angle between the points MVP(P^l) and MVP(P^r) is measured as β_M,

β_M = AN(MVP(P^l), MVP(P^r)) . (3)

The image is then rotated by −β_M to correct the major skew. Using this technique most of the skew can be removed from the image.
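The MSC step reduces to measuring the angle between the two crossbar mid-points and rotating the image back. A sketch, assuming the two MVP outputs are already available as (x, y) points (the helper names and the use of scipy.ndimage.rotate are our choices, not the paper's):

```python
import numpy as np
from scipy.ndimage import rotate

def an(p1, p2):
    """AN(x, y): angle in degrees of the line p1 -> p2 w.r.t. the horizontal."""
    (x1, y1), (x2, y2) = p1, p2
    return float(np.degrees(np.arctan2(y2 - y1, x2 - x1)))

def correct_major_skew(img, mvp_left, mvp_right):
    """Rotate the image by -beta_M so the touched crossbar becomes horizontal."""
    beta_m = an(mvp_left, mvp_right)
    # reshape=False keeps the original image dimensions
    return rotate(img, -beta_m, reshape=False, order=0)
```

Note that image coordinates run y-down, so the sign convention of the rotation should be checked against the library in use.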

4.2 Finer Skew Correction (FSC)

FSC is achieved by measuring the deflection of the crossbars from the horizontal direction. In order to determine the crossbars, morphological filters are utilised. Morphological operations are undertaken using a variable length structuring element (VLSE).

i) Composite Structure Construction: At any row position i, the function ASC(i) gives the average horizontal spread of the centreline. The approximated row positions of the first and second crossbars are labelled R_1 and R_2 if they are present in the image. Let VLSE_C be used for morphological closing, where the height and width of this VLSE are denoted by VLSE_C^H and VLSE_C^W. Both VLSE_C^H and VLSE_C^W are two dimensional image functions, where VLSE_C^H(i, j) and VLSE_C^W(i, j) represent the height and width of the VLSE on a pixel in the ith row and jth column. Here VLSE_C^H(i, j) = λ_1 ASC(i) and VLSE_C^W(i, j) = λ_2 ASC(i), where λ_1 is kept at 2 and λ_2 at 3. These particular λ values are chosen to ensure that the morphological closing operation does not miss any present luminaires. Morphological closing on an IIRL image with VLSE_C results in a single structure containing all the luminaires. This image will be known as IIRL_C for future use. Fig. 3(b) shows the composite structure after the morphological closing operation.
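Standard morphology routines expect a fixed structuring element, so one way to approximate the VLSE closing is to process the image in horizontal bands, sizing each band's rectangular SE from ASC(i). A sketch under that assumption (the band height, and passing ASC as a callable, are our simplifications):

```python
import numpy as np
from scipy.ndimage import binary_closing

def vlse_close(binary_img, asc, lam1=2, lam2=3, band=16):
    """Approximate closing with a variable-length SE: each horizontal band
    uses a rectangular SE of height lam1*ASC(i) and width lam2*ASC(i),
    where i is the band's top row and ASC(i) is the average centreline
    spread at that row."""
    out = np.zeros_like(binary_img)
    for i in range(0, binary_img.shape[0], band):
        s = max(1, int(round(asc(i))))
        se = np.ones((lam1 * s, lam2 * s), dtype=bool)
        # close the whole image with this band's SE, keep only the band
        out[i:i + band] = binary_closing(binary_img, structure=se)[i:i + band]
    return out
```

With λ_1 = 2 and λ_2 = 3 as in the text, nearby luminaires merge into a single composite structure as in Fig. 3(b).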

ii) Crossbars Extraction: Let VLSE_O be used for morphological opening, where the height and width of this VLSE are denoted by VLSE_O^H and VLSE_O^W. Opening is performed with a horizontal line-like structuring element (SE). Thus, VLSE_O^H is of a constant value 1 and VLSE_O^W(i, j) = λ ASC(i). The value of λ is kept at 2 to ensure that the opening element removes the centreline links between two crossbars, leaving the crossbars as horizontal lines. Fig. 3(c) illustrates the results after the VLSE opening.

iii) Deflection Angle Calculation: Connected component analysis (CCA) on the residual after binary morphological opening will label all the crossbars. Let us assume a total of T (1 ≤ T ≤ 5) components are present, where the spread of the kth component is from the point P^l_k to P^r_k. The distance function DS(x, y) determines the distance between two points x and y. As defined earlier, the angle function AN(x, y) measures the angle. The average deflection of all the crossbars from the horizontal is calculated as β_F, where

β_F = [ Σ_{k=1}^{T} DS(P^l_k, P^r_k) · AN(P^l_k, P^r_k) ] / [ Σ_{k=1}^{T} DS(P^l_k, P^r_k) ] . (4)

The IIRL_C image is rotated by −β_F to give IIRL_CR, which is fully skew corrected (< 0.5°) and in which all the crossbars are horizontally aligned.
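Eq. 4 is a length-weighted mean, so longer (and hence more reliably detected) crossbars dominate β_F. A NumPy sketch (the endpoint list format is our assumption):

```python
import numpy as np

def finer_skew_angle(endpoints):
    """beta_F of Eq. 4: deflection of each crossbar from the horizontal,
    AN(P_l, P_r), averaged with weights DS(P_l, P_r) (crossbar length).
    `endpoints` is a sequence of ((xl, yl), (xr, yr)) pairs."""
    lengths, angles = [], []
    for (xl, yl), (xr, yr) in endpoints:
        lengths.append(np.hypot(xr - xl, yr - yl))               # DS
        angles.append(np.degrees(np.arctan2(yr - yl, xr - xl)))  # AN
    lengths = np.asarray(lengths)
    return float(np.sum(lengths * np.asarray(angles)) / np.sum(lengths))
```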

Finally, having identified each crossbar and the centreline, a number of control blocks can be automatically defined using segmentation techniques. The 21 final control blocks are shown in Fig. 3(d).

5 Pattern Based Tracking of ALS Luminaires

At this stage a different tracking scheme is applied to each of the centreline (CL_i), wing (CB^L_i, CB^R_i) and centre body (CB^C_i) blocks.

i) Tracking the Centreline Blocks: In each of the centreline blocks, binary morphological closing is done with a VLSE (VLSE_O). For a given block, if the expected number of luminaires is not found, then the centreline block between the two crossbars CB_k and CB_{k+1} is vertically divided into four regions. However, the division of the regions cannot be linear because of the perspective error. Using all the vertical distances between crossbars, the amount of perspective error is calculated and then corrected. The sets in a block k are then horizontally analysed in the same manner as described for the centre body block tracking. This uses the virtual central line (VCL_k), which is defined as the linear regression calculated on all the luminaires present in the block CB_k.

ii) Tracking for Centre Body Blocks: The VCL_k is used to analyse the (k−1)th centre body block (CB^C_{k−1}). If there are three luminaires in the centre body block's pattern, then VCL_k is used to track the position of the middle one, and the positions of the other two can be determined from that. Where two luminaires are in the centre body block, VCL_k acts as a clear separator between them.

iii) Tracking for Wing Blocks: It is observed from the ALS pattern that the distance between luminaires in the wing blocks is the highest compared to any other block. Both the left and right wings are analysed together to find the median distance among the horizontally distributed luminaires. Using this distance and the absence history, all the luminaires in the wing are tracked.
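The wing-block rule above can be sketched as follows: pool the horizontal positions from both wings, take the median neighbour gap, and use it to decide whether a wide gap is hiding an undetected luminaire (all names are ours; the absence-history bookkeeping is omitted):

```python
import numpy as np

def median_wing_gap(x_positions):
    """Median horizontal spacing between consecutive wing luminaires,
    pooled over the left and right wings."""
    xs = np.sort(np.asarray(x_positions, dtype=float))
    return float(np.median(np.diff(xs)))

def wide_gaps(x_positions, gap, tol=0.5):
    """Count neighbour gaps wide enough to hide an undetected luminaire."""
    xs = np.sort(np.asarray(x_positions, dtype=float))
    return int(np.sum(np.diff(xs) > (1.0 + tol) * gap))
```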

6 Results

The localisation and tracking algorithms were applied to two sets of videos taken during an approach to an airport in Northern Ireland. Each video had approximately 200 images; thus over 400 images were tested using the new algorithms.


Initially, when the IIR analysis was applied to the images, it was shown that removal of the two MSBs from the image data could effectively create a large intensity shift at the edges of a luminaire, aiding in its detection. The technique proved very effective, with 100% accuracy in localising all the prominent luminaires. In addition, a noticeable benefit of the IIR analysis is that it was also very effective for isolating very small luminaires in the image data. For example, a number of luminaires at the top of the pattern only cover 7 pixels at the start of the approach. This outperforms other published techniques for this application [6].

Fig. 4. (a) Number of luminaires per image, the number of luminaires correctly tracked by the pattern based tracking and the model based matching technique (b) Success rate (%) of the correctly tracked luminaires

Having successfully located the luminaires within the image data, the developed tracking technique was applied to the images. The success rate of the tracking is measured in each frame as the ratio of correctly tracked luminaires to the total number of luminaires visible in the image (Eq. 1). It has already been reported in [4] that the existing KLT and SIFT techniques give < 60% and < 20% accuracy in tracking the luminaires, and that the model based matching technique produces much better results than the others. Therefore, we only compared our result (pattern based tracking) with the model based matching (Fig. 4(a) and 4(b)). In our method, when a luminaire is clearly distinguishable within the images, the tracking accuracy is 100%. However, this reduces when the luminaires are highly overlapped. In terms of tracked luminaires over the video sequence, an overall success rate of 91% was achieved. This again performs well when compared to other published tracking algorithms in the area [4]. It is also interesting to note that the processing time for the localisation and tracking of luminaires was found to be 1 second per image (for the model based matching technique it is 2.3 s/image), when running on a standard PC. Whilst not real time, the processing speed is still very acceptable.

7 Conclusion

The authors present new techniques for localising and tracking luminaires within images. In particular, the techniques have been applied to images of an airport lighting pattern that were collected during an approach to an airport in Northern Ireland. The new technique for localising the luminaires utilises sub-pixel information in order to determine the edges of the luminaires. This technique would work well for any application that requires the localisation of a light source within an image. Tracking of a known pattern is discussed and performs very well. Again, this tracking technique could be applied to any video sequence where there is a known pattern of points of interest. Finally, it can be concluded that the complete preprocessing presented here is very useful for the target application. Future work will concentrate on using the information extracted from the images to determine the performance metric of the lighting.

Acknowledgements. The authors would like to thank the EPSRC (Grant: EP/D05902X/1) for financial backing. The contribution of Flight Precision and Belfast International Airport in providing flight time in order to collect airport lighting data is also gratefully acknowledged.

References

1. Chowdhury, S., McMenemy, K., Peng, J.: Performance evaluation of airport lighting using mobile camera techniques. In: Jiang, X., Petkov, N. (eds.) CAIP 2009. LNCS, vol. 5702, pp. 1171–1178. Springer, Heidelberg (2009)

2. Liu, J.: Robust image segmentation using local median. In: Proc. of the 3rd Canadian Conf. on Computer and Robot Vision, CRV 2006. IEEE Computer Society Press, Los Alamitos (2006)

3. Nhat, V., Manjunath, B.: Shape prior segmentation of multiple objects with graph cuts. In: Proc. of IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2008, June 23-28, pp. 1–8 (2008)

4. Niblock, J., Peng, J., McMenemy, K., Irwin, G.: Fast model-based feature matching technique applied to airport lighting. IET Science, Measurement and Technology 2(3), 160–176 (2008)

5. Niblock, J., McMenemy, K., Ferguson, S., Peng, J.: Autonomous tracking system for airport lighting quality control. In: 2nd International Conference on Computer Vision Theory and Applications, March 8-11, vol. 2, pp. 317–324 (2007)

6. Niblock, J., Peng, J., McMenemy, K., Irwin, G.: Autonomous model-based object identification and camera position estimation with application to airport lighting quality control. In: Proc. of 3rd Int. Conference on Computer Vision Theory and Applications, VISAPP, Funchal, Portugal, January 12-15, vol. 2 (2008)

7. Okada, K., Akdemir, U.: Blob segmentation using joint space-intensity likelihood ratio test: application to 3D tumor segmentation. In: IEEE Conf. on Computer Vision and Pattern Recognition, CVPR 2005, June 20-25, vol. 2, pp. 437–444 (2005)

8. International Civil Aviation Organization: Aerodrome Design and Operations, Annex 14, 4th edn. (July 2004)

9. Saha, S., Das, A., Chanda, B.: An automatic image segmentation technique based on pseudo-convex hull. In: Proc. of Indian Conference on Computer Vision, Graphics and Image Processing, ICVGIP 2006 (2006)

10. Wong, W., Chung, A.: Bayesian image segmentation using local iso-intensity structural orientation. IEEE Transactions on Image Processing 14(10), 1512–1523 (2005)