
Automated classification of underwater multispectral imagery for coral reef monitoring

A. C. R. Gleason, R. P. Reid

Division of Marine Geology and Geophysics, University of Miami / RSMAS 4600 Rickenbacker Cswy. Miami, FL 33149 USA

K. J. Voss

Department of Physics, University of Miami Miami, FL 33124 USA

Abstract- Survey protocols for monitoring the composition and status of coral reef benthic communities vary in the level of detail acquired, but fundamentally follow one of two approaches: 1) a diver identifies organisms in the field, or 2) an analyst identifies organisms from underwater imagery (photos or video). Both methods are highly labor intensive and require a trained biologist / geologist. A method for automated classification of reef benthos would improve coral reef monitoring by reducing the cost of data analysis.

Spectral classification of standard (three-band) color underwater imagery does not work well for distinguishing major bottom types. Recent publications of hyperspectral reflectance of corals, algae, and sediment, on the other hand, suggest that careful choice of narrow (~10 nm) spectral bands might improve classification accuracy relative to the three wide bands available on commercial cameras. We built an underwater multispectral camera to test whether narrow spectral bands were actually superior to standard RGB cameras for automated classification of underwater images. A filter wheel was used to acquire imagery in six 10 nm spectral bands, which were chosen from suggestions in the literature.

Results indicate that the algorithms suggested in the literature require very careful compensation for variable illumination and water column attenuation for even marginal success in classifying underwater imagery. On the other hand, a new algorithm, based on the normalized difference ratio of images at 568 nm and 546 nm can reliably segment photosynthetic organisms (corals and algae) from non-photosynthetic background. Moreover, when this new algorithm is combined with very simple texture segmentation, the general cover classes of coral and algae can be discriminated from the image background with accuracies on the order of 80%.

These results suggest that a combination of high spectral resolution and texture-based image segmentation may be an optimal methodology for automated classification of underwater coral reef imagery.

I. INTRODUCTION

Underwater video and still imagery are useful tools for coral reef monitoring programs because they speed up the data acquisition process, allowing more sites to be visited for a given allotment of field time, and because imagery can be acquired by divers with minimal biological training.

Funding for this project was provided by the US Department of Defense SERDP Program, Award CS 1333.

The disadvantage of the image-based approach is the time required to extract useful ecological information from the underwater imagery. Current state-of-the-art techniques require identifying the organisms and substrate beneath a number of random points placed on each image (point counting) or tracing individual objects. Such manual processing of each video or still frame is not only labor intensive but also requires an analyst able to identify coral reef organisms.

Extracting data from underwater imagery is thus a major portion of the budget for coral reef monitoring programs. Any procedure that automates or streamlines part of the image analysis process could, therefore, significantly reduce costs. Indeed, several software packages designed to streamline the point counting process have been released in the past decade¹ [1]. These packages do not actually automate the extraction of data; rather, they streamline the process by facilitating manual data entry and storage.

Color matching, laser line scan imagery, and texture segmentation are three approaches that have been explored previously in attempts to automate the classification of underwater imagery. Classification using color segmentation of standard broad-band imagery has not been successful [2]. Interactive classification using color matching was more successful [2], but very time consuming. Underwater laser line scan imagery showed potential for automatically classifying the seabed in coral reef environments [3], but laser line scan instruments are currently far too expensive for routine coral reef monitoring projects. Recently, several groups have initiated efforts to automate classification of underwater imagery using image texture in combination with neural network [4, 5] or support vector machine [6] classifiers. Such an approach may prove fruitful, but requires many training images (or portions of images) and significant computing resources.

¹ For example, PointCount99 (http://www.cofc.edu/~coral/pc99/pc99.htm) and Vidana (http://www.projects.ex.ac.uk/msel/vidana/).


Several studies have shown that, at least in principle, the hyperspectral reflectance of coral reef organisms can be used to discriminate functional groups (e.g. corals, algae, and sediment) [7-9], identify the "health" of coral tissue (live, dead, bleached) [10-14], and map coral reef communities [15, 16]. The major contribution of these efforts has been to suggest spectral bands that might be optimum for mapping and monitoring coral reefs from satellite or airborne imagery.

The initial goal of the present work was to test whether the spectral bands suggested in [7-14] could be used to automate the classification of underwater imagery (Fig. 1). A computer-controlled underwater camera with a filter wheel that holds six narrow-band (10 nm) interference filters was used to acquire multispectral images both in salt water tanks at the University of Miami and on coral reefs in the Bahamas and Florida Keys.

Attempts to classify the multispectral images using the algorithms suggested by [7-14] were not successful. As a byproduct of those experiments, however, it was noted that spectral bands near 545 and 570 nm were repeatedly identified as significant in discriminating live coral from other spectra. A ratio of the two bands chosen in that spectral range (546 and 568 nm) was able to segment coral and algae, together, from other objects but was not able to separate coral from algae.

Since segmenting coral and algae, together as one class, is not especially useful for ecological purposes, image texture analysis methods were investigated to determine if a simple spatial technique could assist the spectral processing by separating coral and algae into two distinct classes.

The revised goal of this work, therefore, was to test whether simple texture metrics in combination with narrow-band spectral imagery could automatically classify basic bottom cover types associated with coral reefs, specifically: coral, algae, and "other". The general approach was first to segment coral and algae from the background using the normalized difference ratio of images at 546 and 568 nm, then to segment coral from algae using texture metrics computed from grey level co-occurrence matrices.

II. METHODS

A. Data Acquisition

An underwater, six-band, multispectral camera (MSCAM) was constructed for image acquisition. The main components of the system included an Apogee Instruments Inc. AP47p CCD camera, a Homeyer motorized filter wheel with space for six 25.4 mm interference filters, a Tamron 24 mm f/2.5 lens, and a single-board computer for controlling the camera and filter wheel as well as storing images during acquisition. All of these components were housed in a watertight canister that was tethered to the surface via a cable supplying power and communication. The MSCAM design is almost identical to the NURADS camera [17], except that the latter used a fish-eye lens and dome port to acquire hemispherical images. The MSCAM full field of view was only about 45 degrees.

Recent literature on the hyperspectral reflectance of biotic organisms and substrates associated with coral reefs [7-14] was consulted for guidance on the spectral bands considered most useful for optical seabed classification. In Fig. 1, each row of points corresponds to the bands used by a particular study. Solid points mark band centers, while squares and diamonds mark the locations of derivatives (see legend for details). Only six filters fit in the MSCAM at once, so two sets of six filters were used; these are marked by the vertical blue and red rectangles. Bands centered at 546, 568, and 589 nm were used in both filter sets, however.

A full system calibration was performed (see [17]), but the only correction used in these experiments was for the camera lens rolloff, defined as the relative response of the imaging system as a function of location within the field of view. Most cameras have maximum response at the principal point and decreased response towards the edges of the field of view; the MSCAM rolloff was typical in this regard (Fig. 2).

Four datasets were acquired, two in the field and two in saltwater tanks at the University of Miami (Table I). During data acquisition the camera rested on a tripod in a downward-looking configuration and acquired multiple sets of images with each of the six filters. A panel of Spectralon (Labsphere, Inc.) with 20% Lambertian reflectance was placed in the camera field of view during data acquisition.

For each scene imaged, several complete sets of light (shutter open) and dark (shutter closed) images were acquired for each filter. The AP47p has 1024 x 1024 pixel resolution, but a bin factor of two was used in all cases, so the resulting images are all 512 x 512 pixels.

After data acquisition with the MSCAM, images of each scene were acquired with a Sony P-93 in order to have traditional three-band (RGB) underwater images for comparison. Images with the Sony camera were also taken from the tripod in order to maintain the same distance to the seabed.

Figure 1. Proposed spectral bands for coral reef mapping. See text for description.

Page 3: Automated classification of underwater multispectral … classification of underwater multispectral imagery for coral reef monitoring A. C. R. Gleason, R. P. Reid Division of Marine

Figure 2. System rolloff factor as a function of angle from the center pixel. For clarity, only two of the six bands are shown: 487.1 nm (solid) and 589.0 nm (dashed). The response of the other bands falls between these extremes.

B. Data Processing

The first step in data processing was to correct the raw image digital numbers (DNs) for the CCD dark current and system rolloff factor (1):

$$DN_c = (DN_L - DN_D) / R_{off}. \quad (1)$$

$DN_L$ and $DN_D$ are the digital numbers in the light and dark current images, respectively, $R_{off}$ is the correction for system rolloff (Fig. 2), and $DN_c$ are the corrected digital numbers. The corrected $DN_c$ were then converted to reflectance ($R$) by dividing all pixels in the image by $DN_{c,s}$, the average corrected digital number observed over the Spectralon panel (2):

$$R = DN_c / DN_{c,s}. \quad (2)$$

All calculations in (1) and (2) are performed on a pixel-by-pixel basis for each band, so all terms except $DN_{c,s}$ are functions of row, column, and wavelength.
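To make the correction concrete, the following MATLAB sketch applies (1) and (2) to a single band. The file names, the rolloff variable, and the panel location are hypothetical placeholders; the paper does not publish its processing code.

```matlab
% Sketch of eqs. (1)-(2) for a single band. File names and the panel
% location are illustrative placeholders, not the authors' actual data.
DNL = double(imread('light_546nm.tif'));  % shutter-open (light) image
DND = double(imread('dark_546nm.tif'));   % shutter-closed (dark) image
load('rolloff_546nm.mat', 'Roff');        % 512x512 system rolloff factor

DNc = (DNL - DND) ./ Roff;                % eq. (1): corrected DNs

% Average corrected DN over the Spectralon panel (hypothetical region)
panelMask = false(size(DNc));
panelMask(201:240, 261:300) = true;       % assumed panel pixel locations
DNcs = mean(DNc(panelMask));

R = DNc ./ DNcs;                          % eq. (2): reflectance image
```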

TABLE I: IMAGE ACQUISITION SUMMARY

Shorthand | Location | Lighting | N
Tank 1 | UM Tank | Ambient | 1
Grecian | 25.10603 N / 80.31810 W | Ambient | 11
Andros | 24.71900 N / 77.77012 W | Ambient | 16
Tank 2 | UM Tank | Artificial | 5

Shorthand is the name used to refer to the dataset within the text. Location is the decimal latitude / longitude of the dataset in degrees, or "UM Tank" for a saltwater flow-through tank on the University of Miami / RSMAS campus. "Ambient" lighting was provided by the sun; "artificial" lighting was provided by two Deep Sea Power and Light 500 W MultiSeaLite halogen lamps. Images with artificial lighting were acquired at night. N is the number of image sets (all six filters) that were averaged to create the single image used for analysis.

The average reflectance image (RA) for each band was then computed as the mean of all N images acquired in each dataset (see Table I). The normalized difference ratio of the bands centered at 568 and 546 nm (ND568-546) was then computed from the averaged reflectance values (3):

$$ND_{568-546} = ( R_{A,568} - R_{A,546} ) / ( R_{A,568} + R_{A,546} ). \quad (3)$$

A threshold for ND568-546 was chosen interactively to segment pixels containing coral and algae from "background" pixels containing anything else (Table II). Photosynthetic organisms such as corals and algae tend to have higher values of ND568-546 than other organisms, sediments, and bare rock. All pixels below the threshold were set to zero.
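A minimal MATLAB sketch of (3) and the spectral threshold follows, assuming RA568 and RA546 hold the averaged reflectance images (synthetic placeholders below so the sketch runs standalone) and using the Andros threshold from Table II as an example:

```matlab
% Sketch of eq. (3) and the interactive spectral threshold. RA568 and
% RA546 stand in for the averaged reflectance images at 568 and 546 nm.
RA568 = rand(512);  RA546 = rand(512);    % synthetic placeholders

ND = (RA568 - RA546) ./ (RA568 + RA546);  % eq. (3)

thresh = 0.08;                            % e.g. the Andros value, Table II
NDt = ND;
NDt(ND < thresh) = 0;                     % mask background pixels to zero
```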

The grey level co-occurrence matrix (GLCM) algorithm provided with MATLAB (R2006b; The MathWorks, Inc., Natick, MA) was used to compute four texture metrics from the thresholded ND568-546 image. The MATLAB Image Processing Toolbox documentation² contains details on the calculations and provides two references [18, 19]. In addition, descriptions of the algorithm are available in many image processing textbooks (e.g. [20]), so only a brief overview is provided here.

The approach was first to divide each thresholded ND568-546 image into a number of small blocks and then to compute the GLCM and four derivative texture metrics for each block. An eight by eight pixel block size was used, resulting in texture images with an effective resolution of 64 by 64 pixels because the texture measures returned for each block are scalars. In practice, all of the pixels in the original block were assigned the same output value in order to maintain the original image resolution (512 by 512).

The four texture metrics output from the MATLAB GLCM implementation are: "Contrast", a measure of the average intensity differences between neighboring pixels; "Correlation", a measure of the average correlation between neighboring pixels; "Energy", a measure of how many different combinations of neighboring grey level values exist in the image; and "Homogeneity", a measure of how many pixels have the same value as their neighbor. Visual inspection of the texture metric images from each of the four datasets indicated that algae and coral had virtually identical values of "Contrast", but typically they had different values for the other three texture metrics. Therefore, "Contrast" images were omitted from the rest of the analysis, and thresholds were visually established for the other three texture metrics at a level that maximized the discrimination between coral and algae (Table II).
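The per-block texture computation might look like the sketch below, using the Image Processing Toolbox functions graycomatrix and graycoprops. The NumLevels and GrayLimits settings and the loop structure are assumptions; the paper does not report its GLCM parameters.

```matlab
% Sketch of block-wise GLCM texture on the thresholded ND image (NDt).
% Requires the Image Processing Toolbox. NumLevels = 8 and GrayLimits
% = [0 1] are assumed parameters, not reported in the paper.
NDt = rand(512); NDt(NDt < 0.3) = 0;      % placeholder thresholded image
blk = 8;                                  % 8x8 pixel blocks (see text)
n = size(NDt, 1) / blk;                   % 64 blocks per side
homog = zeros(n);                         % one scalar per block
for i = 1:n
    for j = 1:n
        b = NDt((i-1)*blk+1 : i*blk, (j-1)*blk+1 : j*blk);
        g = graycomatrix(b, 'NumLevels', 8, 'GrayLimits', [0 1]);
        s = graycoprops(g, {'Correlation', 'Energy', 'Homogeneity'});
        homog(i, j) = s.Homogeneity;      % Correlation/Energy analogous
    end
end
texHomog = kron(homog, ones(blk));        % replicate back to 512x512
```

The kron expansion mirrors the paper's choice of assigning every pixel in a block the same scalar output so the texture image keeps the original 512 by 512 resolution.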

² http://www.mathworks.com/access/helpdesk/help/pdf_doc/images/images_tb.pdf


TABLE II: THRESHOLDS USED FOR EACH ALGORITHM AND DATASET

Dataset | Spectral: ND568-546 | GLCM: Correlation | GLCM: Energy | GLCM: Homogeneity
Tank 1 | 0.10 | 0.37 + | 0.12 * | 0.64 *
Grecian | 0.05 | 0.75 + | 0.05 * | 0.63 +
Andros | 0.08 | 0.35 + | 0.20 * | 0.50 +
Tank 2 | 0.05 | 0.35 + | 0.20 * | 0.68 *

Thresholds are all unitless. Pixels with values less than the spectral threshold were masked to a zero value (background). Pixels with values greater than the spectral threshold were then assigned to the coral or algae class based on their value relative to each of the GLCM thresholds. For GLCM thresholds marked with an asterisk (*), values below the threshold were assigned to the coral class and values above it to the algae class; the reverse holds for GLCM thresholds marked with a cross (+).
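In code, this decision rule could be applied per metric as in the following sketch, shown for the Andros homogeneity entry (threshold 0.50, marked +). The 0/1/2 class coding and the variable names are illustrative conventions, not the authors' implementation.

```matlab
% Sketch of the Table II rule for one metric (Andros homogeneity:
% threshold 0.50, marked '+'). Class coding 0/1/2 = background/coral/
% algae is an illustrative convention.
texHomog = rand(512);                     % placeholder texture image
NDt = rand(512); NDt(NDt < 0.3) = 0;      % placeholder thresholded ND
t = 0.50; star = false;                   % '+' entry, so star = false
coral = texHomog < t;                     % '*' rule: below threshold
if ~star                                  % '+' reverses the assignment
    coral = ~coral;
end
cls = zeros(512);                         % 0 = background
cls(coral  & NDt > 0) = 1;                % 1 = coral
cls(~coral & NDt > 0) = 2;                % 2 = algae
```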

The final step in processing was to smooth the output.

Following the GLCM thresholding step, every pixel had been placed in one of three classes: coral, algae, or background. Furthermore, each dataset had three classified images, one based on each of the three GLCM metrics. The goal of the smoothing step was to consolidate the three classified images into a single classified image. A sliding window and majority filter were used for the smoothing. The window size was 24 by 24 pixels, corresponding to a 3 by 3 set of blocks submitted to the GLCM algorithm. For each pixel in the image, a 24 by 24 block surrounding that pixel was extracted from each of the three GLCM classifications, and the single output was assigned to the class with the most pixels in that 1728 pixel (24 x 24 x 3) vector.
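A direct, if slow, MATLAB sketch of this majority filter is given below; clsCorr, clsEnergy, and clsHomog stand in for the three per-metric classifications and are synthetic placeholders here. The brute-force loop trades speed for clarity.

```matlab
% Sketch of the 24x24 sliding-window majority filter that merges the
% three GLCM classifications (0/1/2 class codes; placeholders here).
clsCorr = randi([0 2], 512); clsEnergy = randi([0 2], 512);
clsHomog = randi([0 2], 512);
clsAll = cat(3, clsCorr, clsEnergy, clsHomog);
[r, c, ~] = size(clsAll);
final = zeros(r, c);
h = 12;                                   % half the 24-pixel window
for i = 1:r
    for j = 1:c
        ri = max(1, i-h) : min(r, i+h-1); % window rows, clipped at edges
        ci = max(1, j-h) : min(c, j+h-1); % window columns
        v = clsAll(ri, ci, :);            % up to 24 x 24 x 3 votes
        final(i, j) = mode(v(:));         % class with the most votes
    end
end
```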

C. Accuracy Assessment

The accuracy of the classified images was assessed by point counting the original input images. For each of the datasets, an analyst used CPCe software [1] to place 400 random points over a false color image created from three of the six MSCAM bands. The analyst then classified each of the points as coral, algae or background.

Results from point counting were compared with the classified images in two ways. First, for an image-wide comparison, the percentage of points identified by the analyst as coral, algae, and background was compared to the percentage of pixels assigned to those classes by the spectral / texture algorithm. For each of the three classes (C), the accuracy ($A_C$) for that class was computed following (4), where $P_{I,C}$ is the percentage of pixels in the image classified as class C, and $P_{A,C}$ is the percentage of points identified by the analyst as class C:

$$A_C = 1 - ( P_{I,C} - P_{A,C} ) / P_{A,C}. \quad (4)$$

Second, for a point-based comparison, the class of each of the 400 points identified by the analyst was extracted from the corresponding pixel in the classified image. An error matrix was then constructed for each dataset [21], and the overall accuracy was computed following (5), where $A_O$ is the total overall accuracy, $N_C$ is the number of points correctly classified in class C, and $N_T$ is the total number of points counted (400):

$$A_O = \sum_C N_C / N_T. \quad (5)$$

The analysis was repeated both on the final classified image, which had coral, algae, and background classes, and on the thresholded ND568-546 image, which is also a simple classification with just two classes: coral and algae combined, and background.
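Both accuracy measures reduce to a few lines of MATLAB, sketched below with synthetic labels. The 1/2/3 class coding and all variable names are illustrative assumptions.

```matlab
% Sketch of eqs. (4)-(5) with synthetic labels (1/2/3 = coral/algae/
% background). 'analyst' and 'predicted' are the class labels at the
% 400 random points; 'cls' is the full classified image.
analyst = randi(3, 400, 1); predicted = randi(3, 400, 1);
cls = randi(3, 512);

% Image-wide accuracy, eq. (4), one value per class
PI = 100 * histc(cls(:),  1:3) / numel(cls);     % % cover, classified
PA = 100 * histc(analyst, 1:3) / numel(analyst); % % cover, point count
AC = 1 - (PI - PA) ./ PA;                        % eq. (4)

% Point-based overall accuracy, eq. (5), via the error matrix [21]
E  = accumarray([analyst predicted], 1, [3 3]);  % 3x3 error matrix
AO = sum(diag(E)) / sum(E(:));                   % eq. (5)
```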

D. Alternate Processing

Two alternate processing methods were employed for comparison with the ND568-546 / GLCM algorithm described above. First, the "select / color range" tool in Photoshop software (v. 9.0; Adobe Systems Inc., San Jose, CA) was used to interactively color match pixels identified by an analyst as corals. Second, the GLCM texture metrics were calculated without thresholding the ND568-546 input image.

The products of these alternate processing methods were assessed by qualitative comparison with the results of the full processing chain.

III. RESULTS

An example of the raw data, processing flow, and result is shown for the Grecian dataset in Fig. 3. The raw data, ND568-546 images, and resulting classified images are shown for all four datasets in Fig. 4.

The classified images agree well with the point count data when considering image-wide accuracy, which assesses estimates of the total percentage of each class present in the scene. The image-wide accuracy, defined by (4), was greater than 70% for 11 of 12 classes and greater than 75% for 9 of 12 classes (Table III).

The classified images also agree well with the point count data when considered on a point-by-point basis. The overall accuracies, as computed from error matrices using (5), were greater than 80% for three of the four datasets (Table IV).

Interactive color matching worked well on individual corals, but did not work well segmenting multiple corals in the same image, especially when they were different species. Fig. 5 shows an example for the Tank 1 dataset in which pixels on the coral colony in the upper right were selected for color matching. Note that most of the pixels on this particular colony were segmented well from the background, but most of the pixels on the other coral colonies in the image were not selected.

TABLE III: IMAGE-WIDE ACCURACY

Dataset | Coral | Algae | Background
Tank 1 | 74 % | 71 % | 98 %
Grecian | 77 % | 96 % | 77 %
Andros | 78 % | 11 % | 81 %
Tank 2 | 94 % | 93 % | 98 %

Image-wide accuracy was computed using (4) for each dataset.


Figure 3. Example of processing for the Grecian dataset. A: False color image composed of three multispectral camera bands (589, 548, 487 nm) displayed in RGB. B: Normalized difference ratio of bands at 568 and 546 nm with values below the spectral threshold in Table II set to zero. C-E: GLCM texture images for correlation, energy, and homogeneity, respectively. F-H: Classified images computed by applying the thresholds in Table II to the GLCM texture images (C-E). I: Final classified image created by smoothing images (F-H) with a majority filter. Colors in (F-I): coral (red), algae (green), background (black).


Figure 4. Summary of results. Left column: False color images composed of three multispectral camera bands (589, 548, 487 nm) displayed in RGB. Center column: Normalized difference ratio of bands at 568 and 546 nm with values below the spectral threshold in Table II set to zero. Right column: Final smoothed classified images with three classes: coral (red), algae (green), background (black). The datasets are shown in rows in the same order as Tables I-IV: Tank 1, Grecian, Andros, Tank 2, from top to bottom.


TABLE IV: POINT-BASED OVERALL ACCURACY

Dataset | Two classes: C+A, B | Three classes: C, A, B
Tank 1 | 93 % | 86 %
Grecian | 71 % | 67 %
Andros | 84 % | 84 %
Tank 2 | 87 % | 87 %

Overall accuracy was computed using (5) for each dataset. The analysis was performed on both the final classified image with three classes: coral (C), algae (A), and background (B), as well as on the intermediate thresholded ND568-546 image, which had just two classes: coral and algae combined (C+A), and background.

Figure 5. Sample interactive color matching results from the Tank 1 dataset. Left: RGB image from a Sony P-93 underwater camera. Right: Color matching from interactively picking pixels on the Siderastrea siderea coral colony in the upper right of the image.

Figure 6. Comparison of texture classification with and without spectral thresholding. GLCM homogeneity texture images (left) and resulting classifications (right) for the Tank 1 dataset. Top row: calculations with no ND568-546 threshold applied. Bottom row: standard processing.

GLCM texture metrics calculated from the ND568-546 image without first thresholding gave no indication that GLCM texture can be used to segment background from coral and algae. Texture values in areas of the background were very similar to those found over coral and algae (Fig. 6, upper left).

IV. DISCUSSION

As mentioned in the introduction, initial experiments testing whether the spectral bands suggested in [7-14] could be used for automated classification of underwater imagery were not successful. The results presented above, however, indicate that high spectral resolution may nevertheless be useful for underwater image classification because such data can greatly simplify subsequent processing by a texture classification algorithm.

High spectral resolution is an improvement over coarse spectral bands for classifying underwater imagery; the evidence for this is that ND568-546 segmented coral and algae from the background more accurately than color matching did (Fig. 5).

Narrow spectral bands alone are not a complete solution for automated underwater classification of reef images, however. Segmenting coral and algae, together as one class, from the background is not a useful end in itself. On the other hand, segmenting coral and algae with a narrow spectral band ratio was useful in this study when combined with image texture measures. The evidence for this was that thresholds of texture images computed with the GLCM algorithm were able to separate coral and algae after the background had been identified with the narrow band ratio, but they could not do so without the prior application of the narrow band ratio (Fig. 6).

The narrow band normalized difference ratio ND568-546 segments coral and algae together as one class, generally with accuracy greater than 80% (Table IV, two classes). A simple texture measure separates them into two classes with only a small drop in overall accuracy (Table IV, three classes).

The approach followed here was semi-automated in the sense that some user intervention was required to select the four thresholds that need tuning for each dataset. Although a fully automated system would be ideal, the semi-automated approach was still faster than point counting, and there are at least three reasons to expect additional progress could be made toward a fully automated system.

First, the range of thresholds to search was fairly small. The thresholds for ND568-546, GLCM Correlation, and GLCM Homogeneity varied by about a factor of two. It would be easy to create an interface to facilitate threshold picking so that many images could be analyzed quickly.

Second, no theoretical work particularly relevant for guiding the choice of GLCM parameters has been performed on the texture properties of reef benthos. This differs significantly from the situation for spectral parameters, for which a number of previous studies suggested that wavelength regions around 546 and 568 nm were good for discriminating corals, algae, and substrate. Upon investigation, it may be possible to find better values for the GLCM block size, offset, and smoothing parameters than the nominal values used above.

Third, the GLCM algorithm was chosen simply for convenience, due to its existing implementation in MATLAB. Better results may be expected with more sophisticated texture algorithms, such as those being pursued by [4-6], but the point is that regardless of the texture algorithm used, acquiring data in narrow spectral bands is likely to improve the results.

V. CONCLUSION

High spectral resolution was an improvement over coarse spectral bands for classifying underwater imagery, but narrow spectral bands alone were not a complete solution for automated classification of reef images. The important result was that acquiring underwater imagery in narrow spectral bands and pre-processing the data with a band ratio greatly simplified texture classification. Texture classification became easy enough, in fact, that the GLCM approach, which on its own was not useful for segmenting these images, was found to produce reasonable results when combined with ND568-546.

The potential for full automation of underwater high spectral resolution imagery is promising and will benefit from three future areas of research: 1) careful correction for scene geometry, 2) combination of spectral and more sophisticated texture algorithms, 3) additional research on the texture properties of corals, algae, other organisms and the substrate associated with coral reef environments.

ACKNOWLEDGMENT

Research in the Florida Keys National Marine Sanctuary was conducted under permit FKNMS-2004-010. Field support was provided by Capt. M. Birns (NURC), T. Szlyk (AUTEC), and D. Lirman, B. Gintert (U. Miami / RSMAS). W. Cooper and D. Lirman (U. Miami / RSMAS) enabled the salt water tank experiments.

REFERENCES

[1] K. E. Kohler and S. M. Gill, "Coral Point Count with Excel extensions (CPCe): A Visual Basic program for the determination of coral and substrate coverage using random point count methodology," Comput. Geosci., vol. 32, pp. 1259-1269, 2006.

[2] S. P. Bernhardt and L. R. Griffing, "An Evaluation of Image Analysis at Benthic Sites Based on Color Segmentation," Bull. Mar. Sci., vol. 69, pp. 639-653, 2001.

[3] C. H. Mazel, M. P. Strand, M. P. Lesser, M. P. Crosby, B. Coles, and A. J. Nevis, "High-resolution determination of coral reef bottom cover from multispectral fluorescence laser line scan imagery," Limnol. Oceanogr., vol. 48, pp. 522-534, 2003.

[4] O. Pizarro, S. Williams, I. Mahon, P. Rigby, M. Johnson-Roberson, and C. Roux, "Advances in Underwater Surveying and Modelling from Robotic Vehicles on the Great Barrier Reef," EOS Trans. AGU Ocean Sci. Meet. Suppl., Abstract OS15I-08, vol. 87, 2006.

[5] T. H. Konotchick, D. I. Kline, D. B. Allison, N. Knowlton, S. J. Belongie, D. J. Kriegman, G. W. Cottrell, A. D. Bickett, N. Butko, C. Taylor, and B. G. Mitchell, "Computer Vision Technology Applications for Assessing Coral Reef Health," EOS Trans. AGU Ocean Sci. Meet. Suppl., Abstract OS15I-06, vol. 87, 2006.

[6] A. Mehta, E. Ribeiro, J. Gilner, and R. van Woesik, "Coral Reef Texture Classification Using Support Vector Machines," presented at International Conference of Computer Vision Theory and Applications - VISAPP, Barcelona, Spain, 2007.

[7] E. J. Hochberg and M. J. Atkinson, "Spectral discrimination of coral reef benthic communities," Coral Reefs, vol. 19, pp. 164-171, 2000.

[8] E. J. Hochberg and M. J. Atkinson, "Capabilities of remote sensors to classify coral, algae, and sand as pure and mixed spectra," Remote Sens. Environ., vol. 85, pp. 174-189, 2003.

[9] E. J. Hochberg, M. J. Atkinson, and S. Andrefouet, "Spectral reflectance of coral reef bottom-types worldwide and implications for coral reef remote sensing," Remote Sens. Environ., vol. 85, pp. 159-173, 2003.

[10] C. D. Clark, P. J. Mumby, J. R. M. Chisholm, J. Jaubert, and S. Andrefouet, "Spectral discrimination of coral mortality states following a severe bleaching event," Int. J. Remote Sens., vol. 21, pp. 2321-2327, 2000.

[11] H. Holden and E. LeDrew, "Spectral Discrimination of Healthy and Non-Healthy Corals Based on Cluster Analysis, Principal Components Analysis, and Derivative Spectroscopy," Remote Sens. Environ., vol. 65, pp. 217-224, 1998.

[12] H. Holden and E. LeDrew, "Hyperspectral identification of coral reef features," Int. J. Remote Sens., vol. 20, pp. 2545-2563, 1999.

[13] H. Holden and E. LeDrew, "Hyperspectral discrimination of healthy versus stressed corals using in situ reflectance," J. Coast. Res., vol. 17, pp. 850-858, 2001.

[14] H. Holden and E. LeDrew, "Measuring and modeling water column effects on hyperspectral reflectance in a coral reef environment," Remote Sens. Environ., vol. 81, pp. 300-308, 2002.

[15] C. D. Mobley, L. K. Sundman, C. O. Davis, T. V. Downes, R. A. Leathers, M. Montes, J. H. Bowles, W. P. Bissett, D. D. R. Kohler, R. P. Reid, E. M. Louchard, and A. C. R. Gleason, "A Spectrum-Matching and Look-up-table Approach to Interpretation of Hyperspectral Remote-sensing Data," Appl. Optics, vol. 44, pp. 3576-3592, 2004.

[16] E. M. Louchard, R. P. Reid, F. C. Stephens, C. O. Davis, R. A. Leathers, and T. V. Downes, "Optical remote sensing of benthic habitats and bathymetry in coastal environments at Lee Stocking Island, Bahamas: A comparative spectral classification approach," Limnol. Oceanogr., vol. 48, pp. 511-521, 2003.

[17] K. J. Voss and A. L. Chapin, "Upwelling radiance distribution camera system, NURADS," Opt. Express, vol. 13, pp. 4250-4262, 2005.

[18] R. M. Haralick and L. G. Shapiro, Computer and Robot Vision, Vol. 1. Reading, MA: Addison-Wesley, 1992.

[19] R. M. Haralick, K. Shanmugam, and I. Dinstein, "Textural Features for Image Classification," IEEE Trans. Syst., Man, Cybern., vol. 3, pp. 610-621, 1973.

[20] R. Jain, R. Kasturi, and B. G. Schunck, Machine Vision. Boston: McGraw Hill, 1995.

[21] R. G. Congalton and K. Green, Assessing the Accuracy of Remotely Sensed Data: Principles and Practices. Boca Raton: Lewis Publishers, 1999.