
[IEEE 2006 International Conference on Advances in Space Technologies - Islamabad, Pakistan (2006.09.2-2006.09.3)]


Digital Image Processing of High Resolution Aerial Photograph of Shallow Marine Sanctuary, Victoria,

Australia

Fawad Saeed Water and Power Development Authority (WAPDA)

E-mail: [email protected]

Abstract-This paper explores the potential of remote sensing as a tool for studying marine sanctuaries. There are two main methods for extracting information from aerial photographs or satellite imagery: visual interpretation and digital image processing. Visual interpretation takes advantage of human skill in recognizing data "content" by combining several elements of image interpretation; it relies on experience, prior knowledge, and the qualities of a skilled analyst to produce excellent results. Digital image classification, by contrast, employs computer techniques that are mostly based on the reflectance values of individual pixels and use statistical pattern-recognition methods. In the current study, the main interest was the distribution of sand, rocky reef, rocky rubble, and sea grass in the sanctuary. The study showed that supervised classification was a superior technique for such studies compared with other traditional approaches: the high degree of interaction between the analyst and the machine allows each to compensate for the limitations of the other. Furthermore, the analyst is given the opportunity to "control" which digital signatures best represent a given resource class, since the analyst has access to ground-truth data with which to fine-tune the classification.

I. INTRODUCTION

In 2002, the Victorian government declared a system of Marine National Parks and Marine Sanctuaries. These areas were intended to protect and promote biodiversity within Victoria’s unique marine environment. Marine sanctuaries are very small and are normally designed to protect a specific feature, whereas Marine National Parks are designed to protect much larger areas. Rickett’s Point is one of three such marine sanctuaries. Here, high-resolution aerial imagery was digitally processed with the aim of identifying and differentiating among sand, rocky reef, rocky rubble, and sea grass, which should help in developing a detailed habitat map of the Rickett’s Point sanctuary for future management and environmental monitoring.

Fig 1. Site Map of the Rickett’s Point Sanctuary

The sanctuary is extremely shallow, with a maximum depth of less than five meters. This paper focuses on pre-classification processing techniques (image enhancement/stretching in the RGB bands, ratioing) and digital analysis (principal component analysis, supervised classification, and unsupervised classification). The level of success in differentiating sand, rocky reef, rocky rubble, and sea grass is verified through the generation of an error matrix.

II. BACKGROUND

A. The study area

Rickett’s Point is located along the suburban shoreline of Melbourne, Australia (fig. 1), on the fault-bounded eastern shore of Port Phillip Bay, whose geology is indurated Tertiary sandstone. This gives rise to a high-profile reef with only small areas of sea grass.

III. METHODOLOGY

A. Selection of the Remotely Sensed Data

Today most remotely sensed imagery is available in digital format, which has made information extraction by computer a standard practice. In this study, a high-quality, cloud-free, high-resolution aerial photograph in

1-4244-0514-9/06/$20.00 ©2006 IEEE


Fig 2. Raw unmasked image

RGB bands (fig. 2) was used and analyzed both visually and digitally. MicroImages TNTmips 6.9 software was used for the digital analysis.

B. Image Analysis System

The following steps were taken in analyzing the image.

a) Pre-Processing

Rectification and reprojection of the imagery to a standard coordinate system were performed first. Geometric correction was applied for distortion due to the Earth's rotation and other imaging conditions (such as oblique viewing), and the image was transformed to a Transverse projection with the Australian spheroid. In the image, land areas are relatively bright compared with the water body; this high contrast limits the amount of enhancement that can be applied to the image as a whole. Land and water regions were therefore separated by masking prior to the other processing steps.

b) Image Enhancement

For visual interpretation, the image was displayed one band at a time as well as three bands at a time (as a colour composite) to produce "true colour". The image was also enhanced to facilitate visual interpretation and classified to produce a digital thematic map. In this way, the colours of the resulting composite closely resemble what would be observed by the human eye [13], [6].

There are several techniques for enhancing an image, such as contrast stretching and spatial filtering; both were used here for better differentiation among features.

c) Contrast Stretching

Visual interpretation of raw image data is often difficult because the brightness values are concentrated in a narrow range rather than spread over the entire grey-scale range of 0 to 255 [14]. A contrast stretch was applied to overcome this limitation.

As the term implies, the horizontal axis of the frequency distribution of brightness values is rescaled so that the full range is exploited. The input histograms show that most pixel values in the three rasters fall in the lower part of the data range (fig. 3), and none of the bands has a significant number of pixels with high values. One can therefore improve the image appearance by lowering the maximum input limit for each colour component. Lowering these limits by roughly similar amounts shifts the display means to higher values, which brightens the image without adversely affecting the colour balance. Along with the linear stretch, a non-linear stretch was also tried, but the non-linearly stretched image was visually less informative than the linearly stretched one. Fig. 4 illustrates the effect of the linear stretch. Note that the minimum digital number for each band is not zero; usually the maximum digital number of each band is also not 255. The effect of linear stretching on contrast in the composite image is shown before (Appendix fig. 5) and after (Appendix fig. 6) in colour composite form.
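The rescaling described above amounts to a simple per-band linear mapping. A minimal numpy sketch (the function name, the optional tail-clipping parameter, and the synthetic band are illustrative assumptions, not the paper's actual procedure) might look like:

```python
import numpy as np

def linear_stretch(band, out_min=0, out_max=255, clip_percent=0):
    """Linearly rescale a band so its values span the full display range.

    `clip_percent` optionally trims outliers at both histogram tails
    before stretching, a common variant of the basic linear stretch.
    """
    lo = np.percentile(band, clip_percent)
    hi = np.percentile(band, 100 - clip_percent)
    stretched = (band.astype(float) - lo) / (hi - lo) * (out_max - out_min) + out_min
    return np.clip(stretched, out_min, out_max).astype(np.uint8)

# Synthetic band whose values crowd the lower part of the 0-255 range,
# mimicking the raw-image histograms described above.
rng = np.random.default_rng(0)
band = rng.integers(20, 90, size=(100, 100)).astype(np.uint8)
out = linear_stretch(band)
print(band.min(), band.max(), out.min(), out.max())
```

After the stretch, the minimum and maximum input values map to 0 and 255, spreading the compressed histogram over the full grey-scale range.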

The linear stretch enabled differentiation of features (sand, rocky reef, and grass) that are not very prominent in the raw image.

d) Spatial Filtering

Spatial filters are designed to highlight or suppress features in an image based on their spatial frequency. The spatial frequency is related to the textural characteristics of an image. Rapid variations in brightness levels ('roughness') reflect a high spatial frequency; 'smooth' areas with little variation in brightness level or tone are characterized by a low spatial frequency.

Both low-pass (Appendix fig. 7) and high-pass filters (Appendix fig. 8), along with noise-reducing filters (Appendix fig. 9), were tried to improve visual interpretation and feature classification, but as shown, they did not produce the desired effect; the unfiltered data were therefore used for feature classification. Filtering individual bands also gave uninterpretable results. Additional information can still be extracted from digital images through a number of other techniques, such as false colour composites, image ratios, and principal components analysis, whose purpose is to make it easier for the analyst to interpret the area or phenomenon being studied. In the current research, since the image is composed only of RGB colours (Band 1, Band 2, and Band 3), a false colour composite could not be used.

e) Band Ratioing
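The low-pass and high-pass filters above are neighbourhood (kernel) operations. The following numpy sketch shows the idea with a 3x3 mean kernel and a 3x3 edge-enhancing kernel; the helper function and synthetic band are illustrative assumptions, not the software's actual filters:

```python
import numpy as np

def convolve2d(img, kernel):
    """Minimal 3x3 neighbourhood filter with edge replication (sketch).

    For the symmetric kernels used here, correlation and convolution
    coincide, so no kernel flip is needed.
    """
    k = kernel.shape[0] // 2
    padded = np.pad(img.astype(float), k, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for i in range(kernel.shape[0]):
        for j in range(kernel.shape[1]):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

low_pass = np.full((3, 3), 1 / 9)                   # smoothing (mean) kernel
high_pass = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=float)   # edge-enhancing kernel

rng = np.random.default_rng(1)
band = rng.integers(0, 256, size=(64, 64)).astype(float)
smooth = convolve2d(band, low_pass)    # lowers spatial frequency ("smoother")
edges = convolve2d(band, high_pass)    # emphasizes rapid brightness changes
```

The smoothed output has reduced variance (lower spatial frequency), while the high-pass output responds only where brightness changes rapidly, matching the roughness/smoothness description above.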

Band ratioing is probably the most common arithmetic operation applied to images in geological, ecological, and agricultural applications of remote sensing. Ratio images are enhancements resulting from dividing the digital number (DN) of one spectral band by the corresponding DN of another band. This can iron out differences in scene illumination due to cloud or topographic shadow. Ratio


images also bring out spectral variation in different target materials. Interpretation of ratio images must allow for the fact that they are "intensity blind", i.e., dissimilar materials with different absolute reflectance but similar relative reflectance in the bands used will look the same in the output image.

In our case the band ratio B2/B1 (marine vegetation ratio) was used, but no very distinctive results were obtained beyond very clear, bright sand and light grey shades of possible grass or rocky reef; the same features were already visible in the raw image.
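The B2/B1 operation reduces to a per-pixel division; a minimal numpy sketch with a guard against zero denominators, plus a hypothetical normalized-difference helper of the NDVI family, might look like:

```python
import numpy as np

def band_ratio(num, den, eps=1e-6):
    """Per-pixel ratio of two bands, guarding against division by zero."""
    return num.astype(float) / np.maximum(den.astype(float), eps)

def normalized_index(b_a, b_b, eps=1e-6):
    """NDVI-style normalized difference: (B_a - B_b) / (B_a + B_b)."""
    b_a = b_a.astype(float)
    b_b = b_b.astype(float)
    return (b_a - b_b) / np.maximum(b_a + b_b, eps)

# Synthetic bands standing in for the aerial photograph's B1 and B2.
rng = np.random.default_rng(2)
b1 = rng.integers(1, 256, size=(32, 32))
b2 = rng.integers(1, 256, size=(32, 32))
mvr = band_ratio(b2, b1)          # analogous to the B2/B1 ratio used above
ndi = normalized_index(b2, b1)    # values confined to (-1, 1)
```

Because the normalized form divides the difference by the sum, scene-illumination differences that scale both bands equally largely cancel out, which is the "intensity blind" property noted above.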

A marine NDVI [7] was also tested for studying sea grass, but surprisingly it showed nothing, and the image lost all of its spectral information in the process, even though calculating such indices to enhance vegetation or geology is very common in image processing. This might be due to the very shallow bottom of the water body and the presence of algae-covered rocks, which did not give the features very distinct digital signatures. NDVI also appears to be a poor indicator where vegetation cover is thin [4].

f) Principal Components Analysis

Principal Components Analysis (PCA) was also tried. PCA is a multivariate statistical technique that groups highly correlated variables into a single index and is primarily used to uncover the underlying information in highly correlated spectral data [9]. Different bands in multispectral images look similar because reflectances for the same surface cover types are almost equal. PCA is a statistical procedure designed to reduce this data redundancy [12] and to pack as much information as possible from the image bands into the fewest components.

The first principal component is the direction of greatest spread (variance) in the data. The second component is the direction perpendicular to the first with the next largest variance, and the remaining components are determined by the requirement of mutually perpendicular axes. Transforming the raw data with PCA produced principal component images that were more meaningful than the original data; however, it was still difficult to differentiate between sand, rocky reef, and sea grass (Appendix figs. 10, 11, and 12). The following statistics were obtained from the PCA.

TABLE 1. EIGENVALUES AND ASSOCIATED PERCENTAGES

                PC 1     PC 2    PC 3
Eigenvalue      1405.2   24.7    6.6
% of variance   97.8     1.7     0.4
Cumulative %    97.8     99.5    100.0

TABLE 2. EIGENVECTORS (FACTOR SCORES) COMPUTED FOR THE COVARIANCE MATRIX

         PC 1    PC 2    PC 3
Band 1   0.67    0.39    0.41
Band 2   -0.56   0.09    -0.10
Band 3   0.46    -0.77   -0.04

TABLE 3. DEGREE OF CORRELATION BETWEEN EACH INPUT RASTER BAND AND EACH PRINCIPAL COMPONENT

         PC 1    PC 2    PC 3
Band 1   0.96    0.26    0.06
Band 2   0.99    0.02    -0.08
Band 3   0.99    -0.11   0.04

TABLE 4. TRANSFORMATION MATRIX

         PC 1    PC 2    PC 3
Band 1   0.23    0.55    0.25
Band 2   0.36    0.06    -0.46
Band 3   0.40    -0.38   0.28

Table 1 shows the eigenvalues for each component: the first principal component accounts for 97.81% of the variance in the entire multispectral dataset, the second for 1.72%, and the third for 0.46%, bringing the total to 100%. It is then possible to determine how strongly each band loads onto each principal component by computing correlations; this yields a new n x n matrix of factor loadings, shown in Table 3. Principal components 1-3 should be the most useful to display, since they represent most of the information in all bands, but in the current research only bands B1, B2, and B3 were available, which may be why PCA did not give significantly different results from the earlier techniques.

Usually, Band 1 contributes most of the information to PC1, but here the correlations with principal component 1 are high for all three bands (0.9624, 0.9961, and 0.9928, respectively; Table 3). This may also mean that the three components were not giving distinctive results. The PCA colour composite is more informative, but it is still very difficult to distinguish sea grass from rocky reef (Appendix fig. 13).

g) Decorrelation Stretch

Decorrelation stretching enhances the colour display of highly correlated raster sets, such as the first three Landsat Thematic Mapper bands. The process performs a principal component transformation on the set of input bands, applies a contrast stretch to the components, and then reverses the transformation. When the output rasters are displayed in RGB, hue and intensity are usually similar to those of the original image, but the colour saturation is greatly increased. This enhancement exaggerates the differences in spectral properties between surface materials to a greater degree than is possible with conventional contrast enhancement of the original bands, so subtle variations in surface materials can be discriminated more easily. For this study, however, it again did not help much in differentiating features (Appendix fig. 14). This might be attributed to the shallow water, close DN values for different


features, and the availability of only a three-band image for digital processing.

h) Image Classification

One of the most common uses of remotely sensed data is to classify each pixel in a scene as belonging to some group. In digital images this process can be modelled, to some extent, by two methods: supervised and unsupervised classification.

In supervised classification the data are digitally extracted and spectrally characterized by the software, but the analyst decides on the training sites and thus supervises the classification process. While a number of algorithms are available for supervised classification, the Maximum Likelihood classifier was used here; it takes the distribution of pixels in a training region into account when deciding how to group a pixel. Suppose all the pixels in one training region have very similar spectral values, like the sea grass region in our example, while another training region contains pixels with a wide spread of values, like the rocky reef group. If a pixel lies an equal distance from the means of these two groups, it is more likely to belong to the group whose training members have the larger variance. Thus, in supervised classification we first identify the information classes, which are then used to determine the spectral classes.

Unsupervised classification is a computer method, without direction from the analyst, in which pixels with similar digital numbers are grouped into spectral classes using statistical procedures such as nearest-neighbour and cluster analysis [8]. The resulting image may then be interpreted by comparing the clusters with maps, aerial photographs, and other ground-truth material for the site; if such information is not available, scientific reasoning may be used to group the categories into land-use classes. In the current analysis, four types of features were identified, shown in the Appendix (figs. 15 and 16).
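The Maximum Likelihood decision rule described above can be sketched with per-class Gaussian statistics. This is a minimal illustration, not the TNTmips implementation; the class names, means, and spreads are hypothetical, chosen to mirror the tight sea grass cluster and spread-out rocky reef cluster in the example:

```python
import numpy as np

def train_ml(training):
    """Fit a mean vector and covariance per class from training pixels.

    `training` maps class name -> (n_pixels x n_bands) array.
    """
    stats = {}
    for name, pixels in training.items():
        mu = pixels.mean(axis=0)
        cov = np.cov(pixels, rowvar=False)
        stats[name] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return stats

def classify_ml(pixel, stats):
    """Assign the class with the highest Gaussian log-likelihood."""
    best, best_ll = None, -np.inf
    for name, (mu, inv_cov, logdet) in stats.items():
        d = pixel - mu
        ll = -0.5 * (logdet + d @ inv_cov @ d)   # constant term dropped
        if ll > best_ll:
            best, best_ll = name, ll
    return best

# Hypothetical training data: a tight "sea grass" cluster and a
# spread-out "rocky reef" cluster, in three spectral bands.
rng = np.random.default_rng(4)
training = {
    "sea grass":  rng.normal([40, 80, 30],  2, size=(100, 3)),
    "rocky reef": rng.normal([90, 90, 80], 15, size=(100, 3)),
}
stats = train_ml(training)
print(classify_ml(np.array([41.0, 79.0, 31.0]), stats))
```

Because the log-likelihood penalizes distance in units of each class's own spread, a pixel equidistant from both means is assigned to the higher-variance class, exactly the behaviour described above.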

Selecting the Error Matrix option launches a classification error analysis, which uses ground-truth data with sample areas of known class to evaluate the accuracy of the digital classification.

Fig 17. Error Matrix, supervised classification, raw image

The results are shown in the error matrices of figs. 17 and 18. Each row of the error matrix represents an output class and each column a ground-truth class; the value in each cell is the number of pixels (raster cells) with that combination of output class and ground-truth class. For each cell on the leading diagonal, the output class equals the input class, so these cells give the number of correctly classified pixels per class, while off-diagonal cells represent incorrectly classified pixels. The overall accuracy is calculated by dividing the total number of correctly classified raster cells (the sum of the leading diagonal) by the total number of cells in the ground-truth raster, expressed as a percentage. Figs. 17 and 18 show little difference in overall accuracy, but supervised classification in fact provided the more accurate results, as more raster cells were correctly recognized, as confirmed by the ground-truth data.

Fig 18. Error Matrix Un-supervised classification for raw image

It should be kept in mind that the error matrix shows classification accuracy only relative to the set of classes provided. Low accuracy values for particular classes may indicate that the sample areas were not fully representative of the class, that the class is not sufficiently distinct from other classes in its spectral properties, or that the set of classes does not include all of the significant spectral signatures in the scene. The error matrix shows two measures of accuracy for individual classes. The accuracy value for each column indicates the percentage of cells in that ground-truth class that were correctly classified; values less than 100% indicate errors of omission (ground-truth cells omitted from the output class), and this value is sometimes called the producer's accuracy. Conversely, the accuracy value for each row shows the percentage of sample cells in that output class that were correctly classified; values less than 100% indicate errors of commission (cells incorrectly included in the output class), and this value is sometimes termed the user's accuracy [10].
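The accuracy measures above follow directly from the matrix layout. A small numpy sketch (with made-up labels for four hypothetical classes, not the paper's actual cell counts) shows the overall, producer's, and user's accuracies:

```python
import numpy as np

def error_matrix(ground_truth, classified, n_classes):
    """Rows = output (classified) class, columns = ground-truth class,
    matching the layout described above."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for gt, cl in zip(ground_truth.ravel(), classified.ravel()):
        m[cl, gt] += 1
    return m

def accuracies(m):
    overall = 100.0 * np.trace(m) / m.sum()          # leading diagonal / total
    producers = 100.0 * np.diag(m) / m.sum(axis=0)   # per ground-truth column
    users = 100.0 * np.diag(m) / m.sum(axis=1)       # per output row
    return overall, producers, users

# Hypothetical labels for ten sample cells across four classes.
gt = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2, 3])
cl = np.array([0, 0, 1, 1, 1, 2, 2, 3, 2, 3])
m = error_matrix(gt, cl, 4)
overall, producers, users = accuracies(m)
print(m)
print(round(overall, 2))
```

Producer's accuracy falls below 100% when ground-truth cells of a class land in other rows (omission), and user's accuracy falls below 100% when a row contains cells from other columns (commission).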

However, the overall accuracy of 65.05% is still not satisfactory for the supervised classification; it means that only 65.05% of the sample-area cells in the training-set raster were correctly classified by the Maximum Likelihood classifier. Other combinations were also tried, such as using the principal components in place of the raw image, but the overall accuracy could not be improved (fig. 19).

Fig 19. Error Matrix-Supervised classification for principal components


IV. DISCUSSION

The digital analysis classified the sand area quite well (bright white) owing to its high reflectance, and there was also little confusion in identifying the rocky reef rubble; the real task was to differentiate between the sea grass and the rocky reef, a difficulty attributable to the shallow water and bottom reflectance. Pre-processing of the image included geometric corrections as well as masking the water body and coastal area to calibrate reflectance. Classical image analysis techniques [3] were performed, such as image enhancement, colour composites, ratios, filters, principal component analysis, and unsupervised and supervised classification. For unsupervised classification, ISODATA and one-pass clustering were used to provide a simple way of segmenting the multispectral data using its statistics; classic supervised classification (Maximum Likelihood) and filtering with noise-reduction operators were also tested. After comparison with the ground-truth data and generation of the error matrix (fig. 17), the best results, both visual and statistical, were obtained with supervised classification of the raw image, followed by unsupervised classification, while the principal components did not give very distinctive results.
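The clustering principle behind the unsupervised classification can be illustrated with plain k-means, a simpler stand-in for the ISODATA-style clustering used in the software (which additionally splits and merges clusters). The following sketch uses synthetic, well-separated spectral clusters as an assumption:

```python
import numpy as np

def kmeans(pixels, k, n_iter=20, seed=0):
    """Plain k-means over pixel spectral vectors: group pixels with
    similar digital numbers into k spectral classes."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(n_iter):
        # Assign each pixel to its nearest cluster centre.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centres; keep the old centre if a cluster empties.
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels, centers

# Two well-separated synthetic spectral clusters in three bands.
rng = np.random.default_rng(5)
a = rng.normal([30, 30, 30], 3, size=(200, 3))
b = rng.normal([120, 110, 100], 3, size=(200, 3))
pixels = np.vstack([a, b])
labels, centers = kmeans(pixels, k=2)
```

The resulting spectral classes carry no labels of their own; as described above, the analyst must still match each cluster to an informational class using maps, photographs, or ground truth.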

Supervised classification seems more effective because training sets influence the results, especially when classes are only marginally separable, such as sea grass and rocky reef. Supervised classification (fig. 15) picked up water and sand very well but mixed up sea grass and rocky reef to some extent, whereas unsupervised classification was unable to differentiate rocky reef, sand, and sea grass. Unsupervised classification also created some noise, making it still harder to separate sea grass from rocky reef and sand from rocky reef. A total of four classes were identified in the supervised classification scheme; Table 5 below shows the percentage distribution of each resource class under Maximum Likelihood.

TABLE 5. RESOURCE CLASSES FROM SUPERVISED CLASSIFICATION

    Resource Class       Cells     %
1   Rocky reef           18603     11.28
2   Rocky reef rubble    140990    85.48
3   Sea grass            833       0.51
4   Sand                 4509      2.73

For the supervised classification, classes were created to differentiate the various features. A total of 24 training sites were picked (a minimum of 5 per class, a requirement of the software for generating the statistical report). Selecting sea grass and rocky reef sites was difficult because of the very low spectral contrast caused by shallow water and shadow effects. For the unsupervised classification, an initial set of four classes was selected based on ground-truth data and visual interpretation of the image. Comparing the original image with the classified areas, the supervised classification is closer to the ground truth. A weakness of this classification, however, was that some spectral classes encompass more than one informational class: for example, part of the rocky reef had the same DN values as sea grass, and some bare rocky reef was labelled as sand (fig. 15) because of white sea foam and waves breaking over it. To deal with this, it might be better to increase the initial number of training sites.

To evaluate the classification results and verify the degree to which the derived thematic map would meet users' needs, a classification accuracy assessment was performed using a machine-assisted procedure (generating an error matrix from ground-truth data). To get a better idea of the broader classification accuracy, one could use a second set of ground-truth areas not used in the training set.

Today, computers are routinely used to handle the large volume and quantitative nature of raster data, yet images can still be usefully analyzed with classical photo-interpretation techniques [15]. Simply by visually comparing the images created with the different classification techniques, supervised classification appears to be the better choice in this example. Human error, limited knowledge of the study area, and other factors may all affect the quality of supervised classification results, but it remains the better technique here because it allows a high degree of interaction between the analyst and the machine, each compensating for the shortfalls of the other. The analyst is also given the opportunity to "control" which digital signatures best represent a given resource class, since the analyst holds the reference ground-truth data on which to base the final classification decisions.

V. CONCLUSION

Looking at one’s own fields or familiar areas in the imagery is the best way to gain experience in image interpretation. The interpreter must keep in mind that several features, such as sea grass, can change with time, and this affects how they are recorded in the imagery. Additionally, imagery is often collected under different atmospheric and sun-angle conditions across a season and will not show the same features exactly as they appeared previously. One way to recognize that this is happening is to look at features that should not change over a short turnaround time, such as gravel roads or fully leafed-out trees. This is particularly helpful when one is monitoring the health of a protected area such as a sanctuary. As Sullivan [16] states, “visual interpretation requires knowledge of the viewed scene, if only to define the objects and events to be recognized.” From the observations, manual visual image interpretation was as useful as, or more useful than, the principal component analysis and the supervised and unsupervised classifications performed. Similar results have been seen in related work: Philipson and Hafker [11] obtained comparable results when analyzing imagery visually (manually) and by computer (digital analysis). The current research thus demonstrates that computer-based analysis alone does not always outperform what a human can perceive through the complex eye-brain combination, especially in coastal areas where the water is


shallow and bottom reflection plays a tricky role in assigning distinct DNs to different features.

ACKNOWLEDGEMENTS

The author is grateful for the advice and assistance given by many people, especially Dr. Joe Leach and Mr. Ertan Yesilnacar (PhD student), Department of Geomatics, University of Melbourne, Australia, and all fellow remote sensing research students in the department.

REFERENCES

[1] Anderson, J. R., “A Land Use and Land Cover Classification System for Use with Remote Sensing Data”, US. Geological Survey Professional Paper 964. U. S. Gov. Printing Office, Washington, D. C., 1976.

[2] Avery, T.E. and G.L. Berlin, Fundamentals Of Remote Sensing and Airphoto Interpretation, Macmillan, 5th edition, New York, 1992, p 472.

[3] Drury S.A., Image Interpretation in Geology, Chapman and Hall, 2nd. edition, London , 1993.

[4] Huete, A.R. and R.D. Jackson, “Suitability of spectral indices for evaluating vegetation characteristics on arid rangelands”, Remote Sensing of Environment , 1987, 25: pp 295-309.

[5] Jensen, J. R., “Remote Sensing of the Environment”, Prentice Hall Publishing. USGS National Land Cover Characterization Project, 2002, http://landcover.usgs.gov/nationallandcover.html Retrieved on 04-05-2004.

[6] Jensen, J.R., “Introductory Digital Image Processing- A Remote Sensing Perspective”, 2nd Edition, Upper Saddle River, Prentice hall. New Jersey, 1996.

[7] Leach, J., “Lecture Notes on Remote Sensing”, Department of Geomatics, University of Melbourne, Melbourne, 2004.

[8] Lillesand, T. M. and R. W. Kiefer, “Remote Sensing and Image Interpretation”, Wiley and Sons, New York, 1994.

[9] Michener, W. K. and P. F. Houhoulis, “Detection of Vegetation Changes Associated with Extensive Flooding in a Forested Ecosystem”, Photogrammetric Engineering and Remote Sensing, 1997, 63 (12): pp 1363-1374.

[10] MicroImage (TNTmips-v.6.9-software), User Manual, Inc., 201 North 8th Street Lincoln, Nebraska 68508-1347 USA.

[11] Philipson, W. R. and W. R. Hafker, “Manual versus Digital Landsat Analysis for Delineating River Flooding”, Photogrammetric Engineering and Remote Sensing, 1981, 47 (9): pp 1351-1356.

[12] Ready, P.J. and P.A.Wintz, “Information extraction, SNR improvement and data compression in multispectral imagery”, IEEE Transaction on Communications, 1973, pp 1123-1133.

[13] Richards, J.A., Remote Sensing Digital Image Analysis: An Introduction, Second Edition, Berlin, Springer-Verlag, 1993, p 334.

[14] Robert, A., Remote Sensing: Models and Methods for Image Processing - Academic Press, San Diego, 1997.

[15] Short, N.M., The Landsat tutorial workbook. NASA Reference Publication, 1982, p 1073.

[16] Sullivan, G. D., “Visual Interpretation of known objects in constrained scenes”, Phil. Trans. R. Soc. Lon., B, 1992, 337: pp 361-370.

Appendix I

Fig 3. Raw Image Red Band Histogram
Fig 4. Linearly Stretched Red Band
Fig 5. Image Before Linear Stretch
Fig 6. Image After Linear Stretch
Fig 7. Low-Pass Filtered Image
Fig 8. High-Pass Filtered Image
Fig 9. Median Noise-Reduction Filtered Image
Fig 10. PC-1
Fig 11. PC-2
Fig 12. PC-3
Fig 13. PC Colour Composite
Fig 14. Decorrelation Stretch
Fig 15. Supervised Classification (raw image)
Fig 16. Un-supervised Classification (raw image)