hyperspectral signal processing to identify land cover pattern.pdf

Upload: ashoka-vanjare

Post on 02-Jun-2018


  • 8/10/2019 Hyperspectral Signal Processing to Identify Land Cover Pattern.pdf

    1/78

    VISVESVARAYA TECHNOLOGICAL UNIVERSITY,

    Belgaum- 590016,Karnataka

    PROJECT REPORT

    On

    Hyperspectral Signal Processing to Identify Land Cover Pattern

    Submitted in partial fulfilment of the requirements for the award of the degree of

    Bachelor of Engineering

    in

    Electrical and Electronics

    by

    D.R.RAGHURAM 1AR09EE010

    MEGHANA SUDHINDRA 1AR09EE026

    SUSHANT KULKARNI 1AR09EE044

    Under the Guidance of

    Internal Guide: External Guide:

    Smt R.S.SHARMILA Dr S.N.OMKAR

    Assistant Professor Principal Research Scientist

    Dept of EEE,AIeMS Dept of AE,IISc

    Department of Electrical and Electronics Engineering

    AMRUTA INSTITUTE OF ENGINEERING AND MANAGEMENT SCIENCES

    Near Toyota Kirloskar, Bidadi Industrial Zone, Off Mysore Road, Bangalore - 562109


    Acknowledgements

    For the successful completion of this project, many people have taken time out of their busy

    schedules and helped us. We would like to acknowledge their help and contribution.

    We express our sincere gratitude to Dr A.Prabhakar, Principal, AIeMS and Prof H.L.Dinakar,

    Head of Department, Electrical & Electronics Engineering, for encouraging us to carry out

    the project at a premier institute.

    We are indebted to our internal guide, Smt. Sharmila R.S, Assistant Professor for her

    guidance, and constant feedback and support from the very beginning of this project.

    We wholeheartedly thank our external guide, Dr. S.N.Omkar, Principal Research Scien-

    tist, Department of Aerospace Engineering, Indian Institute of Science for giving us an

    opportunity to do our project on such an interesting and upcoming field of research.

    We would also like to thank our Project Co-ordinators, Smt. Sulochana Akkalkot, As-

    sistant Professorand Ms.Rashmi S, Lecturer, for their valuable suggestions and encourage-

    ment.

    We are grateful to J.Senthilnath, Doctoral Student, Department of Aerospace Engineer-

    ing, Indian Institute of Sciencefor his invaluable guidance and Ashoka Vanjare, Research

    Assistant, Department of Aerospace Engineering, Indian Institute of Science for his con-

    stant and generous help.We would also like to thankNikil Rao, BITS, Pilani, fellow member

    of our project group. We are also thankful to our friends in the Computational Intelligence

    Lab, Department of Aerospace Engineering, IISc and all the teaching and non-teaching staff at

    our college who have directly or indirectly contributed to the success completion of this project.

    Above all, we are thankful to our parents, whose tireless efforts have culminated in us

    reaching this juncture of life.


Contents

List of Figures
List of Tables
Abstract
1 Introduction
  1.1 Methodology
  1.2 About hyperspectral signals
  1.3 Geographical area under study
  1.4 Outline of the Report
2 Literature Survey
3 Image Acquisition
  3.1 Sensor Models
    3.1.1 Scanner
    3.1.2 Imaging Optics
    3.1.3 Detectors
  3.2 Mathematical Model of the Instrument
  3.3 Hyperion Sensor
4 Spectral Profiles
  4.1 Water
  4.2 Vegetation
  4.3 Land
  4.4 Urban areas
5 Data Preparation
  5.1 Ground Truth
6 Atmospheric Correction
  6.1 Need for Atmospheric Correction
    6.1.1 Reflectance v/s Radiance
  6.2 Atmospheric Correction Approaches
    6.2.1 Scene Based Empirical Approach
    6.2.2 Radiation Transport Model Approach
  6.3 QUAC
  6.4 FLAASH
  6.5 Comparison of the various atmospheric techniques
    6.5.1 Explanation of the water spectral profile
7 Dimensionality Reduction
  7.1 Principal Components Analysis
    7.1.1 Mathematical Description
8 Classification Algorithms
  8.1 Methodology of classification
    8.1.1 Training
    8.1.2 Confusion matrix
  8.2 Training Data Set
  8.3 Spectral Angle Mapper
  8.4 Mahalanobis Distance Classifier
    8.4.1 Mathematical Background
  8.5 Minimum Distance Classifier
    8.5.1 Mathematical Background
  8.6 Maximum Likelihood Estimation
  8.7 K-Means Clustering Classification
    8.7.1 Algorithm steps for k-means clustering
9 Results and Conclusions
  9.1 Results
  9.2 Conclusions
10 Future Work
References
APPENDIX - A


List of Figures

Figure 1.1 Block diagram illustrating the process flow.
Figure 1.2 The electromagnetic spectrum.
Figure 1.3 The difference between hyperspectral images and other images.
Figure 1.4 The original image.
Figure 1.5 Subset image.
Figure 3.1 Components of a remote sensing system.
Figure 3.2 A whiskbroom scanner.
Figure 3.3 A pushbroom scanner.
Figure 3.4 The Hyperion sensor assembly.
Figure 4.1 Spectral profile of water.
Figure 4.2 Spectral profile of vegetation.
Figure 4.3 Spectral profile of land.
Figure 4.4 Spectral profile of built up areas.
Figure 5.1 Ground truth.
Figure 5.2 Vegetation appears as red in the false colour composite image.
Figure 6.1 Radiation entering a sensor.
Figure 6.2 Atmospheric corrected images.
Figure 6.3 Image prior to atmospheric correction.
Figure 6.4 Spectral profile of Ulsoor Lake before and after atmospheric correction.
Figure 7.1 Illustration of the PCA.
Figure 7.2 Pixel vector in PCA.
Figure 7.3 Principal components one and two.
Figure 7.4 Principal components three and four.
Figure 7.5 Principal components five and six.
Figure 8.1 Training dataset for ground truth image.
Figure 8.2 Illustration of the idea behind SAM.
Figure 8.3 Endmember collection spectra generated from ground truth image.
Figure 8.4 SAM classified output images.
Figure 8.5 Mahalanobis Distance classified output images.
Figure 8.6 Concept of Minimum Distance Classifier.
Figure 8.7 Minimum Distance classified output images.
Figure 8.8 Concept of Maximum Likelihood Classifier.
Figure 8.9 Maximum Likelihood output images.
Figure 8.10 k-means classification for FLAASH corrected image.
Figure 9.1 Graph illustrating the variation of Mahalanobis classification accuracy with number of principal components.
Figure 9.2 Graph illustrating the variation of Minimum Distance classification accuracy with number of principal components.
Figure 9.3 Graph illustrating the variation of SAM accuracy with number of principal components.
Figure 9.4 Graph illustrating the variation of Maximum Likelihood classification accuracy with number of principal components.
Figure 9.5 Graph illustrating the variation of Mahalanobis classification accuracy with number of principal components for QUAC corrected image.
Figure 9.6 Graph illustrating the variation of Minimum Distance classification accuracy with number of principal components for QUAC corrected image.
Figure 9.7 Graph illustrating the variation of Maximum Likelihood classifier accuracy with number of principal components for QUAC corrected image.
Figure 9.8 Graph illustrating the variation of SAM classification accuracy with number of principal components for QUAC corrected image.


List of Tables

Table 1.1 The three colours and their wavelengths.
Table 1.2 Differences between multispectral signals and hyperspectral signals.
Table 1.3 Table listing the details of the subset image.
Table 5.1 Removed bands and the reason for their removal.
Table 5.2 The various bands used and their uses.
Table 6.1 Details of various parameters required to perform FLAASH.
Table 8.1 SAM Classification efficiency for FLAASH corrected image.
Table 8.2 SAM Classification efficiency for QUAC corrected image.
Table 8.3 Mahalanobis Distance Classification efficiency for FLAASH corrected image.
Table 8.4 Mahalanobis Distance Classification efficiency for QUAC corrected image.
Table 8.5 Minimum Distance Classification efficiency for FLAASH corrected image.
Table 8.6 Minimum Distance Classification efficiency for QUAC corrected image.
Table 8.7 Maximum Likelihood Classification efficiency for FLAASH corrected image.
Table 8.8 Maximum Likelihood Classification efficiency for QUAC corrected image.
Table 8.9 k-means Classification efficiency for FLAASH corrected image.
Table 9.1 Classification accuracy for FLAASH and QUAC correction modules.


    2012-13 Hyperspectral Signal Processing Approach to Identify Land Cover Pattern

    Abstract

Land cover assessment plays a vital role in several issues pertaining to policy making, directly and indirectly affecting the lives of many people. Accurate information pertaining to land cover is therefore of great importance.

The aim of this project is to automate the process of classifying land cover into different classes using images acquired by a hyperspectral sensor. The four classes of interest are: water, vegetation, barren land, and built-up areas.

This is a Level 1 classification, meaning that the main classes are not further subclassified into other classes. The hyperspectral images are passively sensed using optical signals. The sensed signals are processed to remove the effects of the atmosphere.

Atmospheric correction is performed using two techniques, namely QUAC (Quick Atmospheric Correction) and FLAASH (Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes). Dimensionality reduction is done through Principal Components Analysis (PCA).

Automatic classification of the image into the target classes was done using supervised as well as unsupervised algorithms. The supervised algorithms used include the Mahalanobis Distance Classifier and the Spectral Angle Mapper. The unsupervised algorithm used was the k-means algorithm.

The results indicate that the supervised methods are the best classifiers and that, among the atmospheric correction techniques, the FLAASH correction module is the best one.


    Chapter 1

    Introduction

Land cover assessment and analysis plays a vital role in framing various national policies of governments across the globe. Information obtained from land cover analysis pertains to the environment, agricultural patterns, estimation of forest cover, and preparation of digital maps, and is also used for military purposes.

    Policies and government decisions regarding various issues pertaining to the aforemen-

    tioned domains are based on the information available at hand. Therefore, accurate realisation

    of land cover is of vital importance, as any wrong or inaccurate information results in bad poli-

    cies and decisions jeopardising the security of the nation and endangering the livelihoods of

    millions.

Land cover refers to what is actually present on the ground, describing the physical state of the earth's surface and immediate subsurface in terms of the natural environment (water, vegetation, rocks, etc.) and man-made objects.

Land cover shouldn't be confused with the term land use, even though the two are often used interchangeably. Land cover provides information as to what features are present on the ground, whereas land use describes how those features are put to use. For example, an area whose land cover is forest may have a land use of recreation or timber production.

Prior to automatic classification, land cover and land use assessment was done manually by analysing aerial photographs of the area to be studied. As sensor technology advanced and the volume of imagery grew, the need for automatic classification techniques was felt.

There are several advantages to using automatic classification techniques. Previously, multispectral sensors were used to acquire images; a multispectral sensor captures on average ten bands of the same area. This is not the case with hyperspectral images: a hyperspectral image contains hundreds of bands of the same area, with each band separated by a very small wavelength interval. Manually analysing hundreds of bands is an impossible task. Moreover, humans are prone to errors and bias, which negatively influence the accuracy of classification. Therefore, the only viable solution is automatic classification.

    1.1 Methodology

Prior to classification, various other procedures have to be performed; these are illustrated in the form of a block diagram in Fig 1.1. The steps, in logical order, are as follows:

    1. Data Preparation

    2. Pre-Processing and Atmospheric Correction

    3. Dimensionality Reduction

    4. Classification

    5. Validation using Ground Truth

    Figure 1.1: Block diagram illustrating the process flow.
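The five steps above can be sketched as a minimal processing pipeline. The stage bodies below are illustrative stand-ins, not the actual FLAASH/QUAC, PCA, or classifier implementations used in this project; they simply operate on a NumPy cube of shape (rows, cols, bands) to show how the stages chain together.

```python
import numpy as np

# Hypothetical sketch of the processing chain in Fig 1.1.
# Each stage body is a simplified stand-in for the real step.

def prepare_data(cube, bad_bands=()):
    # Data preparation: drop noisy or uncalibrated bands.
    keep = [b for b in range(cube.shape[2]) if b not in bad_bands]
    return cube[:, :, keep]

def atmospheric_correction(cube):
    # Stand-in for FLAASH/QUAC: scale values toward [0, 1] reflectance.
    return cube / cube.max()

def reduce_dimensionality(cube, n_components=3):
    # Stand-in for PCA: keep only the first n_components bands.
    return cube[:, :, :n_components]

def classify(cube, n_classes=4):
    # Stand-in classifier: bin the first band into n_classes labels.
    band = cube[:, :, 0]
    return np.digitize(band, np.linspace(0, 1, n_classes + 1)[1:-1])

def run_pipeline(cube, bad_bands=()):
    cube = prepare_data(cube, bad_bands)
    cube = atmospheric_correction(cube)
    cube = reduce_dimensionality(cube)
    return classify(cube)

# Toy 10x10 scene with 8 bands, one of them marked bad.
labels = run_pipeline(np.random.rand(10, 10, 8), bad_bands=(5,))
```

Validation against ground truth (step 5) would then compare `labels` pixel-by-pixel with a reference map, typically via a confusion matrix as described in Chapter 8.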


The flow illustrated in the block diagram was arrived at by referring to the literature and to the requirements imposed by the specifications of the dataset we possessed [1].

    1.2 About hyperspectral signals

Normal images are acquired in a very small portion of the electromagnetic spectrum, namely the visible region. The visible region comprises wavelengths corresponding to three colours: red, blue, and green. The wavelengths corresponding to the three colours are given in Table 1.1.

Colour    Wavelength
Red       700 nm
Blue      400 nm
Green     500 nm

Table 1.1: The three colours and their wavelengths.

The area occupied by the visible region is a very small portion of the electromagnetic spectrum, as can be seen from Fig 1.2.

    Figure 1.2: The electromagnetic spectrum.

Hyperspectral sensors acquire information, in this case images, in the visible region as well as the infrared region. The main advantage of using hyperspectral signals is that the difference between successive wavelengths (also referred to as the spectral resolution) is very small, approximately 10 nm. This fine spectral resolution results in a large amount of information being acquired. A better way to visualise the differences between hyperspectral images and other forms of remote sensing is illustrated in Fig 1.3.


    Figure 1.3: The difference between hyperspectral images and other images.

Hyperspectral imaging is also referred to as imaging spectroscopy, due to the fact that different materials can be reasonably identified using their spectrum. Mathematically, the spectrum of a continuous signal x(t) can be written as [2]:

X(ω) = ∫_{−∞}^{+∞} x(t) e^{−jωt} dt   (1.1)

Eq 1.1 is also known as the Fourier transform of the signal x(t).
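As a discrete illustration of Eq 1.1, the spectrum of a sampled signal can be computed with NumPy's FFT. The 5 Hz cosine below is purely an illustrative test signal, not data from this project.

```python
import numpy as np

# Discrete analogue of Eq 1.1: the FFT approximates the Fourier
# transform of a sampled signal x(t).
fs = 100.0                       # sampling rate in Hz (assumed)
t = np.arange(0, 1, 1 / fs)      # one second of samples
x = np.cos(2 * np.pi * 5 * t)    # illustrative 5 Hz test signal

X = np.fft.fft(x)                          # discrete spectrum
freqs = np.fft.fftfreq(len(x), d=1 / fs)   # frequency axis in Hz

# The magnitude spectrum of a 5 Hz cosine peaks at 5 Hz.
peak = freqs[np.argmax(np.abs(X[:len(x) // 2]))]
```

For a hyperspectral pixel, the analogous idea is that each material has a characteristic reflectance spectrum over wavelength, which is what makes identification by spectrum possible.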

Multispectral                                    Hyperspectral
Discrete coverage of the spectrum                Continuous coverage of the spectrum
Spectral resolution around 100 nm                Spectral resolution around 10-40 nm
Lower information content                        Greater information content

Table 1.2: Differences between multispectral signals and hyperspectral signals.

    1.3 Geographical area under study

The area of study is an image of Bangalore acquired in the year 2002 by the Hyperion sensor onboard the EO-1 satellite. The image ranges from Nandi Hills in the north to the outskirts of Kanakapura in the south, with spatial dimensions of 911 x 3191 pixels and a spectral dimension of 196 bands.

A subset of the image was chosen, primarily due to the lack of ground truth for the outlying areas of the image; any subsequent ground truth prepared would not have reflected those outlying areas as of 2002. The subset varies from the outskirts of Yelahanka in North Bangalore to the areas adjoining Bannerghatta National Park in South Bangalore. Referring


    to Google Earth, we observed no significant changes in these areas, and hence these particular

    areas were chosen to form the subset image.

Details of the subset image are given in Table 1.3, along with the original image (Fig 1.4) and the subset image (Fig 1.5). The table lists various parameters, such as the latitude, longitude, and spatial dimensions. The black portion in Fig 1.4 is indicative of areas which do not fall under the sensor's view.

Parameter                  Details
Latitude of UL corner      13.07721667 N
Longitude of UL corner     77.57472500 E
Latitude of LR corner      12.91586889 N
Longitude of LR corner     77.61036142 E
Rows                       750
Columns                    500
Spatial resolution         30 m x 30 m

Table 1.3: Table listing the details of the subset image.
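Given the row/column counts and the 30 m pixel size in Table 1.3, the ground footprint of the subset follows from simple arithmetic:

```python
# Ground extent of the subset image, from the Table 1.3 parameters:
# number of pixels along each axis times the 30 m pixel size.
rows, cols = 750, 500
pixel_size_m = 30

north_south_km = rows * pixel_size_m / 1000   # extent along rows
east_west_km = cols * pixel_size_m / 1000     # extent along columns
```

This gives a subset roughly 22.5 km north-south by 15 km east-west.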

    Figure 1.4: The original image.


    Figure 1.5: Subset image

    1.4 Outline of the Report

    The rest of the report is arranged as follows:

Chapter 2 describes previous attempts at land cover classification, along with a section on other applications of hyperspectral signals.

Chapter 3 describes in detail the methods of image acquisition.

Spectral profiles are the pillars of imaging spectroscopy, and we discuss them in Chapter 4.

Chapter 5 describes the way the acquired data is prepared for pre-processing.

In Chapter 6, the techniques of atmospheric correction, the need for atmospheric correction, and the effects of atmospheric correction on the image are discussed.

Chapter 7 talks about dimensionality reduction in general, and Principal Components Analysis in particular.

Chapter 8 deals with the various algorithms used for classification and the mathematical background of those algorithms.

The results obtained for the various classification algorithms are summarised and discussed in Chapter 9.

Chapter 10 indicates the road ahead for hyperspectral imaging research.


    Chapter 2

    Literature Survey

In this chapter, we take a look at previous attempts at land cover classification using satellite images and, later in the chapter, look at the many uses of hyperspectral images to gauge the broad spectrum of areas across which hyperspectral signals are being used.

Remotely sensed images, be they multispectral or hyperspectral, are nowadays used for many purposes; land cover classification is one of them. Uttam Kumar et al. have given a clear picture of the detailed history behind land cover classification using satellite images in [3].

Land cover classification using satellite images has a long history. The first use of satellite images for land use classification began in the late 1970s and early 1980s as part of the National Mapping Program of the United States Geological Survey, based on images acquired by NASA. The images were acquired using aerial photography, and classification was carried out manually by analysts [3].

Later, in 1991, the entire country of China was mapped to produce a digital map containing 20 different land use/land cover classes. This was done using field surveys, satellite images, and aerial images to understand land cover change [4].

More attempts at land use/land cover classification include the National Land Cover Database, which provides a standardised land cover database for South Africa, Swaziland and Lesotho using Landsat images acquired in 1994-95, and the Global Vegetation Monitoring unit of the JRC, Ispra, Italy, which generated a global land cover/land use map in the year 2000 [3].

In India, land use and land cover study using remote sensing has been initiated by the Indian Space Research Organisation's remote sensing satellite Resourcesat and by the National Remote Sensing Agency, Department of Space. The major classes of interest were agricultural areas, surface water bodies, waste lands, forests, etc. This was carried out at a national level using multi-temporal IRS (Indian Remote Sensing) AWiFS (Advanced Wide Field Sensor) datasets to provide, on an annual basis, net sown areas for different cropping seasons and an integrated land cover map [4].

Senthilnath et al. have used multispectral satellite images [5] to determine land cover over the city of Bangalore in the year 2010. Uttam Kumar et al. [3] have used hyperspectral data acquired by MODIS to assess land use pattern over the district of Chikkaballapur in the state of Karnataka, India.

Other uses of hyperspectral signals include assessing the impact of climate change [6], detecting the type of crops and their growth stage [7], [8], medical imaging [9], and face recognition [10]. The applications are not limited to the aforementioned areas; hyperspectral signals can be used in many others, limited only by our imagination.

To the best of our knowledge, assessing land cover using hyperspectral data acquired by the Hyperion sensor hasn't been done till now.


    Chapter 3

    Image Acquisition

The sensor used to acquire the image is the Hyperion sensor onboard the Earth Observing-1 (EO-1) satellite, launched by NASA. The Hyperion sensor is a hyperspectral sensor.

    3.1 Sensor Models

The sensor used in any satellite-based or airborne imager may be modelled as depicted in the figure below. The model has been adapted from [11].

    Figure 3.1: Components of a remote sensing system.

    3.1.1 Scanner

There are basically two types of scanning methods: a) along track scanning and b) across track scanning.

    Department of Electrical and Electronics Engineering, AIeMS 9

  • 8/10/2019 Hyperspectral Signal Processing to Identify Land Cover Pattern.pdf

    20/78

    2012-13 Hyperspectral Signal Processing to Identify Land Cover Pattern

    Across track scanning

Across track scanning uses a rotating or oscillating mirror which scans the terrain along lines at right angles to the flight line. Successive scan lines are covered as the aircraft moves forward, yielding a series of contiguous strips that form a 2-D image. These types of sensors are also called whiskbroom scanners (Fig 3.2).

    Figure 3.2: A whiskbroom scanner.

At any instant, the scanner detects energy within its Instantaneous Field Of View (IFOV), which is normally expressed as the cone angle within which incident energy is focussed on the detector. The IFOV is determined by the instrument's optical system and the size of the detectors. The diameter of the ground area viewed is

D = hβ   (3.1)

where
D = diameter of the circular ground area viewed,
h = flying height above the terrain,
β = IFOV in radians.
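Eq 3.1 can be illustrated numerically. Using EO-1's orbital altitude of roughly 705 km (a publicly stated mission figure) and the 30 m ground pixel given in Table 1.3, we can recover the IFOV implied by the equation; this is a consistency check, not Hyperion's officially specified IFOV.

```python
# Eq 3.1: D = h * beta, solved for beta.
# h taken as ~705 km (EO-1 orbital altitude, public mission figure);
# D taken as the 30 m ground pixel from Table 1.3.
h_m = 705_000      # flying height above terrain, in metres
D_m = 30           # diameter of ground area viewed, in metres

beta_rad = D_m / h_m            # IFOV in radians
beta_urad = beta_rad * 1e6      # IFOV in microradians (~42.6)
```

A smaller IFOV at the same altitude would give a smaller ground diameter D, i.e. finer spatial resolution.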

    Along track scanning

In along track scanning, the sensor unit consists of a linear array of detectors, typically Charge Coupled Devices (CCDs). The direction of scanning is the same as the direction of motion of the imaging platform, hence the name along track scanning. These types of sensors


are also called pushbroom scanners (Fig 3.3).

    Figure 3.3: A pushbroom scanner.

The main advantage of these kinds of sensors is that, since they do not contain any moving parts, their service life is long, and they have a longer dwell time, i.e., the time spent on each scan line. A longer dwell time implies that the recorded signal strength is high. Due to these advantages over across track scanning, the Hyperion sensor uses the along track scanning method [12].

    3.1.2 Imaging Optics

Optics usually describes the behaviour of the visible, ultraviolet, and infrared light used in imaging. Here, an optical system is used to project the information collected by the sensor onto the detectors. Before the optical signals are projected onto the detectors, they are split into their constituent regions of the electromagnetic spectrum with the help of prisms and a dichroic grating. The dichroic grating splits the optical signal into thermal and non-thermal regions. A prism is then used to split the non-thermal optical signals into the visible, ultraviolet (UV), and near-infrared regions.

    3.1.3 Detectors

Due to the sensor platform and scan mirror velocity, the various sample timings for bands and pixels, the need to physically separate different spectral bands, and the limited space available on the focal plane,


the detectors are often arranged in different patterns as per requirement. Each detector integrates the energy that strikes its surface (irradiance) to form a measurement at each pixel. The integrated irradiance at each pixel is then converted into an electrical signal and quantized as an integer value, called a Digital Number (DN). A finite number of bits, Q, is used to code the continuous data measurements as binary numbers. The number of digital numbers is given by,

N_DN = 2^Q (3.2)

and the DN can be any integer in the range,

DN_range = [0, 2^Q − 1] (3.3)

The larger the value of Q, the more closely the quantized data approximates the original continuous signal generated by the detectors, and the higher the radiometric resolution of the sensor[11].
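The relationship between the number of bits Q, the available DN levels, and the quantization of a continuous measurement can be sketched as follows (the full-scale value and the measurements are hypothetical, not tied to any particular detector):

```python
def dn_levels(q_bits: int) -> int:
    """Number of digital numbers available with Q bits (Eq 3.2)."""
    return 2 ** q_bits

def quantize(irradiance: float, full_scale: float, q_bits: int) -> int:
    """Map a continuous measurement onto the DN range [0, 2**Q - 1] (Eq 3.3)."""
    dn_max = dn_levels(q_bits) - 1
    clipped = max(0.0, min(irradiance, full_scale))  # out-of-range values saturate
    return round(clipped / full_scale * dn_max)

print(dn_levels(12))           # 4096 levels for a hypothetical 12-bit sensor
print(quantize(0.5, 1.0, 12))  # mid-scale measurement -> DN 2048
```

Doubling Q squares the number of levels, which is why radiometric resolution grows quickly with bit depth.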

    3.2 Mathematical Model of the Instrument

No instrument can measure the signal it has sensed with 100% accuracy, because the signal is always varying as a function of some parameter. This parameter, in the case of remote sensing, is usually time, wavelength, or space. So, to obtain the output signal, the instrument must integrate the signal over a non-zero parameter range. This can be written as[11]:

o(z0) = ∫_W i(z) r(z0 − z) dz (3.4)

where i is the input signal,
r(z0 − z) is the instrument response, inverted and shifted by z0,
o(z0) is the output of the instrument at z = z0,
and W is the range over which the integral is significant; W also depends on the parameter of integration.

What the above equation means is that the output signal of the instrument is the convolution of the input signal (the signal being sensed) and the instrument response. Written more concisely, Eq 3.4 can be rewritten as

o(z0) = i(z) ∗ r(z) (3.5)
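Since the instrument output is the convolution of the input with the instrument response, a discrete sketch (with a made-up input and a made-up three-sample response) shows how a sharp feature gets smeared:

```python
def convolve(signal, response):
    """Discrete analogue of Eq 3.5: o[n] = sum_k signal[k] * response[n - k]."""
    n_out = len(signal) + len(response) - 1
    out = [0.0] * n_out
    for n in range(n_out):
        for k in range(len(signal)):
            j = n - k
            if 0 <= j < len(response):
                out[n] += signal[k] * response[j]
    return out

# A sharp input feature smeared by a hypothetical 3-sample instrument response
i_z = [0.0, 0.0, 1.0, 0.0, 0.0]
r_z = [0.25, 0.5, 0.25]
print(convolve(i_z, r_z))  # the single impulse is spread over three output samples
```

Because the response weights sum to one, the total energy of the input is preserved; only its sharpness is lost.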


    3.3 Hyperion Sensor

As discussed above, the Hyperion sensor is a pushbroom sensor. Each pushbroom image frame captures an area of approximately 30 m along track by 7.7 km cross track[12]. The Hyperion optics is a three-mirror anastigmat design with a 12-cm primary aperture and an f-number of 11. The sensor acquires the signals reflected off the earth's surface from an altitude of 705 km. Hyperion also has two onboard spectrometers, a VNIR spectrometer and a SWIR spectrometer, temperature controlled at 293 K and 283 K for the SWIR and VNIR spectrometers respectively.

    Figure 3.4: The Hyperion sensor assembly


    Chapter 4

    Spectral Profiles

A spectral profile is a graph of reflectance versus wavelength. It indicates the wavelengths at which a material has maximum or minimum reflectance.

    4.1 Water

The variation between the spectral profiles of the four classes is explained as follows. Water acts as an absorber in the IR (infrared) region, which extends from 700 nm to 1 mm. Maximum water absorption occurs at 1450 nm, which lies in the SWIR part of the IR region, as can be seen from Fig 4.1.

    Figure 4.1: Spectral profile of water.

    4.2 Vegetation

    The spectral profile of vegetation is a function of the chlorophyll content present. In this case,

    a minor peak occurs at the wavelength corresponding to the colour green. This indicates that


    the vegetation is photosynthetically active. The troughs at the red and blue wavelengths occur

    because the wavelengths in these regions are absorbed to satisfy the energy requirement for

    photosynthesis. The reflectance remains very high in the NIR region, because of interaction

    between the leaf tissues and electromagnetic radiation.

    A dip can be seen in the SWIR region, because water absorption, due to water present in the

    leaves and stem, predominates at these wavelengths.

    Figure 4.2: Spectral profile of vegetation.

    4.3 Land

The reflectivity of barren land depends on the soil type, moisture content, soil texture, and organic matter present in the soil. From Fig 4.3, it can be seen that there is maximum reflectivity in the NIR region, and a very low reflectivity in the SWIR region. This indicates that the land under study has a very high moisture content.

    Figure 4.3: Spectral profile of land.


    4.4 Urban areas

Urban areas and land have similar spectral properties. The reflectivity of built-up areas is again dependent on various factors, such as the type of material used and the moisture content present in the materials. Here, the spectral profile of urban areas indicates a high moisture content in the materials.

    Figure 4.4: Spectral profile of built up areas.


    Chapter 5

    Data Preparation

For the Level 1 radiometric image, out of the 242 bands used to acquire the image, only 198 bands have been calibrated. The calibrated channels are 8-57 for the VNIR and 77-224 for the SWIR. Not all channels are calibrated, because of the detectors' low response, and the bands that are not calibrated are set to zero. The remaining 44 uncalibrated bands do not contain any useful information. The bands which are removed, and the reasons for their removal, are listed in Table 5.1 below.

Removed bands    Reason for removal
Bands 1-7        No useful information
Bands 58-76      No useful information
Bands 77-78      Spectral overlap
Bands 56-57      Spectral overlap
Bands 225-242    No useful information

Table 5.1: Removed bands and the reason for their removal.

The digital values of Level 1 images are 16-bit radiances, stored as 16-bit signed integers. After all the unwanted bands are removed, the remaining 196 bands are stacked on top of one another. The stacked image is stored in the Band Interleaved by Line (BIL) format, using ENVI 4.7. The wavelengths and the Full Width at Half Maximum (FWHM) for each band are specified using data from the Hyperion Data Format Control Book[13].

The human eye can detect only three colours and their various combinations, hence the image is visualised in three bands.


Band Numbers                                      Use
Band 29 (Red), Band 23 (Green), Band 16 (Blue)    True Colour Composite
Band 50, Band 23, Band 16                         False Colour Composite (vegetation appears as red)

Table 5.2: The various bands used and their uses.

A subset of the image was taken. The primary reason for this was that no ground truth was available for the entire image; any subsequent ground truth prepared would not reflect changes in the outskirts of the city. Another reason for using a subset was that analysis of the entire image was computationally infeasible.

    5.1 Ground Truth

Ground truth refers to what is actually present on the ground. The ground truth is used as a reference to validate the classification accuracy of the different algorithms used.

The ground truth was prepared by dividing the subset image into 40 clusters using the k-means technique. Using Google Earth images of Bangalore acquired in 2002, and false colour composites of the subset image, the 40 clusters were merged into four classes corresponding to water, vegetation, built-up area and land.
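The clustering step can be sketched with a minimal one-dimensional k-means; the pixel values and the two-cluster setup below are hypothetical (the actual ground truth used 40 clusters over full spectra):

```python
import random

def kmeans_1d(values, k, iters=20, seed=0):
    """Minimal 1-D k-means: alternately assign points to the nearest
    centroid, then recompute each centroid as its cluster mean."""
    rng = random.Random(seed)
    centroids = rng.sample(values, k)  # pick k distinct starting points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda c: abs(v - centroids[c]))
            clusters[nearest].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Hypothetical per-pixel reflectance values from two distinct cover types
pixels = [0.05, 0.06, 0.04, 0.81, 0.79, 0.80]
print(kmeans_1d(pixels, k=2))  # converges to the two class means
```

The same assign/recompute loop generalises to N-band pixel vectors by replacing the absolute difference with a Euclidean distance.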

    Figure 5.1: Ground truth.


The false colour composite was used to find the areas containing vegetation, since vegetation appears as red in the false colour composite image. This image is shown below (Fig 5.2).

    Figure 5.2: Vegetation appears as red in the false colour composite image.


    Chapter 6

Atmospheric Correction

The process which transforms the data from spectral radiance to spectral reflectance is known as Atmospheric Correction, Compensation or Removal[14]. Hyperion images are a rich source of information, contained in hundreds of narrow contiguous spectral bands. A number of atmospheric agents contaminate the information contained in the various bands. Therefore, to take full advantage of Hyperion data, the effects of such atmospheric agents on earth observation data must be removed.

    6.1 Need for Atmospheric Correction

The earth's atmosphere is not clear, but contains dust particles, aerosols, water vapour molecules, carbon particles, etc., which alter the path and the amount of radiation both between the source (the sun) and the pixel under observation, and between the pixel under observation and the sensor. Hence the actual parameters are not acquired, and these effects must be corrected in order to recover the original information. This is illustrated in Fig 6.1.

    Figure 6.1: Radiation entering a sensor.


where, S1 is the radiance to be observed,
S2 is the radiance from atmospheric dispersion,
S3 is the path radiance.

Atmospheric correction is a critical pre-processing step, since most approaches have been implemented using spectral libraries or field spectra[14]. If atmospheric correction is not performed, there is a marked difference between the observed spectral irradiance and the spectral library or field spectra. These differences will negatively influence the accuracy of any classification carried out based on field spectra or an independent spectral library.

The atmosphere affects the brightness, or radiance, recorded over any point on the ground in two almost contradictory ways when a sensor records reflected solar energy. First, it attenuates the energy illuminating a ground object at particular wavelengths, thus decreasing the radiance that can be measured. Second, the atmosphere acts as a reflector itself, adding a scattered, extraneous path radiance to the signal detected by the sensor, which is unrelated to the properties of the surface. By expressing these two atmospheric effects mathematically, the total radiance recorded by the sensor may be related to the reflectance of the ground object and the incoming radiation, or irradiance, using the following equation:

L_tot = ρET/π + L_p (6.1)

where,
L_tot = total spectral radiance measured by the sensor
ρ = reflectance of the object
E = irradiance on the object (incoming energy)
T = transmission of the atmosphere
L_p = path radiance, from the atmosphere and not from the object
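A minimal numeric sketch of Eq 6.1, assuming a Lambertian surface (hence the 1/π factor) and made-up values for the reflectance, irradiance, transmission and path radiance:

```python
import math

def total_radiance(rho, E, T, L_path):
    """Eq 6.1: at-sensor radiance = reflected-ground term + atmospheric path term.
    The 1/pi factor converts irradiance reflected by a Lambertian surface
    into radiance."""
    return rho * E * T / math.pi + L_path

# Hypothetical values: 30 % reflectance, unit irradiance, 80 % atmospheric
# transmission, and a small additive path radiance
print(total_radiance(rho=0.3, E=1.0, T=0.8, L_path=0.02))
```

Note that the path-radiance term L_p adds to the signal even when ρ = 0, which is why atmospheric scattering brightens dark targets.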

All these factors depend on wavelength. The irradiance (E) stems from two sources: directly reflected sunlight and diffuse skylight (sunlight scattered by the atmosphere). The relative dominance of sunlight versus skylight in a given image is strongly dependent on weather


    conditions. The irradiance varies with the seasonal changes in solar elevation angle and the

    changing distance between the earth and the sun[3].

    The magnitude of absorption and scattering varies from place to place and time to time

    depending on the concentrations and particle sizes of the various atmospheric constituents.

The end result is that the raw radiance values observed by a hyperspectral sensor cannot be directly compared to laboratory spectra or to remotely sensed hyperspectral imagery acquired at

    other times or places. Before such comparisons can be performed, an atmospheric correction

    process must be used to compensate for the transient effects of atmospheric absorption and

    scattering.

    6.1.1 Reflectance v/s Radiance

In all the atmospheric correction techniques described in this report, radiance is converted to reflectance. Information, when first acquired, is in the form of radiance.

Radiance, also known as spectral radiance, is a measure of the quantity of radiation that passes through, or is emitted from, a surface and falls within a given solid angle in a specified direction. Radiance has units of W·sr⁻¹·m⁻³. The radiation of an object can be affected by the radiance of other objects in its surroundings. Energy transfer occurs between different objects, and thus the signal acquired does not truly reflect the object under observation. Fig 6.1 gives a pictorial representation of this explanation.

Reflectance is a measure of the ability of a surface to reflect light or other electromagnetic radiation, equal to the ratio of the reflected flux to the incident flux. Reflectance values give a true picture of the object under study, because the reflectance value of an object is unique; the reflectance values acquired across the entire spectral range of the sensor give us the spectral signature of the object under study.

    6.2 Atmospheric Correction Approaches

Atmospheric correction may be applied by collecting information from the scene itself, i.e. Scene Based Empirical Approaches, or by modelling radiation transmission through the atmosphere, i.e. the Radiation Transport Model based Approach.

    6.2.1 Scene Based Empirical Approach

These approaches are based on the radiance values present in the image, i.e. the scene, hence the name Scene Based Empirical Approaches. IAR (Internal Average Reflectance), ELM (Empirical Line Method) and QUAC (Quick Atmospheric Correction) are some of the major examples of Scene Based Empirical Approaches.

    6.2.2 Radiation Transport Model Approach

The Scene Based Empirical Approaches do not generally produce good results, as they rest on a linearity assumption, which presumes uniform atmospheric transmission, scattering and adjacency effects throughout the atmosphere, and this may not be accurate. A Radiation Transport Model, in contrast, tries to understand and remove the effects of the major atmospheric processes that interact with radiation, such as absorption and scattering. Very effective and recent models are MODTRAN (MODerate resolution atmospheric TRANsmission) and FLAASH (Fast Line of Sight Atmospheric Analysis of Spectral Hypercubes).

The QUAC and FLAASH atmospheric correction modules were applied to the stacked image. These two methods were chosen because they are effective for atmospheric correction of multispectral and hyperspectral data. The two correction modules were implemented using ENVI 4.7.

    6.3 QUAC

QUAC, i.e. Quick Atmospheric Correction, is a scene based empirical approach. It determines the atmospheric compensation parameters directly from the information contained within the scene, using the observed pixel spectra. It requires only an approximate specification of the sensor band locations (i.e., central wavelengths) and their radiometric calibration; no additional information is required[15]. The approach is based on the empirical finding that the spectral standard deviation of a collection of diverse material spectra, such as the constituent material spectra in a scene, is essentially spectrally flat.

It allows the retrieval of reasonably accurate reflectance spectra even when the sensor does not have a proper radiometric or wavelength calibration, or when the solar illumination intensity is unknown. The computational speed of the method is significantly faster than that of first-principles, physics based approaches, making it potentially suitable for real-time applications. The aerosol optical depth retrieval method, unlike most prior methods, does not require the presence of dark pixels. QUAC creates an image of retrieved surface reflectance, scaled into two-byte signed integers using a reflectance scale factor of 10,000.
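This scaling convention can be sketched as follows (the DN value is hypothetical):

```python
SCALE = 10_000  # QUAC reflectance scale factor

def dn_to_reflectance(dn: int) -> float:
    """Convert a QUAC two-byte signed integer back to fractional reflectance."""
    return dn / SCALE

def reflectance_to_dn(rho: float) -> int:
    """Inverse mapping, clipped to the signed 16-bit range."""
    return max(-32768, min(32767, round(rho * SCALE)))

print(dn_to_reflectance(4500))   # 0.45, i.e. 45 % reflectance
print(reflectance_to_dn(0.45))   # 4500
```

Storing reflectance as scaled integers halves the file size relative to 32-bit floats while keeping a 0.01 % reflectance resolution.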

    6.4 FLAASH

Fast Line of Sight Atmospheric Analysis of Spectral Hypercubes, dubbed FLAASH, is an atmospheric correction technique developed by the Air Force Research Laboratory, Space Vehicles Directorate (AFRL/VS), U.S. Air Force, to support the analysis of VNIR to SWIR (0.4 μm to 3 μm) hyperspectral and multispectral imaging sensors.

The main objectives are to provide an accurate, physics-based derivation of atmospheric properties such as surface pressure, water vapour column, and aerosol and cloud overburdens; to incorporate those same quantities into a correction matrix; and, finally, to invert radiance-at-detector measurements into reflectance-at-surface values. Atmospheric correction serves a critical role in the processing of remotely sensed image data, particularly with respect to the identification of pixel content.

Efficient and accurate realisation of images in units of reflectance, rather than radiance, is essential for building consistency into the development, maintenance, distribution, and analysis of any library of such images acquired under a variety of measurement conditions. Unlike


other atmospheric correction algorithms, which interpolate radiation transfer properties from a pre-calculated database of modelling results, FLAASH incorporates the MODTRAN-4 radiation transfer code. The user is given the option of choosing any one of the standard MODTRAN atmosphere models and aerosol types to represent the scene, and a unique MODTRAN solution is computed for each image.

FLAASH processes radiance images with spectral coverage from the mid-IR through to UV wavelengths, where thermal emission can be neglected. For this situation, the spectral radiance L* at a sensor pixel may be parameterised as

L* = Aρ/(1 − ρ_e S) + Bρ_e/(1 − ρ_e S) + L_a (6.2)

where,
ρ = pixel surface reflectance
ρ_e = average surface reflectance for the surrounding region
S = spherical albedo of the atmosphere (capturing back-scattered, surface-reflected photons)
L_a = radiance back-scattered by the atmosphere without reaching the surface
A, B = surface-independent coefficients that vary with atmospheric and geometric conditions
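Eq 6.2 can be sketched numerically; the coefficient values below are invented for illustration, since in practice A, B, S and L_a come from the MODTRAN solution:

```python
def flaash_radiance(rho, rho_e, A, B, S, L_a):
    """Eq 6.2: at-sensor radiance as modelled by FLAASH.
    First term: photons reflected by the pixel itself; second term:
    photons from the surroundings (adjacency); third term: path radiance."""
    denom = 1 - rho_e * S  # atmospheric 'trapping' of surface-reflected photons
    return A * rho / denom + B * rho_e / denom + L_a

# Hypothetical coefficients and reflectances, chosen only to show the arithmetic
print(flaash_radiance(rho=0.3, rho_e=0.28, A=50.0, B=10.0, S=0.1, L_a=2.0))
```

In the correction step, FLAASH inverts this relation: with A, B, S and L_a known, the measured L* is solved for the pixel reflectance ρ.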

Various options corresponding to different parameters, such as the sensor type, the altitude of the sensor above the ground, the pixel size, and the type of environment in which the image was acquired, can be specified in FLAASH. These and the other parameters essential to carrying out FLAASH for the image under study are listed in Table 6.1.

Spectral polishing was carried out, since it reduces the noise in the obtained spectra. This was done using the average of multiple neighbouring bands. For Hyperion data, a value of 9 was chosen for spectral polishing[16].
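The idea of averaging neighbouring bands can be sketched as a simple running mean; note that ENVI's actual spectral-polishing algorithm is more elaborate, so this is only an illustration, and the spectrum values are made up:

```python
def polish(spectrum, width=9):
    """Smooth a reflectance spectrum with a running average over `width`
    neighbouring bands (width must be odd); edge bands keep their values."""
    half = width // 2
    out = list(spectrum)
    for i in range(half, len(spectrum) - half):
        window = spectrum[i - half:i + half + 1]
        out[i] = sum(window) / width
    return out

# A hypothetical noisy spectrum; width=3 used here so the effect is visible
noisy = [0.2, 0.8, 0.2, 0.8, 0.2, 0.8, 0.2, 0.8, 0.2, 0.8, 0.2]
print(polish(noisy, width=3))
```

A wider window (such as the 9 bands used for Hyperion) smooths more aggressively, at the cost of blurring genuine narrow absorption features.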


Parameter                    Details
Sensor Type                  Hyperion
Sensor Altitude              705.00 km
Pixel Size                   30.00 m
Latitude of scene center     12.41810036 N
Longitude of scene center    77.57769775 E
Flight date                  22nd March, 2002
Atmospheric Model            Tropical
Aerosol Model                Urban
Zenith Angle                 148.655303
Azimuth Angle                110.924797

Table 6.1: Details of various parameters required to perform FLAASH.

    6.5 Comparison of the various atmospheric techniques

A comparison between the atmospheric correction techniques used in our project is made in this section. Fig 6.2 illustrates the subset image after performing atmospheric correction, and Fig 6.3 the image prior to atmospheric correction.

    (a) Image after QUAC Correction (b) Image after FLAASH Correction

Figure 6.2: Atmospherically corrected images.

It is evident from the three images that interpreting the effects of atmospheric correction by visual inspection is not an easy task. Using the spectral profiles, one can make out


    Figure 6.3: Image prior to atmospheric correction.

changes in the images before and after atmospheric correction; refer to Fig 6.4.

    6.5.1 Explanation of the water spectral profile

Prior to atmospheric correction, the reflectance of water is very high in the VNIR and SWIR regions. As mentioned in the previous chapter, this should not be the case. After atmospheric correction, the reflectance of water is reduced in these bands. This might seem counter-intuitive, but the unexpected result can be accounted for. Pure water, viz. water without impurities, or with an almost negligible amount of impurities, has a low reflectance in the IR region. But the water body in our area of study is not pure; it contains a large amount of organic matter, contributing to higher values of reflectance in the IR region. After atmospheric correction, with these contributions removed, the reflectance reduces to a minimum in the IR region, as expected for the spectral profile of water.


    Figure 6.4: Spectral profile of Ulsoor Lake before and after atmospheric correction.

Another point to note is that the spectral profile after QUAC is not a continuous curve; it is not defined at certain wavelengths. This is expected, since QUAC is a scene based, empirical method of atmospheric correction, and hence these anomalies are observed.


    Chapter 7

    Dimensionality Reduction

Hyperspectral images, as described earlier, are made up of hundreds of narrow contiguous bands with a very fine spectral resolution. Because of this fine spectral resolution, two or more bands may contain very much the same data (high correlation), and this information is redundant. Redundant information occupies extra storage space; it also means that, when analysing or classifying, the same information has to be examined again, reducing the computational speed. This poses significant challenges for analysis. Due to their high dimensional space, hyperspectral images suffer from the curse of dimensionality, and another challenge to be factored in is the Hughes phenomenon[17]: in high dimensional datasets, an enormous number of training samples is required to ensure that the classes are adequately covered.

To overcome these challenges, the dimension of the hyperspectral image is reduced to an acceptable number, where acceptable is defined by the classifier accuracy.

There are many methods of dimensionality reduction. Examples include Principal Components Analysis (PCA), Independent Components Analysis (ICA), Vertex Component Analysis (VCA), etc.

ICA is based on the fact that a signal (in any number of dimensions) is composed of statistically independent signals. A major drawback of ICA, however, is that it does not work on Gaussian datasets. Since our dataset is very big, we work under the assumption that it obeys the


Central Limit Theorem, and hence is Gaussian in its behaviour.

VCA assumes that the spectrally pure endmembers constituting the hyperspectral image lie at the vertices of a simplex in n-dimensional space.

PCA is chosen because, in terms of computational speed, it is the fastest[18].

7.1 Principal Components Analysis

The method of principal components, also known as the Karhunen-Loève Transform or the Hotelling Transform, is a data-analytic technique that obtains linear transformations of a group of correlated variables such that certain optimal conditions are satisfied. The most important of these conditions is that the transformed variables are uncorrelated[19].

The number of principal components is less than or equal to the number of original components. If a multivariate dataset is visualised as a set of coordinates in a high-dimensional data space, PCA provides a lower-dimensional picture, obtained using only the first few principal components.

    Figure 7.1: Illustration of the PCA.


    7.1.1 Mathematical Description

Mathematically, PCA is defined as an orthogonal linear transformation that transforms the data to a new coordinate system; refer to Fig 7.1. The transformation is chosen such that the greatest variance of any projection of the data lies along the first coordinate (the first principal component), the second greatest variance along the second coordinate, and so on.

For hyperspectral images, the PCA is illustrated below. The figure has been adapted from [20].

    Figure 7.2: Pixel vector in PCA.

PCA is based on the eigenvalue decomposition of the covariance matrix of the hyperspectral image to be analysed. First, an image pixel vector is formed as follows (refer Fig 7.2); this description has been adapted from [20]:

x_i = [x1, x2, . . . , xN]_i^T, for i = 1, 2, . . . , M (7.1)

where x1, x2, . . . , xN represent the pixel values of the hyperspectral image at one pixel location in each of the N bands,
T denotes the transpose,
M = m × n is the number of pixels in the hyperspectral image, m being the number of rows and n the number of columns,
and N is the number of bands in the hyperspectral image.

The mean vector m of all the image pixel vectors [x1, x2, . . . , xN]_i^T is calculated, for i = 1, 2, . . . , M, as follows:


m = (1/M) Σ_{i=1}^{M} [x1, x2, . . . , xN]_i^T (7.2)

The covariance matrix of x is defined as:

Cov(x) = E{(x − E(x))(x − E(x))^T} (7.3)

where E is the expectation operator. Eq 7.3 can be approximated as follows:

C_X = (1/M) Σ_{i=1}^{M} (x_i − m)(x_i − m)^T (7.4)

The covariance matrix C_X is eigendecomposed to give the following matrices:

C_X = A D A^T (7.5)

where A is an orthonormal matrix containing the eigenvectors of the covariance matrix C_X,
and D is a diagonal matrix containing the eigenvalues of C_X,
i.e., D = diag(λ1, λ2, . . . , λN).

An orthonormal matrix is a matrix whose transpose is its inverse, i.e., in the case of A,

A^T = A^{-1} (7.6)

A = (a1, a2, . . . , aN)

The linear transformation which gives the principal components is described as:

y_i = A^T x_i (7.7)

Written out in full,

      [ y1 ]     [ a11 a12 . . . a1K . . . a1N ]   [ x1 ]
      [ y2 ]     [ a21 a22 . . . a2K . . . a2N ]   [ x2 ]
      [  . ]     [  .   .        .         .  ]   [  . ]
y_i = [ yK ]  =  [ aK1 aK2 . . . aKK . . . aKN ]   [ xK ]     (7.8)
      [  . ]     [  .   .        .         .  ]   [  . ]
      [ yN ]     [ aN1 aN2 . . . aNK . . . aNN ]   [ xN ]_i


If only the first K eigenvectors are chosen, the first K principal components of the image are obtained. Usually K ≪ N, and hence this leads to a reduction in the dimensionality of the image.
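The chain of Eqs 7.1 to 7.8 can be sketched for a toy two-band image in pure Python, using the closed-form eigendecomposition of the 2×2 covariance matrix; the pixel values are hypothetical:

```python
import math

def pca_2band(pixels):
    """Project 2-band pixel vectors onto their principal axes (Eqs 7.1-7.8),
    via the closed-form eigendecomposition of the 2x2 covariance matrix."""
    M = len(pixels)
    mean = [sum(p[k] for p in pixels) / M for k in (0, 1)]       # Eq 7.2
    d = [(p[0] - mean[0], p[1] - mean[1]) for p in pixels]
    # Covariance matrix C_X (Eq 7.4)
    cxx = sum(a * a for a, _ in d) / M
    cyy = sum(b * b for _, b in d) / M
    cxy = sum(a * b for a, b in d) / M
    # Eigenvalues of [[cxx, cxy], [cxy, cyy]] (the diagonal of D in Eq 7.5)
    tr, det = cxx + cyy, cxx * cyy - cxy * cxy
    lam1 = tr / 2 + math.sqrt(tr * tr / 4 - det)
    lam2 = tr / 2 - math.sqrt(tr * tr / 4 - det)
    # First eigenvector (first column of A), then y = A^T x (Eq 7.7)
    v = (cxy, lam1 - cxx) if abs(cxy) > 1e-12 else (1.0, 0.0)
    norm = math.hypot(*v)
    a1 = (v[0] / norm, v[1] / norm)
    pc1 = [a1[0] * a + a1[1] * b for a, b in d]
    return lam1, lam2, pc1

# Two perfectly correlated hypothetical bands: all variance on the first axis
lam1, lam2, pc1 = pca_2band([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
print(round(lam1, 6), round(lam2, 6))  # the second eigenvalue collapses to ~0
```

With the two bands perfectly correlated, the second eigenvalue vanishes, so keeping K = 1 component (K ≪ N) loses nothing; this is exactly the dimensionality reduction described above.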

Fig 7.3, Fig 7.4 and Fig 7.5 show the first six principal components of the study image. As can be seen from these figures, the first three principal components contain the majority of the information; the information content reduces to a bare minimum in the remaining two bands. It is evident that the SNR (Signal to Noise Ratio) progressively reduces as we look at principal components beyond the first three.

    (a) First principal component (b) Second principal component

    Figure 7.3: Principal components one and two

    (a) Third principal component (b) Fourth Principal component

    Figure 7.4: Principal components three and four.


    (a) Fifth principal component (b) Sixth principal component

    Figure 7.5: Principal components five and six.


    Chapter 8

    Classification Algorithms

Classification includes a broad range of decision-theoretic approaches to the identification of images. All classification algorithms are based on the fact that the image under consideration depicts one or more features (e.g. spectral regions in the case of remote sensing) and that each of these features belongs to one of several distinct and exclusive classes. These features are analyzed as a set of quantifiable properties, and these properties may variously be categorical (e.g. A, B, AB or O, as in the case of blood groups), ordinal (e.g. large, small), integer-valued (e.g. the number of occurrences of a word in a mail) or real-valued.

    8.1 Methodology of classification

Classification can be primarily of two types: supervised classification and unsupervised classification. Supervised classification is a learning method wherein a training set of correctly identified observations is available. An unsupervised procedure involves grouping data into categories based on some measure of inherent similarity; this procedure is also called clustering.

There are three steps involved in any classification process[21]: training, classification and accuracy assessment. Training sites are needed for supervised classification, and in this study the training areas were taken from the ground truth image. The satellite image is then classified using four supervised classifiers and one unsupervised classifier. Two measures of accuracy were computed for each of the classifier methods mentioned earlier: the overall accuracy and the error matrix (confusion matrix). Accuracy assessment


was carried out to compute the probability of error for the classified map. The error matrix describes the measure of agreement between the classified image and the training sites of the same image. The term accuracy here refers to the degree of correctness of the classification.

    8.1.1 Training

The description of training classes is an extremely important component of the classification process. In supervised classification, statistical processes (i.e. based on a priori knowledge of probability distribution functions) or distribution-free processes can be used to extract class descriptors. Unsupervised classification relies on clustering algorithms to automatically segment the training data into prototype classes. In either case, the motivating criteria for constructing training classes are that they be:

independent, i.e. a change in the description of one training class should not change the value of another;

discriminatory, i.e. different image features should have significantly different descriptions;

reliable, i.e. all image features within a training group should share the common definitive descriptions of that group.

A convenient way of building a parametric description of this sort is via a feature vector $(v_1, v_2, \ldots, v_n)$, where n is the number of attributes which describe each image feature and training class. This allows us to consider each image feature as occupying a point, and each training class as occupying a sub-space (i.e. a representative point surrounded by some spread, or deviation), within the n-dimensional classification space. Viewed as such, the classification problem is that of determining to which sub-space class each feature vector belongs.

    8.1.2 Confusion matrix

    A confusion matrix is a specific table layout that allows visualization of the performance of

    an algorithm, typically a supervised learning one (in unsupervised learning it is usually called

    a matching matrix). Each column of the matrix represents the instances in a predicted class,

    while each row represents the instances in an actual class. The name stems from the fact that it


makes it easy to see if the system is confusing two classes (i.e. commonly mislabeling one as another). A confusion matrix is also called an error matrix.
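A confusion matrix and the overall accuracy derived from it can be sketched as follows (Python/NumPy; the label vectors here are hypothetical, with labels 1–4 standing for the four land-cover classes):

```python
import numpy as np

# Actual (ground truth) and predicted class labels for eight test pixels.
actual    = np.array([1, 1, 2, 2, 3, 3, 4, 4])
predicted = np.array([1, 2, 2, 2, 3, 4, 4, 4])

# Rows = actual class, columns = predicted class, as described above.
n_classes = 4
cm = np.zeros((n_classes, n_classes), dtype=int)
for a, p in zip(actual, predicted):
    cm[a - 1, p - 1] += 1

# Correct classifications lie on the diagonal.
overall_accuracy = np.trace(cm) / cm.sum()   # here 6 of 8 correct -> 0.75
```

Off-diagonal entries show exactly which classes are being confused, e.g. `cm[0, 1]` counts Water pixels mislabeled as Vegetation.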

There are many image classification methods available to extract the rich information present in hyperspectral imagery. Most methods compare the study image spectra with reference image spectra (from the ground truth image). The reference spectra can be obtained by defining Regions of Interest, building spectral libraries, making measurements in the field, or by directly extracting them from image pixels.

In this project, the following supervised classification algorithms have been applied to classify the image into four different classes: the Spectral Angle Mapper Classifier, Mahalanobis Distance Classifier, Minimum Distance Classifier and Maximum Likelihood Classifier. The four classes are Water, Vegetation, Built Up Areas and Land. Before applying supervised methods, the dataset must be trained, as explained in the following section.

    8.2 Training Data Set

In order to apply supervised classification algorithms on the images, a training dataset has to be created from the ground truth data. This is used to train the classifier; after training, the remaining pixels of the image are classified. Half of the data from the ground truth was used to create the training datasets for the supervised classification algorithms. The following steps were followed to prepare the training datasets:

A copy of the ground truth is saved as an Excel file, as it is easier to group similar values in Excel.

Each pixel in the ground truth is assigned to one of 4 different classes, with an additional class for unclassified data, so each pixel has one of 5 values: 1 - Water, 2 - Vegetation, 3 - Barren Land, 4 - Urban/Built Up Area and 0 - Unclassified.

Four new Excel sheets, containing the co-ordinates of each pixel in the class, are created for the four different classes.

The first Excel sheet, for the Water class, is taken and an extra column is added using the random number generator (RAND()) function in Excel.

The entire sheet is sorted from lowest to highest according to the values in the new column.

Half the number of pixels in the class, thus selected randomly, is stored in a separate ASCII file.

This procedure is repeated for the remaining three classes, so that four files are created, each containing a randomly selected half of the ground truth pixels of one class.

Using ENVI 4.7, these pixels are read into four different regions of interest (ROIs), and thereby half of the data in the ground truth is randomly selected as training datasets.

    Figure 8.1: Training dataset for ground truth image.

Fig 8.1 shows the training dataset, with each class containing half the number of pixels in the ground truth image.
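The random half-split described above can be sketched programmatically (Python; the coordinate lists are hypothetical stand-ins for the Excel sheets, and shuffling is equivalent to the RAND()-column-and-sort step):

```python
import random

# Hypothetical ground-truth pixel coordinates, one list per class.
ground_truth = {
    "Water":      [(r, c) for r in range(10) for c in range(4)],   # 40 pixels
    "Vegetation": [(r, c) for r in range(10) for c in range(6)],   # 60 pixels
}

random.seed(42)  # fixed seed so the split is reproducible
training = {}
for cls, pixels in ground_truth.items():
    shuffled = pixels[:]           # copy, then randomize the order
    random.shuffle(shuffled)       # stands in for RAND() + sort in Excel
    training[cls] = shuffled[: len(pixels) // 2]   # keep half for training
```

The pixels not kept in `training` remain available for accuracy assessment.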

    8.3 Spectral Angle Mapper

The Spectral Angle Mapper (SAM) is a classification method that permits rapid mapping by calculating the spectral similarity between image spectra and reference reflectance spectra[22]. The reference reflectance spectra can either be taken from laboratory or field measurements or extracted directly from the image. This technique was developed by Roberta et al.[22].

SAM is a very powerful classification method because it is relatively unaffected by surrounding illumination conditions and highlights the reflectance characteristics of the target. The drawback of this method, however, is the spectral mixture problem: the most erroneous assumption made with SAM is that the endmembers chosen to classify the image represent pure spectra of the reference material, whereas in actual practice pixels are mixed due to various physical phenomena.

SAM compares image spectra to known spectra present in the ground truth image. It takes the arc cosine of the dot product between the test spectrum t and the reference spectrum r, as given by the following equation:

$$\alpha = \cos^{-1}\left(\frac{\sum_{i=1}^{nb} t_i r_i}{\sqrt{\sum_{i=1}^{nb} t_i^2}\ \sqrt{\sum_{i=1}^{nb} r_i^2}}\right) \tag{8.1}$$

where, nb = number of bands
$t_i$ = test spectrum
$r_i$ = reference spectrum

The smaller the calculated spectral angle, the greater the correlation between the image spectrum and the reference spectrum. Pixels further away than the specified maximum angle threshold are not classified. The reference spectra for the Spectral Angle Mapper in our case were generated from the average spectral profiles of each training class dataset. They can also come from ASCII files, spectral libraries or statistics files.
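Eq. 8.1 and the classification rule just described can be sketched as follows (Python/NumPy; the three-band spectra are hypothetical):

```python
import numpy as np

def spectral_angle(t, r):
    """Angle in radians between test spectrum t and reference spectrum r (Eq. 8.1)."""
    cos = np.dot(t, r) / (np.linalg.norm(t) * np.linalg.norm(r))
    return np.arccos(np.clip(cos, -1.0, 1.0))   # clip guards against rounding error

# Hypothetical per-class reference spectra (e.g. average training profiles).
references = {
    "Water":      np.array([0.9, 0.4, 0.1]),
    "Vegetation": np.array([0.2, 0.8, 0.9]),
}

# Classify a pixel to the class with the smallest spectral angle.
pixel = np.array([0.3, 0.7, 0.8])
best = min(references, key=lambda c: spectral_angle(pixel, references[c]))
```

Note that scaling a spectrum (a change in overall illumination) leaves the angle unchanged, which is the invariance property discussed below.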

In an n-dimensional multispectral or hyperspectral space, a pixel vector x has both a magnitude (i.e. length) and an angle, measured with respect to the axes that define the co-ordinate system of the space[23]. In SAM, only the angular information is used. It is based on the idea that an observed reflectance spectrum can be considered as a vector in n-dimensional space, where the number of dimensions is equal to the number of spectral bands. If the overall illumination


increases or decreases, due to scattering of sunlight or shadows, the length of this vector will increase or decrease respectively, but the angular orientation between the two vectors will remain constant.

    Figure 8.2: Illustration of the idea behind SAM.

The concept of SAM can be explained with the help of Fig 8.2. Fig 8.2a shows that, for a given feature type, the vector corresponding to its spectrum will lie along a line passing through the origin, with the magnitude of the vector being smaller (A) or greater (B) under lower or larger illumination, respectively. Fig 8.2b shows the comparison of the vector for an unknown feature type (C) with the reference measured spectral vector of a known material (D); the two features match if the angle α is smaller than the specified tolerance value[24].

One major drawback of SAM is that it fails if the vector magnitude is important in providing discriminating information, which may happen in certain instances. However, if the pixel spectra from different classes are well distributed in feature space, there is a high likelihood that the angular information alone will provide good separation.

Fig 8.3 shows the reference spectra generated from the training dataset Regions of Interest (ROIs), as described earlier in the chapter.


    Figure 8.3: Endmember collection spectra generated from ground truth image.

    (a) SAM classification for QUAC Correction. (b) SAM classification for FLAASH Correction.

    Figure 8.4: SAM classified output images.


Confusion matrices were then generated for all SAM classified images to measure their efficiency. The ground truth image was taken as the reference for generating the confusion matrices for the classified output images of the 196 band image and the 5, 10 and 15 band PCA images. These results are tabulated below.

Class name      Sample Size   Subset-196   PCA-15      PCA-10      PCA-5
                (Pixels)      bands (%)    bands (%)   bands (%)   bands (%)
Water                 2150      85.02      100         100          99.95
Vegetation           49747      83.75       81.04       81.03       80.99
Barren Land         116639      77.41       73.09       72.79       72.88
Built Up Area        25033      92.67       92.88       92.77       92.06
Overall             193569      81.0987     77.9839     77.7876     77.7355

Table 8.1: SAM Classification efficiency for FLAASH corrected image.

Class name      Sample Size   Subset-196   PCA-15      PCA-10      PCA-5
                (Pixels)      bands (%)    bands (%)   bands (%)   bands (%)
Water                 2150      84.20       99.58       99.58       99.58
Vegetation           49747      83.74       81.88       81.88       81.85
Barren Land         116639      74.77       74.12       74.11       96.35
Built Up Area        25033      84.05       96.49       96.48       73.99
Overall             193569      78.3785     79.2934     79.2836     79.1880

Table 8.2: SAM Classification efficiency for QUAC corrected image.

    8.4 Mahalanobis Distance Classifier

In statistics, the Mahalanobis distance is a distance metric proposed by P. C. Mahalanobis in his landmark paper of 1936[25]. It is based on correlations between variables, by which different patterns can be identified and analyzed, and it gauges the similarity of an unknown sample set to a known one. It differs from the Euclidean distance in that it takes into account the correlations of the dataset and is scale-invariant[26].

The Mahalanobis distance is widely used in cluster analysis and classification techniques. In order to use the Mahalanobis distance to classify a test point as belonging to one of N classes, one first estimates the covariance matrix of each class, usually from samples known to belong to each class. Then, given a test sample, one computes the Mahalanobis distance to each class, and classifies the test point as belonging to the class for which the Mahalanobis distance is minimal.


    8.4.1 Mathematical Background

Formally, the Mahalanobis distance of a multivariate vector $\mathbf{x} = (x_1, x_2, x_3, \ldots, x_N)^T$ from a group of values with mean $\boldsymbol{\mu} = (\mu_1, \mu_2, \mu_3, \ldots, \mu_N)^T$ and covariance matrix $S$ is defined as

$$D_M(\mathbf{x}) = \sqrt{(\mathbf{x} - \boldsymbol{\mu})^T S^{-1} (\mathbf{x} - \boldsymbol{\mu})} \tag{8.2}$$

The Mahalanobis distance (called the generalized squared interpoint distance for its squared value) can also be defined as a dissimilarity measure between two random vectors $\mathbf{x}, \mathbf{y}$ of the same distribution with the same covariance matrix $S$:

$$d(\mathbf{x}, \mathbf{y}) = \sqrt{(\mathbf{x} - \mathbf{y})^T S^{-1} (\mathbf{x} - \mathbf{y})} \tag{8.3}$$

If the covariance matrix is the identity matrix, the Mahalanobis distance reduces to the Euclidean distance. If the covariance matrix is diagonal, the resulting distance measure is called the normalized Euclidean distance:

$$d(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_{i=1}^{N} \frac{(x_i - y_i)^2}{s_i^2}} \tag{8.4}$$

where $s_i$ is the standard deviation of $x_i$ and $y_i$ over the sample set.
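The classification rule built on Eq. 8.2 can be sketched as follows (Python/NumPy; the two-class statistics are hypothetical):

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of x from a class with given mean and covariance (Eq. 8.2)."""
    d = x - mean
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

# Hypothetical per-class statistics: (mean vector, covariance matrix).
classes = {
    "A": (np.array([0.0, 0.0]), np.eye(2)),
    "B": (np.array([4.0, 4.0]), np.array([[2.0, 0.3], [0.3, 2.0]])),
}

# Assign the test vector to the class with the smallest Mahalanobis distance.
x = np.array([3.0, 3.5])
label = min(classes, key=lambda c: mahalanobis(x, *classes[c]))
```

With an identity covariance the function reduces to the ordinary Euclidean distance, consistent with the remark above.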

Fig 8.5 shows the classification output of the Mahalanobis distance classifier on the FLAASH and QUAC corrected images.

Confusion matrices were then generated for all Mahalanobis distance classified images to estimate their efficiency. The ground truth image was taken as the reference for generating the confusion matrices for the classified output images of the 196 band image and the 5, 10 and 15 band PCA images. These results are tabulated below.


    (a) Mahalanobis classification for QUAC Correction. (b) Mahalanobis classification for FLAASH Correction.

    Figure 8.5: Mahalanobis Distance classified output images.

Class name      Sample Size   Subset-196   PCA-15      PCA-10      PCA-5
                (Pixels)      bands (%)    bands (%)   bands (%)   bands (%)
Water                 2150      91.45       84.01       82.39       71.19
Vegetation           49747      86.51       85.40       85.57       84.82
Barren Land         116639      87.17       85.64       80.53       79.95
Built Up Area        25033      88.56       88.60       91.92       86.55
Overall             193569      87.2305     85.9416     83.3208     81.9570

Table 8.3: Mahalanobis Distance Classification efficiency for FLAASH corrected image.

Class name      Sample Size   Subset-196   PCA-15      PCA-10      PCA-5
                (Pixels)      bands (%)    bands (%)   bands (%)   bands (%)
Water                 2150      92.61       97.63       97.82       99.21
Vegetation           49747      86.20       85.13       84.86       84.90
Barren Land         116639      86.98       83.12       82.75       81.21
Built Up Area        25033      88.31       92.08       92.49       95.83
Overall             193569      87.0161     84.9569     84.7178     84.2497

Table 8.4: Mahalanobis Distance Classification efficiency for QUAC corrected image.

    8.5 Minimum Distance Classifier

Minimum Distance classification is used in many remote sensing applications, such as crop species identification and land pattern identification. Minimum Distance classifiers belong to the family of classifiers referred to as sample classifiers. In such classifiers, the items classified are groups of measurement vectors (e.g. all measurement vectors from an agricultural field), rather than individual vectors as in more conventional vector classifiers[27].

Specifically, in Minimum Distance classification a sample, i.e. a group of vectors, is classified into the class whose known or estimated distribution most closely resembles the estimated distribution of the sample to be classified. The measure of resemblance is a distance measure in the space of distribution functions.

Minimum distance classification resembles what is probably the oldest and simplest approach to pattern recognition, namely template matching. In template matching, a template is stored for each class or pattern to be recognized (e.g. the letters of an alphabet), and an unknown pattern (e.g. an unknown letter) is then classified into the pattern class whose template best fits the pattern, on the basis of some previously defined similarity measure. In minimum distance classification, the templates and unknown patterns are distribution functions, and the measure of similarity used is a distance measure between distribution functions.
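The distribution-matching view above is the general formulation; a common concrete variant, sketched here as an assumption, represents each class template by its mean spectrum and assigns a pixel to the class with the nearest mean in Euclidean distance (Python/NumPy; the class means are hypothetical):

```python
import numpy as np

# Hypothetical class templates: the mean spectrum of each training class.
means = {
    "Water":      np.array([0.8, 0.3, 0.1]),
    "Vegetation": np.array([0.2, 0.7, 0.9]),
}

# Classify a pixel to the class whose template (mean) is nearest.
pixel = np.array([0.25, 0.65, 0.8])
label = min(means, key=lambda c: np.linalg.norm(pixel - means[c]))
```

Unlike the Mahalanobis distance, this variant ignores class covariance, which is why it is the simplest member of the family.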

Thus, an unknown distribution is classified into the class whose distribution function is nearest to the unknown distribution in terms of some predetermined distance measure. In practice the distribution functions involved are usually not known, nor can they be observed directly. Rather, a set of random measurement vectors from each distribution of interest is observed and classification is based