
Blind separation of spectral signatures in hyperspectral imagery

T.-M. Tu, P.S. Huang and P.-Y. Chen

Abstract: For the purpose of material identification, methods for exploring hyperspectral images with minimal human intervention have been investigated. Without any prior knowledge, it is extremely difficult to identify or determine how many endmembers are present in a scene. To tackle this problem, a new spectral unmixing technique, the spectral data explorer (SDE), is presented in the paper. SDE is a hybrid approach combining the optimal parts of fast independent component analysis (FastICA) and noise-adjusted principal components analysis (NAPCA). Experimental results show that SDE is highly efficient for separating significant signatures of hyperspectral images in a blind environment.

    1 Introduction

In remote sensing data exploitation, it is challenging to discriminate, quantify and identify multiple materials embedded in a mixed pixel. To solve this classification problem, mixing scales and linearity of distinct materials have been investigated by several researchers. The macroscopic spectral mixture [1] assumes no interaction between materials and models a mixed pixel as a linear combination of the resident signatures weighted by their relative concentrations. For microscopic or intimate mixtures [2], the mixing is generally nonlinear. Although many surface materials are mixed in nonlinear fashion owing to second-order effects, linear unmixing techniques, while at best an approximation, appear to work well in many circumstances [3-7]. However, a primary drawback of these linear unmixing methods is the need for a priori knowledge of the substance signatures resident in the images.

To address this problem, Bayliss et al. [8] introduced a contextual independent component analysis (ICA) approach [9] in the context of hyperspectral data analysis, and discussed the benefits and drawbacks of using this approach to separate spectrally unmixed minerals in an image scene. Their results show that ICA offers a new tool for unsupervised separation of signatures in hyperspectral images. Under the assumption that the number of sources is known a priori, separation of source signals can be efficiently performed by various ICA algorithms in a fully parallel manner. Nevertheless, in practice, the number of sources is often unknown or unequal to the number of sensors, especially when ICA is used to analyse hyperspectral images, where the data dimensionality is overwhelmingly larger than the number of sources. Therefore, a practical, feasible solution is to separate the source signals sequentially (one by one) once the number of sources is known.

For remote sensing images, within the instantaneous field of view, the limited spatial resolution of scanning sensors frequently leads to the presence of more than one ground cover type. Since each pixel thus often contains multiple materials, identifying the number of sources (endmembers) is equivalent to determining the intrinsic data dimensionality rather than the number of clusters constituted by distinct pixels. Conventionally, intrinsic dimensionality is estimated by detecting gaps within the singular values. Using principal component analysis (PCA) to estimate the number of sources is relatively easy if the signal-to-noise ratio (SNR) is large enough. However, if some sources are weak or the noise power is not negligible, estimating the correct number of sources using PCA becomes difficult. This is certainly true for remote sensing imagery, owing to a variety of unknown noises and unexpected interferences from the atmosphere.
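This gap-detection idea can be sketched in a few lines of NumPy (a synthetic toy, not the paper's imagery; the sizes and the "largest relative gap" rule are illustrative choices). With a high SNR the sorted covariance eigenvalues show a clear break after the pth value; as the noise power grows, the break blurs and the estimate fails, which is what motivates the noise-adjusted approaches below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic toy (not the paper's data): l = 30 bands, p = 3 sources, high SNR.
l, p, n = 30, 3, 5000
M, _ = np.linalg.qr(rng.standard_normal((l, p)))   # orthonormal "signatures"
abund = rng.uniform(0.0, 1.0, size=(p, n))         # abundance fractions
cube = M @ abund + 0.05 * rng.standard_normal((l, n))

# Sorted covariance eigenvalues: p signal values, then a flat noise floor.
eigvals = np.linalg.eigvalsh(np.cov(cube))[::-1]

# Estimate the source number at the largest relative gap between
# consecutive eigenvalues (workable only when the SNR is large).
gaps = eigvals[:-1] / eigvals[1:]
p_hat = int(np.argmax(gaps)) + 1
print(p_hat)
```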

In light of the preceding facts, this paper proposes a novel approach, the spectral data explorer (SDE), which is advantageous in retaining the optimal parts of two methods: noise-adjusted principal components analysis (NAPCA) [10, 11] and fast independent component analysis (FastICA) [12, 13]. From NAPCA, SDE inherits the leverage of the noise-whitening process, arranges principal components in descending order of image quality rather than variance as in PCA, and finds the correct number of sources. From FastICA, SDE inherits the ability to separate sources in a blind environment. To show the effectiveness of SDE, experiments based on two real hyperspectral image scenes are conducted for evaluation in this paper.

Before the SDE approach is described, a brief review of ICA and NAPCA is presented in the following Section. Details of the ICA and NAPCA algorithms can be found in [12-15] and [10, 11], respectively.

© IEE, 2001
IEE Proceedings online no. 20010314
DOI: 10.1049/ip-vis:20010314
Paper first received 7th March and in revised form 6th November 2000
The authors are with the Department of Electrical Engineering, Chung Cheng Institute of Technology, Ta-Shi, Taoyuan, Taiwan 33509, Republic of China

    IEE Proc.-Vis. Image Signal Process., Vol. 148, No. 4, August 2001

    2 Interpreting ICA model as unsupervised linear mixture spectral model

In remote sensing imagery, linear spectral unmixing is an approach widely used to discriminate, identify and quantify individual spectral signatures in a mixed pixel. Let r_i be an l × 1 column vector denoting the ith pixel in a hyperspectral image, where l is the number of bands. A linear mixture spectral (LMS) model for pixel r_i in a hyperspectral image can be described by [3]:

    r_i = Mα_i + n_i = s_i + n_i    (1)

and the following correlation matrix

    R = E[r_i r_i^T] = M E[α_i α_i^T] M^T + R_n = R_s + R_n    (2)

where M is an l × p matrix denoted by (m_1, m_2, ..., m_p) and m_j (j = 1, ..., p) is an l × 1 column vector for the spectral signature of the jth distinct material, p denotes the number of materials, α_i is a p × 1 column vector given by (α_1, α_2, ..., α_p)^T, where α_j (j = 1, ..., p) represents the fraction of the jth signature in r_i, and n_i is an l × 1 column vector representing the combined noise, a wide-sense stationary Gaussian process with zero mean and covariance matrix R_n.

To estimate the unknown signature abundance, the signatures in M must be given a priori when using the conventional linear unmixing technique. In contrast to the LMS model, ICA is used to recover independent sources given only sensor observations that are unknown linear mixtures of the unobserved independent source signals. Determining the number of signatures is required both in the LMS model and in blind source separation using ICA. Thus, the signatures in the LMS model can be regarded as the independent sources in the ICA model.
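For contrast with the blind setting, conventional unmixing with a known M reduces to a least-squares inversion of eqn. 1. A toy NumPy sketch (the 4-band signatures and abundances below are made up for illustration, and the simple unconstrained solution ignores the sum-to-one and nonnegativity constraints used in practice):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy version of eqn. 1 with *known* signatures M: l = 4 bands, p = 2 materials.
M = np.array([[0.9, 0.1],
              [0.7, 0.3],
              [0.2, 0.8],
              [0.1, 0.9]])
alpha_true = np.array([0.6, 0.4])                    # true abundance fractions
r = M @ alpha_true + 0.001 * rng.standard_normal(4)  # one noisy mixed pixel

# With M given a priori, unmixing reduces to (unconstrained) least squares.
alpha_hat, *_ = np.linalg.lstsq(M, r, rcond=None)
print(alpha_hat)   # close to [0.6, 0.4]
```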

The basic ICA data model is given by

    r_i = Az_i + n_i    (3)

Here, the l-dimensional vector of measured signals, r_i = [r_1 ... r_l]^T, is assumed to be generated from a p-dimensional vector of source signals, z_i = [z_1 ... z_p]^T, through a linear mixing matrix A, where A is an unknown l × p (l > p) matrix of full rank. The source signals z_j are unknown but mutually independent and of zero mean.

The fundamental problem of ICA is to estimate the source signals z_i from the mixtures r_i or, equivalently, to find a p × l separating matrix B so that the p-vector

    y_i = Br_i    (4)

is an estimate of z_i. Of course, the lack of information on the structure of A must be compensated by some additional assumptions about the source signals. The fundamental restriction of ICA is that the independent components must be nongaussian for ICA to be possible. This restriction follows mainly from the central limit theorem, which states that the distribution of a sum of independent random variables tends, under certain conditions, towards a gaussian. Estimating one of the independent components from a mixture is usually achieved by a measure of nongaussianity. The simplest nongaussianity measure is the absolute value of kurtosis. The kurtosis is a nondimensional quantity that measures the relative peakedness or flatness of a distribution. For a normalised gaussian random variable y, the kurtosis is zero, but the kurtosis of nongaussian random variables is nonzero. Random variables with positive kurtosis are called supergaussian, and those with negative kurtosis are called subgaussian. Indeed, kurtosis has been widely used as a measure of nongaussianity to separate various independent components. As stated in [8], almost all material signatures in hyperspectral images have a nongaussian distribution. Therefore, ICA could be a very effective means of distinguishing them.
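The kurtosis-based nongaussianity measure can be demonstrated directly; in this NumPy sketch, Laplace and uniform samples stand in for super- and subgaussian signatures:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

def excess_kurtosis(y):
    """Kurtosis of a normalised sample; zero for a gaussian."""
    y = (y - y.mean()) / y.std()
    return float(np.mean(y**4) - 3.0)

gauss = rng.standard_normal(n)            # kurtosis ~ 0
laplace = rng.laplace(size=n)             # supergaussian: kurtosis > 0 (theory: 3)
uniform = rng.uniform(-1, 1, size=n)      # subgaussian: kurtosis < 0 (theory: -1.2)

for name, y in [("gauss", gauss), ("laplace", laplace), ("uniform", uniform)]:
    print(name, round(excess_kurtosis(y), 2))
```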


To estimate B in eqn. 4, it is essential to first preprocess the data vectors r_i by whitening them. That is

    x_i = W_x r_i    (5)

in which

    E[x_i] = 0 and E[x_i x_i^T] = I    (6)

E[·] is the expectation operator, x_i denotes the whitened data vector, and W_x = D_x^{-1/2} Φ_x^T is a p × l whitening matrix, which can simultaneously reduce the dimension of the data vector from l to p. The p × p diagonal matrix D_x = diag[λ_1, ..., λ_p] and Φ_x = [φ_1, ..., φ_p] represent the eigenvalue and eigenvector matrices of R, respectively. Therefore, the problem of finding a p × l arbitrary matrix B in eqn. 4 is reduced to finding a p × p orthogonal matrix W, since

    y_i = Wx_i = WW_x r_i = Br_i    (7)

and

    B = WW_x    (8)

By assuming that the number of sources is known, or equal to the number of sensors, various adaptive ICA algorithms for learning either the separating matrix B or the matrix W after prewhitening are available. Herein, we focus on FastICA [12, 13], a simple form of the generalised fixed-point algorithms that can be applied to subgaussian and supergaussian sources simultaneously. The separation of sources can be obtained by maximising the approximated kurtosis criterion

    J(w) = E{G(w^T x_i)}    (9)

where w is a p × 1 weight vector and G is the objective or contrast function. To maximise the approximated kurtosis, G is generally a nonlinear function, such as G(y) = ln cosh(y) or G(y) = y^4. To recover the independent components, we assume E{(w^T x_i)^2} = 1 while x_i is prewhitened. This implies that w^T w = 1 and that the total weight matrix W is an orthonormal matrix. Consequently, we can rewrite eqn. 9 as a constrained optimisation problem:

    max J(w) subject to: ||w|| = 1    (10)

The solution for this constrained problem is given by

    E{x_i g(w^T x_i)} − βw = 0    (11)

where g(·) = G′(·). Eqn. 11 describes a cancelling of the gradient between the objective function and the active constraint at the solution point w_0. For cancelling the gradient, a Lagrange multiplier β is needed to balance the deviation in magnitudes of the objective function and constraint gradients. As such, β can be easily obtained by β = E{w_0^T x_i g(w_0^T x_i)}.

To obtain a stable learning algorithm, we can use Newton's method to solve eqn. 11. After simplification, a fixed-point ICA algorithm is obtained by

    w⁺ = E{x_i g(w^T x_i)} − E{g′(w^T x_i)}w,   w* = w⁺ / ||w⁺||    (12)

where w* denotes the new value of w. More details of this algorithm can be found in [12, 13].
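Eqns. 5-12 can be sketched end-to-end in NumPy for a toy two-source mixture (synthetic data and an illustrative mixing matrix, not the paper's imagery; g(y) = tanh(y) corresponds to the contrast G(y) = ln cosh(y)):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000

# Two independent nongaussian sources, linearly mixed (toy stand-in for eqn. 3).
z = np.vstack([rng.laplace(size=n), rng.uniform(-1, 1, size=n)])
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
r = A @ z

# Whitening (eqn. 5): x = D^{-1/2} Phi^T (r - mean), so E[x x^T] = I.
r = r - r.mean(axis=1, keepdims=True)
lam, phi = np.linalg.eigh(np.cov(r))
x = np.diag(lam**-0.5) @ phi.T @ r

# One-unit fixed-point iteration (eqn. 12) with g = tanh, g' = 1 - tanh^2.
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(200):
    wx = w @ x
    w_new = (x * np.tanh(wx)).mean(axis=1) - (1 - np.tanh(wx)**2).mean() * w
    w_new /= np.linalg.norm(w_new)
    converged = abs(w_new @ w) > 1 - 1e-10
    w = w_new
    if converged:
        break

y = w @ x   # estimate of one source, up to sign and scale
```

The expectations in eqn. 12 become sample means over the whitened data; the iteration typically converges in a handful of steps.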


3 Determining intrinsic dimensionality by NAPCA-based approaches

As stated above, the number of sources must be known a priori before using ICA to extract independent components. In current ICA approaches, PCA is the most widely used method to reduce data dimensionality by selecting components with large variance. However, some minor components with small variance may contain relevant information rather than only noise or unimportant variance. To address this issue, Green et al. [10] proposed NAPCA to arrange principal components in descending order of image quality rather than variance. The NAPCA approach can be regarded as a two-stage, cascaded principal component transformation with a diagonalisation procedure [16] used to achieve the maximum signal-to-noise ratio (MSNR), i.e. to derive a matrix X such that

    max_X (X^T R_s X)/(X^T R_n X) = max_X (X^T R X)/(X^T R_n X) − 1    (13a)

or equivalently

    X^T R X = Λ and X^T R_n X = I    (13b)

where R is the covariance matrix and R = E[r_i r_i^T] = R_s + R_n.

To obtain the desired transformation in eqn. 13b, a whitening process can be designed to simultaneously transform R_n and R such that

    W_n^T R W_n = W_n^T R_s W_n + W_n^T R_n W_n = W_n^T R_s W_n + I = R_adj    (14)

where W_n = Φ_n Λ_n^{-1/2} denotes the transformation matrix, in which Λ_n and Φ_n represent the eigenvalue and eigenvector matrices of R_n, respectively. The adjusted covariance matrix R_adj is generally not diagonal but symmetric. Using the eigenvectors of R_adj, i.e. Φ_adj, as the basis of the second transformation leads to

    Φ_adj^T R_adj Φ_adj = Λ_adj    (15)

Consequently, the desired NAPCA transform can be derived by

    X = W_n Φ_adj    (16)

The subsequent transformed covariance matrix is then expressed as

    R_{y,NAPCA} = X^T R X = diag[λ_1, λ_2, ..., λ_p, λ_{p+1}, ..., λ_l]    (17)

where {λ_i}_{i=1}^{p} = λ̃_i + 1 and {λ_i}_{i=p+1}^{l} = 1; the λ̃_i are the associated eigenvalues of the signal correlation matrix R_s, and '1' is the constant noise energy from the noise-whitening process. Consequently, the inherent data dimensionality can be determined by examining the number of eigenvalues larger than unity.
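A minimal NumPy sketch of the two-stage NAPCA dimensionality test (synthetic data with an assumed-known diagonal noise covariance; the 1.1 threshold is an illustrative small margin above unity, not a value from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
l, p, n = 20, 3, 40_000

# Synthetic scene: p sources observed through strongly band-dependent noise.
M, _ = np.linalg.qr(rng.standard_normal((l, p)))
noise_std = rng.uniform(0.5, 3.0, size=l)   # very unequal band noise
data = (M @ rng.laplace(size=(p, n))
        + noise_std[:, None] * rng.standard_normal((l, n)))

R = np.cov(data)
Rn = np.diag(noise_std**2)                  # assumed-known noise covariance

# Stage 1, noise whitening (eqn. 14): Wn = Phi_n Lambda_n^{-1/2}, Wn^T Rn Wn = I.
lam_n, phi_n = np.linalg.eigh(Rn)
Wn = phi_n @ np.diag(lam_n**-0.5)
R_adj = Wn.T @ R @ Wn

# Stage 2 (eqns. 15-17): eigenvalues of the adjusted covariance sit at
# (signal energy + 1) for the p sources and near 1 for pure-noise directions.
eig_adj = np.linalg.eigvalsh(R_adj)[::-1]
p_hat = int(np.sum(eig_adj > 1.1))          # count eigenvalues above unity
print(p_hat)
```

Note that the same data defeat the plain-PCA gap test here, because the raw variance ordering is dominated by the high-noise bands rather than the signal.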

However, NAPCA is largely limited by the noise-whitening process, which requires complete knowledge of the noise structure in the processed data. That is, the noise covariance matrix must be accurately estimated from the available data. Inaccuracy in the noise estimation degrades the validity of the calculated intrinsic dimensionality. To cope with this problem, we apply NAPCA to a partitioned data space to resolve the inaccuracy of the noise estimation and properly estimate the data dimensionality. This approach is referred to herein as PNAPCA. In contrast to PCA or NAPCA, which consider the interrelationship within a single set of variables, PNAPCA focuses on the relationship between two distinct subspaces partitioned from the data space of the original image by a simultaneous transformation. The motivation for this notion is that the intrinsic dimensionality is invariant to the number of processed bands if this number markedly exceeds that of the endmembers. Hence, although the gap between the groups of eigenvalues for signals and noise is difficult to detect in the entire data space, such a gap provides valuable insight in two smaller, partitioned, distinct subspaces. Moreover, the intrinsic dimensionalities of these two subspaces are identical. Functionally, PNAPCA can be regarded as a covariance version of canonical correlation analysis (CCA).

The PNAPCA algorithm is described as follows:

Step 1. Partition the noise-adjusted covariance matrix R_adj as follows:

    R_adj = [ R_1   R_12 ]
            [ R_21  R_2  ]    (18)

Since R_1 and R_2 are symmetric and positive definite matrices, each of these matrices can be diagonalised by a unitary similarity transformation

    Φ_R1^T R_1 Φ_R1 = Λ_R1 and Φ_R2^T R_2 Φ_R2 = Λ_R2    (19)

where Φ_R1 = [p_1, p_2, ..., p_{k1}] and Φ_R2 = [q_1, q_2, ..., q_{k2}] are unitary matrices whose columns are the eigenvectors of R_1 and R_2, respectively. The diagonals of the two matrices Λ_R1 and Λ_R2 are the respective eigenvalues of R_1 and R_2, arranged in descending order.

Step 2. Construct a unitary transformation matrix Q to rotate R_adj. This unitary matrix is defined by

    Q = [ Φ_R1  0    ]
        [ 0     Φ_R2 ]    (20)

Step 3. The resulting PNAPCA transformation is then given by

    R_{y,PNAPCA} = Q^T R_adj Q
                 = [ Φ_R1^T R_1 Φ_R1   Φ_R1^T R_12 Φ_R2 ]
                   [ Φ_R2^T R_21 Φ_R1  Φ_R2^T R_2 Φ_R2  ]
                 = [ Λ_R1  Σ_12 ]
                   [ Σ_21  Λ_R2 ]    (21a)

where

    Λ_R2 = diag[(λ̂_1 + δ̂_1 + 1), ..., (λ̂_p + δ̂_p + 1), (δ̂_{p+1} + 1), ..., (δ̂_{k2} + 1)]    (21b)

    Σ_12 = [ γ_{1,1}    γ_{1,2}    ...
             γ_{2,1}    γ_{2,2}    ...
             ...
             γ_{p,1}    γ_{p,2}    ...  γ_{p,p}
             γ_{p+1,1}  γ_{p+1,2}  ...  γ_{p+1,p+1}  ... ]    (21c)

and

    Σ_21 = Σ_12^T    (21d)

Here, k_1 + k_2 = l, where l is the dimension of R. Without loss of generality, we assume that p is smaller than both k_1 and k_2.

Fig. 1 Subsection of IPTS scene (0.6675 μm channel)

Fig. 4 Independent components found by applying SDE. Components 1-8 are shown in a-h, respectively
a Vegetation and vegetation-mowed
b Grass, woods and hay-windrowed
c Road, stone-steel tower and buildings
d Wheat
e Soybean-clean
f Grass/pasture
g Trees
h Soil

two sets. Hence, eqn. 23 can be rewritten as a form of set operation:

    γ̂²_ii ≤ (λ̂_i ∪ ε̂_i ∪ 1) ∩ (λ̌_i ∪ ε̌_i ∪ 1)
         = (λ̂_i ∩ λ̌_i) ∪ (λ̂_i ∩ ε̌_i) ∪ (λ̂_i ∩ 1)
           ∪ (ε̂_i ∩ λ̌_i) ∪ (ε̂_i ∩ ε̌_i) ∪ (ε̂_i ∩ 1)
           ∪ (1 ∩ λ̌_i) ∪ (1 ∩ ε̌_i) ∪ (1 ∩ 1)
         = (λ̂_i ∩ λ̌_i) ∪ 1 = (λ̂_i ∩ λ̌_i) + 1,   i = 1, 2, ..., p    (24a)

and

    γ̂²_ii ≤ (ε̂_i ∪ 1) ∩ (ε̌_i ∪ 1)
         = (ε̂_i ∩ ε̌_i) ∪ (ε̂_i ∩ 1) ∪ (ε̌_i ∩ 1) ∪ (1 ∩ 1) = 1,   i = p+1, p+2, ..., k_1    (24b)

Step 4. Since R_1 and R_2 given in eqn. 18 are correlated with each other, i.e. R_12 = R_21^T ≠ 0, the intersection of the two signal energies (λ̂_i ∩ λ̌_i) in eqn. 24a should be greater than zero. This observation implies that γ̂²_ii > 1, i = 1, ..., p. Therefore, the statistical threshold for the new hypothesis test H_0 is γ̂²_ii ≤ 1. Restated, the number of endmembers can be determined by simply counting the number of diagonal elements in eqn. 24 with value greater than unity.
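The exact γ statistic of eqns. 21-24 is difficult to recover from this reproduction, but the authors note that PNAPCA can be regarded as a covariance version of CCA, so the partition idea can be illustrated with plain canonical correlations (a NumPy sketch on synthetic data, with an illustrative 0.5 threshold): shared signal directions correlate strongly across the two band partitions, while pure-noise directions correlate only at sampling-error level.

```python
import numpy as np

rng = np.random.default_rng(5)
l, p, n = 40, 3, 20_000

# Synthetic cube: the same p sources appear in every band, so both band
# partitions share the same intrinsic dimensionality.
M = rng.standard_normal((l, p))
data = M @ rng.laplace(size=(p, n)) + rng.standard_normal((l, n))

# Partition the bands into two subspaces (cf. eqn. 18).
x1, x2 = data[: l // 2], data[l // 2:]

def canonical_correlations(a, b):
    """Canonical correlations between two multichannel signals."""
    k = len(a)
    Ra, Rb = np.cov(a), np.cov(b)
    Rab = np.cov(a, b)[:k, k:]                 # cross-covariance block
    ea, va = np.linalg.eigh(Ra)
    eb, vb = np.linalg.eigh(Rb)
    wa = va @ np.diag(ea**-0.5) @ va.T         # symmetric whitener of each part
    wb = vb @ np.diag(eb**-0.5) @ vb.T
    return np.linalg.svd(wa @ Rab @ wb, compute_uv=False)

rho = canonical_correlations(x1, x2)
p_hat = int(np.sum(rho > 0.5))                 # illustrative threshold
print(p_hat)
```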

    4 Proposed spectral data explorer

After obtaining the number of sources, the next step is to whiten the observed data and perform an ICA classification. However, PCA is still inappropriate for data whitening in hyperspectral images, owing to the fact that a transformed band with small variance does not imply poor image quality; it may be caused by a significant source. To avert this drawback, this study proposes a noise-adjusted version of the data-whitening transformation to replace eqn. 5, that is

    x_i = Λ̃_adj^{-1/2} Φ̃_adj^T W_n^T r_i    (25)

where Λ̃_adj is a p × p leading principal submatrix of Λ_adj and Φ̃_adj is an l × p matrix, the columns of which are the corresponding orthonormal eigenvectors of R_adj; Λ_adj and Φ_adj are the eigenvalue and eigenvector matrices of R_adj, respectively. Several advantages resulting from eqn. 25 are significant: it arranges principal components in descending order of image quality rather than variance as in eqn. 5, and it reduces the noise energy to unity.
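A NumPy sketch of the noise-adjusted whitening of eqn. 25 (synthetic data; the noise covariance is taken as known here, whereas in practice it is estimated by the shift-difference approach described in the procedure):

```python
import numpy as np

rng = np.random.default_rng(6)
l, p, n = 16, 2, 10_000

# Synthetic data with unequal band noise and an assumed-known noise covariance.
M, _ = np.linalg.qr(rng.standard_normal((l, p)))
noise_std = rng.uniform(0.5, 2.0, size=l)
data = (M @ rng.laplace(size=(p, n))
        + noise_std[:, None] * rng.standard_normal((l, n)))
Rn = np.diag(noise_std**2)

# Stage 1: noise whitening, Wn = Phi_n Lambda_n^{-1/2}.
lam_n, phi_n = np.linalg.eigh(Rn)
Wn = phi_n @ np.diag(lam_n**-0.5)
R_adj = Wn.T @ np.cov(data) @ Wn

# Stage 2 (cf. eqn. 25): keep the p leading noise-adjusted components and
# rescale them, x = Lambda_adj^{-1/2} Phi_adj^T Wn^T (r - mean).
lam, phi = np.linalg.eigh(R_adj)
order = np.argsort(lam)[::-1][:p]
T = np.diag(lam[order]**-0.5) @ phi[:, order].T @ Wn.T
x = T @ (data - data.mean(axis=1, keepdims=True))

print(np.round(np.cov(x), 3))   # the retained components are white
```

The retained components are ordered by noise-adjusted image quality, and their covariance is the identity, which is exactly what the FastICA stage requires of its input.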

    To carry out the classification of hyperspectral images without any ground-truth information, the spectral data explorer (SDE) procedure should be performed as follows:

Step 1. Estimate the noise covariance matrix R_n from the hyperspectral image cube. The simplest way to make such an estimate is the shift-difference approach, a sub-function available in the commercial package ENVI [17]. This approach assumes that each pixel contains both signal and noise and that adjacent pixels contain the same signal but different noise. The shift difference is performed on the data by differencing adjacent pixels to the right of and above each pixel and averaging the results to obtain the noise value assigned to the pixel being processed. However, the optimal noise estimate is derived from the shift-difference statistics of a homogeneous area rather than the entire image.
Step 2. Perform NAPCA or PNAPCA and determine the number of sources to be p.
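The shift-difference estimate in Step 1 can be sketched as follows (a NumPy toy with a smooth synthetic "cube"; this simplified variant pools the right and upper differences rather than averaging them per pixel as ENVI does):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy "cube": rows x cols pixels, l bands, slowly varying signal + band noise.
rows, cols, l = 64, 64, 8
yy, xx = np.mgrid[0:rows, 0:cols]
signal = np.stack([np.sin(0.01 * (yy + (b + 1) * xx)) for b in range(l)], axis=-1)
noise_std = 0.3
cube = signal + noise_std * rng.standard_normal((rows, cols, l))

# Shift differences against the right and upper neighbours: for slowly
# varying signal these differences are almost pure noise.
d_right = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, l)
d_up = (cube[1:, :, :] - cube[:-1, :, :]).reshape(-1, l)
diffs = np.concatenate([d_right, d_up])

# Var(a - b) = 2 Var(noise) for i.i.d. noise, hence the factor 1/2.
Rn_hat = np.cov(diffs.T) / 2.0
print(np.round(np.sqrt(np.diag(Rn_hat)), 2))   # per-band noise std, ~0.3 each
```

As the paper notes, any residual signal gradient inflates the estimate, which is why a homogeneous area gives a better result than the whole scene.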


    Fig. 5 Urban scene in Killeen, Texas (2.044 pm channel)

Fig. 6 Results of source number detection for the three techniques (experiment 2)
a PCA
b NAPCA
c PNAPCA with UIMT


Fig. 7 Principal component images
a 7th PCA component
b 8th PCA component
c 9th PCA component
d 10th PCA component

Step 3. Prewhiten the observed data by eqn. 25 and simultaneously reduce the data dimensionality to p.
Step 4. Take a random initial vector w(0) of norm 1. Let k = 1, source number = 1.
Step 5. Let w(k+1) = E{x_i g(w(k)^T x_i)} − E{g′(w(k)^T x_i)}w(k). The expectation can be estimated by using the entire image cube.
Step 6. Divide w(k+1) by its norm.
Step 7. If |w(k+1)^T w(k)| is not close enough to 1, set k = k + 1 and go back to Step 5. Otherwise, output the vector w(k+1) and the deflated data, and go back to Step 4 for the next source. After outputting p vectors, the task is completed.

After the optimal projection matrix W is obtained, the task of source separation can be accomplished by eqn. 7.
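Steps 3-7, with a deflation that keeps each new weight vector orthogonal to those already found, can be sketched in NumPy (a toy with three synthetic sources; the mixing matrix, contrast g = tanh and tolerances are illustrative choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(8)
n, p = 20_000, 3

# Three independent nongaussian sources, mixed and then prewhitened (step 3).
z = np.vstack([rng.laplace(size=n),
               rng.uniform(-1, 1, size=n),
               rng.exponential(size=n) - 1.0])
A = rng.standard_normal((p, p))
r = A @ z
r = r - r.mean(axis=1, keepdims=True)
lam, phi = np.linalg.eigh(np.cov(r))
x = np.diag(lam**-0.5) @ phi.T @ r

g = np.tanh                                   # contrast derivative, G = ln cosh

def gp(y):
    return 1.0 - np.tanh(y)**2

W = np.zeros((p, p))
for comp in range(p):                         # one weight vector per source
    w = rng.standard_normal(p)                # step 4: random unit start
    w /= np.linalg.norm(w)
    for _ in range(300):                      # steps 5-7
        wx = w @ x
        w_new = (x * g(wx)).mean(axis=1) - gp(wx).mean() * w
        # Deflation: stay orthogonal to the rows already extracted.
        w_new -= W[:comp].T @ (W[:comp] @ w_new)
        w_new /= np.linalg.norm(w_new)
        converged = abs(w_new @ w) > 1 - 1e-10
        w = w_new
        if converged:
            break
    W[comp] = w

y = W @ x                                     # recovered sources (cf. eqn. 7)
```

The Gram-Schmidt projection plays the role of the "deflated data" in Step 7: it removes the already-extracted directions so that each new iteration converges to a fresh source.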

    5 Experimental results

To evaluate SDE, two hyperspectral image data sets, from the NASA/JPL AVIRIS (airborne visible/infra-red imaging spectrometer) and the NRL HYDICE (hyperspectral digital imagery collection experiment) sensors, are used. When applying SDE to a hyperspectral image cube, there is no need to calibrate the radiance spectra to reflectance; instead, SDE operates directly on the measured radiance. The only alteration to the data is the removal of bands related to the water absorption regions and of low-SNR bands, since they carry no useful energy.


5.1 Experiment 1: AVIRIS data
The data set used in the first experiment is a June 1992 AVIRIS data set of a mixed agriculture/forestry landscape in the Indian Pine Test Site (IPTS), Northwestern Indiana. The IPTS image is a 200 × 200 pixel scene. Fig. 1 depicts the image of the 0.6675 μm band. Bands corresponding to the water absorption regions, low-SNR bands with no useful energy, and bad bands had been removed before processing, which leaves 176 bands in this investigation. The ground sampling distance (GSD) of this image is 25 m.

To find significant source numbers, three methods are evaluated: PCA, Green's NAPCA method, and the PNAPCA approach. Fig. 2 displays their results. In Fig. 2a, only the first six eigenvalues produced by PCA can be clearly separated, since the gap among the other consecutive signal and noise eigenvalues is not sufficiently large (in this Figure and the remaining experiments, for clarity, only the first 50 eigenvalues are plotted). Figs. 2b and c summarise the results from NAPCA and PNAPCA with UIMT, respectively. Both of them estimate the number of significant sources to be eight. As mentioned earlier, the main drawback of PCA is that a transformed band with small variance does not imply poor image quality; it may be a high-SNR band while other bands of large variance have low SNR. This fact is clearly indicated in Fig. 3, where the 8th PCA component (Fig. 3a) contains mostly noise, but the 14th PCA component (Fig. 3h) has a signal with better SNR. Therefore, NAPCA is more advantageous in terms of resolving the inherent dimensionality problem. Fig. 4 demonstrates the independent components discovered by applying the ICA part of the SDE procedure (steps 4-7), where most of the information has been packed into the first eight noise-adjusted PCA


Fig. 8 Independent components found by applying SDE: components 1-6 are shown in a-f, respectively
a Grass 1
b Tree 1, road 1 and buildings
c Metallic materials
d Paver brick (grey)
e Roof
f Tree 2 and grass 2

components. Vegetation and vegetation-mowed were clearly isolated in ICA component 1, as shown in Fig. 4a. ICA component 2 in Fig. 4b is shared by the grass, woods and hay-windrowed. Road, stone-steel tower and buildings were extracted into ICA component 3, as displayed in Fig. 4c. ICA component 4 in Fig. 4d contained the wheat. Soybean-clean was isolated as ICA component 5 (Fig. 4e). ICA component 6 in Fig. 4f contained the grass/pasture information. Trees and soil were extracted into ICA components 7 and 8, as displayed in Figs. 4g and h, respectively. The results of the ICA mapping presented here agree with other remote sensing analyses of this region [18].

5.2 Experiment 2: HYDICE data
The data used in the second experiment is an urban scene in Killeen, Texas, taken by a HYDICE sensor in May 1996. This is a 307 × 307 pixel scene containing 184 bands. Fig. 5 illustrates the image of the 2.044 μm band. Its GSD is approximately 3 m.

Again, these three techniques are directly applied to the Killeen image to find significant source numbers. Fig. 6 displays these results. As shown in Fig. 6a, results obtained from PCA give clear recognition only for the first eight signal eigenvalues. Inevitably, the drawback of PCA still appears in the HYDICE image. Fig. 7 demonstrates PCA components 7-10. The signal-to-noise ratios of components 7 and 10 are better than those of components 8 and 9. Both NAPCA and PNAPCA with UIMT estimate the number of significant sources to be ten. However, the gap between the groups of signal and noise eigenvalues from NAPCA in Fig. 6b is much more ambiguous than that from PNAPCA in Fig. 6c. Figs. 8 and 9 summarise the ICA components extracted through the use of steps 4-7 in the


Fig. 9 Independent components found by applying SDE: components 7-10 are shown in a-d, respectively
a Road 2 (sand and gravel)
b Road 3 (soil)
c Paver brick (red)
d Tree 3

SDE procedure. As shown, grass 1 was detected as ICA component 1 (Fig. 8a). ICA component 2 is shared by tree 1, road 1 and buildings, as displayed in Fig. 8b. ICA component 3 detects responses from metallic materials; for instance, cars on the highway have been extracted in Fig. 8c. ICA components 4-10 are, in turn, paver brick (grey), roof, tree 2 and grass 2, road 2 (sand and gravel), road 3 (soil), paver brick (red), and tree 3, as shown in Figs. 8d-f and 9a-d. The resulting component maps are consistent with known attributes of the scene that have been determined by field measurements [19].

    6 Conclusions

With the increasing use of hyperspectral images, it becomes necessary to exploit techniques that are capable of quickly reducing the massive volume of data while simultaneously preserving most of the information. To cope with this problem, a novel approach, the spectral data explorer (SDE), has been developed for the detection and classification of materials. The power of SDE is based on the noise-adjusted properties of NAPCA in information packing and the source separation capability of ICA in a blind environment. As a result, the proposed method offers a solution to signature extraction and classification with limited knowledge of ground truth.

    7 Acknowledgments

The authors would like to thank Professor D.A. Landgrebe of Purdue University for providing the AVIRIS data set of the Indian Pine Test Site. They also thank the US Army Topographic Engineering Center for providing the


    HYDICE data of Killeen, Texas. Finally, they would like to thank the National Science Council of the Republic of China for financially supporting this work under contract no. NSC 89-2213-E-014-013.

8 References

1 SINGER, R.B., and McCORD, T.B.: 'Mars: Large scale mixing of bright and dark surface materials and implications for analysis of spectral reflectance'. Proceedings of 10th Lunar and Planetary Science Conference, 1979, pp. 1835-1848

2 SINGER, R.B.: 'Near-infrared spectral reflectance of mineral mixtures: Systematic combinations of pyroxenes, olivine, and iron oxides', J. Geophys. Res., 1981, 86, pp. 7967-7982

3 ADAMS, J.B., and SMITH, M.O.: 'Spectral mixture modeling: a new analysis of rock and soil types at the Viking Lander 1 site', J. Geophys. Res., 1986, 91, pp. 8098-8112

4 SHIMABUKURO, Y.E., and SMITH, J.A.: 'Least squares mixing models to generate fraction images derived from multispectral data', IEEE Trans. Geosci. Remote Sens., 1991, 29, (4), pp. 16-20

5 TU, T.M., CHEN, C.H., and CHANG, C.-I: 'A posteriori least squares orthogonal subspace projection approach to desired signature extraction and detection', IEEE Trans. Geosci. Remote Sens., 1997, 35, (1), pp. 127-139

6 TU, T.M., SHYU, H.C., LEE, C.H., and CHANG, C.-I: 'An oblique subspace projection approach to subpixel classification in hyperspectral images', Pattern Recognit., 1999, 32, (8), pp. 1397-1406

7 TU, T.M., CHEN, C.H., and CHANG, C.-I: 'A noise subspace projection approach to target signature detection and extraction in unknown background for hyperspectral images', IEEE Trans. Geosci. Remote Sens., 1998, 36, (1), pp. 171-181

8 BAYLISS, J., GUALTIERI, J.A., and CROMP, R.F.: 'Analyzing hyperspectral data with independent component analysis'. Proceedings of SPIE Applied Image and Pattern Recognition Workshop, 1997 (http://www.cs.rochester.edu/u/bayliss/spectral/spectral.html)

9 PEARLMUTTER, B.A., and PARRA, L.C.: 'A context-sensitive generalization of ICA'. Proceedings of International Conference on Neural Information Processing, Hong Kong, 1996, pp. 151-157

10 GREEN, A.A., BERMAN, M., SWITZER, P., and CRAIG, M.D.: 'A transformation for ordering multispectral data in terms of image quality with implications for noise removal', IEEE Trans. Geosci. Remote Sens., 1988, 26, (1), pp. 65-74


11 LEE, J.B., WOODYATT, A.S., and BERMAN, M.: 'Enhancement of high spectral resolution remote sensing data by a noise-adjusted principal components transform', IEEE Trans. Geosci. Remote Sens., 1990, 28, pp. 295-304

12 HYVÄRINEN, A.: 'A family of fixed-point algorithms for independent component analysis'. Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, 1997, pp. 3917-3920

13 HYVÄRINEN, A.: 'Fast and robust fixed-point algorithms for independent component analysis', IEEE Trans. Neural Netw., 1999, 10, (3), pp. 626-634

14 COMON, P.: 'Independent component analysis, a new concept?', Signal Process., 1994, 36, (3), pp. 287-314

    15 JUTTEN, C., and HERAULT, J.: Blind separation of sources, part I: an adaptive algorithm based on neuromimetic architecture, Signal Process., 1991, 24, pp. 1-10

16 FUKUNAGA, K.: 'Introduction to statistical pattern recognition' (Academic Press, New York, 1990, 2nd edn.)

17 ENVI user's guide, version 3.2, 1999, Research Systems Inc., USA (http://www.rsinc.com/)

18 LANDGREBE, D.: 'Multispectral data analysis: a signal theory perspective', 1994, School of Electrical Engineering, Purdue University (http://dynamo.ecn.purdue.edu/~biehl/MultiSpec)

19 Hypercube user's manual, version 4.1, 1999, US Army Topographic Engineering Center (http://www.tec.army.mil/Hypercube/)
