Advanced applications of hyperspectral imaging technology for food quality and safety analysis and assessment: A review - Part I: Fundamentals

Di Wu, Da-Wen Sun

Food Refrigeration and Computerised Food Technology (FRCFT), School of Biosystems Engineering, University College Dublin, National University of Ireland, Agriculture & Food Science Centre, Belfield, Dublin 4, Ireland

Innovative Food Science and Emerging Technologies 19 (2013) 1–14
http://dx.doi.org/10.1016/j.ifset.2013.04.014

Article history: Received 23 February 2012; Accepted 21 April 2013; Editor Proof Received Date 24 May 2013

Keywords: Hyperspectral imaging; Imaging spectroscopy; Food quality; Food safety; Image processing; Image analysis; Spectrometry

Abstract

By integrating two classical optical sensing technologies, imaging and spectroscopy, into one system, hyperspectral imaging can provide both spatial and spectral information simultaneously. Therefore, hyperspectral imaging has the capability to rapidly and non-invasively monitor both the physical and morphological characteristics and the intrinsic chemical and molecular information of a food product for the purpose of quality and safety analysis and assessment. As the first part of this review, some fundamental knowledge about hyperspectral imaging is reviewed, including the relationship between spectroscopy, imaging, and hyperspectral imaging; the principles of hyperspectral imaging; instruments for hyperspectral imaging; processing methods for data analysis; and a discussion of advantages and disadvantages.

Industrial relevance: It is anticipated that real-time food monitoring systems based on this technique will meet the requirements of modern industrial control and sorting systems in the near future.

© 2013 Elsevier Ltd. All rights reserved.

Corresponding author. Tel.: +353 1 7167342; fax: +353 1 7167493. E-mail address: [email protected] (D.-W. Sun). URLs: http://www.ucd.ie/refrig, http://www.ucd.ie/sun (D.-W. Sun).

Contents

1. Introduction
2. Relationship between spectroscopy, imaging, and hyperspectral imaging
3. Principles of hyperspectral imaging
   3.1. Classes of spectral imaging
   3.2. Hyperspectral cube
   3.3. Acquisition of hyperspectral images
   3.4. Image sensing modes
4. Hyperspectral imaging instruments
   4.1. Light sources
      4.1.1. Halogen lamps
      4.1.2. Light emitting diodes (LEDs)
      4.1.3. Lasers
      4.1.4. Tunable light sources
   4.2. Wavelength dispersion devices
      4.2.1. Filter wheels
      4.2.2. Imaging spectrographs
      4.2.3. Tunable filters
      4.2.4. Fourier transform imaging spectrometers
      4.2.5. Single shot imagers
   4.3. Area detectors
      4.3.1. CCD detector
      4.3.2. CMOS detector
   4.4. Calibration of hyperspectral imaging system
5. Hyperspectral image processing methods
   5.1. Reflectance calibration of hyperspectral images
   5.2. Image enhancement and spectral preprocessing
   5.3. Image segmentation
   5.4. Object measurement
   5.5. Multivariate analysis
   5.6. Optimal wavelength selection
   5.7. Model evaluation
   5.8. Visualization of quality images
6. Advantages and disadvantages of hyperspectral imaging
7. Conclusions
Acknowledgments
References


1. Introduction

Food products with high quality and safety are always expected and demanded by consumers, leading to the introduction of legislation for food safety and mandatory inspection of food products. The development of accurate, rapid and objective quality inspection systems throughout the entire food process is important for the food industry to ensure the safe production of food during processing operations and the correct labeling of products related to quality, safety, authenticity and compliance. Currently, human visual inspection is still widely used, which, however, is subjective, time-consuming, laborious, tedious and inconsistent. Commonly used instrumental methods are mainly analytical chemical methods, such as mass spectrometry (MS) and high-performance liquid chromatography (HPLC). However, they have several disadvantages, such as being destructive, time-consuming, and unable to handle a large number of samples, and sometimes requiring lengthy sample preparation. Therefore, it is critical and necessary to apply accurate, reliable, efficient and non-invasive alternatives to evaluate quality and quality-related attributes of food products.

Recently, optical sensing technologies have been researched as potential tools for non-destructive analysis and assessment of food quality and safety. In particular, by integrating both spectroscopic and imaging techniques into one system that can acquire a spatial map of spectral variation, hyperspectral imaging (also called imaging spectroscopy or imaging spectrometry) has been widely studied and developed, resulting in many successful applications in the quality assessment of food products. A general overview of applications in quality determination for numerous food products is given in the second part of this review.

2. Relationship between spectroscopy, imaging, and hyperspectral imaging

Non-contact optical techniques such as spectroscopy and imaging are extremely advantageous for online inspection of agricultural and food products to guarantee their quality and safety. Spectroscopy is a promising method for determining the essential qualities of food products based on the measurement of optical properties (Bock & Connelly, 2008; Cen & He, 2007). However, the spectroscopy technique does not give information on the spatial distribution of traits in food products, which greatly limits its application to quantifying spatially distributed and structure-related attributes. On the other hand, measurement of the external features of food products can be achieved by a conventional imaging system, or more specifically computer vision (Du & Sun, 2005, 2006; Sun & Brosnan, 2003; Wu & Sun, 2012; Zheng, Sun, & Zheng, 2006). However, because it operates at visible wavelengths in the form of monochromatic or color images, a conventional imaging system is incapable of inspecting specimens with similar color, classifying complex objects, predicting chemical components, and detecting invisible defects.

With the integration of the main advantages of spectroscopy and imaging, the hyperspectral imaging technique can simultaneously acquire spectral and spatial information in one system, which is critical for the quality prediction of agricultural and food products. The hyperspectral imaging technique can be applied for quantitative prediction of the inherent chemical and physical properties of a specimen as well as their spatial distribution simultaneously. If a conventional spectral measurement answers the question of "what" and conventional imaging answers the question of "where", hyperspectral imaging can answer the question of "where is what". Table 1 shows the main differences among imaging, spectroscopy, multispectral imaging, and hyperspectral imaging techniques.

Table 1. Main differences among imaging, spectroscopy, and hyperspectral imaging techniques (features compared for hyperspectral imaging, spectroscopy, conventional imaging and multispectral imaging: spatial information, spectral information, multi-constituent information, detectability of objects with small size, flexibility of spectral extraction, and generation of quality-attribute distribution).

    3. Principles of hyperspectral imaging

A good understanding of the principles of hyperspectral imaging is crucial for the use of this tool. Therefore, some fundamental knowledge is introduced in this section.

    3.1. Classes of spectral imaging

A spectral imaging system produces a stack of images of the same object at different spectral wavelength bands. There are three main classes in the field of spectral imaging, namely multispectral, hyperspectral, and ultraspectral imaging. The concept behind these classes is similar; the main difference is the number of images within the spectral cube. Hyperspectral imaging systems acquire images at an abundance of contiguous wavelengths (with bandwidths normally less than 10 nm). There are usually dozens or hundreds of images, so that every pixel in the hyperspectral image has its own spectrum over a contiguous wavelength range (Ariana & Lu, 2008b). Unlike hyperspectral imaging, multispectral imaging systems cannot provide a real spectrum in every image pixel. Multispectral images usually have fewer than ten spectral bands, although some have dozens. Therefore, the spectral resolution of multispectral imaging systems is usually larger than 10 nm. Besides using bandpass filters, a 3CCD (charge-coupled device) camera is also commonly used for acquiring multispectral images. A 3CCD camera has three discrete image sensors and a dichroic beam splitter prism that splits the light into three spectral bands. Although the spectral resolution of multispectral imaging is lower than that of hyperspectral imaging, its acquisition speed is faster. The acquisition speeds of 3CCD cameras are usually dozens of frames per second, while it usually takes several seconds to measure a hyperspectral image. There is no quantitative distinction between hyperspectral and ultraspectral images; it is usually believed that ultraspectral imaging systems have a very fine spectral resolution.

3.2. Hyperspectral cube

A hyperspectral image is a three-dimensional (3D) hyperspectral cube (also called hypercube, spectral cube, spectral volume, datacube, or data volume), which is composed of voxels (also called vector pixels) containing spectral information (wavelengths λ) as well as two-dimensional spatial information (x rows and y columns). As an example, the hyperspectral cube of a fish fillet acquired using reflectance mode is illustrated in Fig. 1. The raw hyperspectral cube consists of a series of contiguous sub-images, one behind the other, at different wavelengths (Fig. 1.a). Each sub-image provides the spatial distribution of the spectral intensity at a certain wavelength. That means a hyperspectral image, described as I(x, y, λ), can be viewed either as a separate spatial image I(x, y) at each individual wavelength (λ), or as a spectrum I(λ) at each individual pixel (x, y). From the first view, any spatial image within the spectral range of the system can be picked out of the hyperspectral cube at a certain wavelength within the wavelength sensitivity (Fig. 1.b). The gray-scale image shows the different spectral intensities of the imaged object at a certain wavelength due to the distribution of its corresponding chemical components. For example, an image within the hypercube at a single waveband centered at 980 nm with a bandwidth of 5 nm (Fig. 1.b) can show the moisture distribution in the fish fillet, which is difficult to observe in an RGB image (Fig. 1.c). The pixels with high moisture content in this image appear as the darkest parts, since an absorption band of the O–H stretching second overtone of water is around 980 nm. From the second view, the resulting spectrum at a certain position within the specimen can be considered as the unique spectral fingerprint of this pixel, characterizing the composition of that particular pixel (Fig. 1.d).

Fig. 1. Schematic diagram of a hyperspectral image (hyperspectral cube) for a piece of fish fillet.
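To make the two views of the hypercube concrete, the following minimal sketch (Python with NumPy, not part of the original paper) extracts a single-band image and a single-pixel spectrum from an array indexed as I(x, y, λ); the array contents, wavelength axis and pixel coordinates are illustrative assumptions.

```python
import numpy as np

# Hypothetical hypercube: 200 x 150 pixels, 121 bands from 900 to 1500 nm.
# In practice the cube would be loaded from the instrument's file format.
wavelengths = np.linspace(900, 1500, 121)          # nm, assumed spectral axis
cube = np.random.rand(200, 150, wavelengths.size)  # I(x, y, lambda), reflectance

# View 1: a spatial image I(x, y) at a chosen wavelength (here ~980 nm).
band_index = int(np.argmin(np.abs(wavelengths - 980)))
band_image = cube[:, :, band_index]                # 2-D gray-scale image

# View 2: the spectrum I(lambda) of one pixel, its "spectral fingerprint".
pixel_spectrum = cube[50, 75, :]                   # 1-D spectrum at pixel (x=50, y=75)

print(band_image.shape, pixel_spectrum.shape)      # (200, 150) (121,)
```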

    3.3. Acquisition of hyperspectral images

There are four approaches to acquiring 3-D hyperspectral image cubes I(x, y, λ), namely point scanning, line scanning, area scanning, and the single shot method, as illustrated in the upper half of Fig. 2. In the point scanning method (also known as the whiskbroom method), a single point is scanned at one pixel to provide the spectrum of this point (Fig. 2.a), and other points are scanned by moving either the detector or the sample along the two spatial dimensions (x and y). The obtained hyperspectral cube is stored in the band-interleaved-by-pixel (BIP) format. For an image stored in BIP format, the first pixel for all bands is in sequential order, followed by the second pixel for all bands, followed by the third pixel for all bands, and so on, interleaved up to the number of pixels. This format is optimal for accessing the spectral information of each pixel. The disadvantages of whiskbroom scanning are that positioning the sample is very time-consuming and that advanced repositioning hardware is needed to ensure repeatability. The second approach, illustrated in Fig. 2.b, is called the line scanning or pushbroom method, which records a whole line of the image as well as the spectral information corresponding to each spatial pixel in that line simultaneously. A complete hyperspectral cube is obtained as the line is scanned along the x dimension (Fig. 2.b), and the obtained cube is stored in the band-interleaved-by-line (BIL) format. BIL is a scheme for storing the pixel values of an image in a file band by band for each line, or row, of the image. Because of its characteristic of continuous scanning in one direction, line scanning is particularly suitable for the conveyor belt systems that are commonly used in food processing lines. Therefore, line scanning is the most popular method of acquiring hyperspectral images for food quality and safety inspection. The disadvantage of the pushbroom technique is that the exposure time can be set at only one value for all wavelengths. Such an exposure time has to be short enough to avoid saturation of the spectrum at any wavelength, resulting in underexposure of other spectral bands and low accuracy of their spectral measurement.

The above two methods are spatial scanning methods, while area or plane scanning (also known as the band sequential or wavelength scanning method) is a spectral scanning method, as illustrated in Fig. 2.c. This approach keeps the image field of view fixed and acquires a 2-D monochrome image (x, y) with full spatial information at a single wavelength at a time. Such scanning is repeated over the whole wavelength range, resulting in a stack of single-band images stored in the band sequential (BSQ) format. As a very simple format, BSQ encodes each line of the image at the first band, followed immediately by the next line in the same spectral band, followed by the second band for all lines, followed by the third band for all lines, and so on, interleaved up to the number of bands. This format provides easy access to the spatial information (x, y) at a single spectral band. As the detector is exposed to only a single wavelength each time, a suitable exposure time can be set for each wavelength. In addition, area scanning does not need to move either the sample or the detector and is suitable for applications where the object remains stationary for a while, such as excitation–emission fluorescence imaging. A disadvantage of area scanning is that it is not suitable for a moving sample or real-time inline inspection. Finally, the single shot method records both spatial and spectral information using a large area detector with one exposure to capture the image (Fig. 2.d), making it very attractive when fast hyperspectral imaging is required. However, it is still in the early stage of development and has limited spatial resolution and narrow spectral ranges.
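The three storage formats described above differ only in how the same cube is interleaved when written out. The following minimal NumPy sketch (an illustration added here, not the authors' code) rearranges an in-memory cube indexed as (line, sample, band) into BIP, BIL and BSQ orderings; in practice the cubes would be read and written with dedicated file readers.

```python
import numpy as np

lines, samples, bands = 4, 3, 5                    # tiny illustrative cube
cube = np.arange(lines * samples * bands).reshape(lines, samples, bands)

# BIP (band-interleaved-by-pixel): all bands of pixel 1, then pixel 2, ...
bip = cube.reshape(-1)                             # axis order (line, sample, band)

# BIL (band-interleaved-by-line): for each line, all samples of band 1, band 2, ...
bil = np.transpose(cube, (0, 2, 1)).reshape(-1)    # axis order (line, band, sample)

# BSQ (band sequential): the whole image at band 1, then band 2, ...
bsq = np.transpose(cube, (2, 0, 1)).reshape(-1)    # axis order (band, line, sample)

# All three hold the same values; only the interleaving differs.
assert bip.size == bil.size == bsq.size == cube.size
```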

    3.4. Image sensing modes

There are three common sensing modes for hyperspectral imaging, namely reflectance, transmittance and interactance, as illustrated in the lower half of Fig. 2. The positions of the light source and the optical detector (camera, spectrograph, and lens) are different for each acquisition mode. In reflectance mode, the detector captures the light reflected from the illuminated sample in a specific configuration to avoid specular reflection (Fig. 2.e). External quality features, such as size, shape, color, surface texture and external defects, are typically detected using reflectance mode. In transmittance mode, the detector is located on the opposite side of the light source (Fig. 2.f) and captures the light transmitted through the sample, which carries more valuable internal information but is often very weak (Schaare & Fraser, 2000). Transmittance mode is usually used to determine internal component concentrations and to detect internal defects of relatively transparent materials such as fish, fruit, and vegetables. However, transmittance mode has a low signal level owing to light attenuation and is affected by the thickness of the sample. In interactance mode, both the light source and the detector are located on the same side of the sample and parallel to each other (Fig. 2.g). With such a setup, the interactance mode can probe deeper into the sample and has fewer surface effects compared with reflectance mode. Meanwhile, the interactance mode reduces the influence of thickness, which is a practical advantage over transmission. It should be noted that a special setup is required in the transmittance mode to seal the light in order to prevent specular reflection from directly entering the detector (Nicolai et al., 2007).

Fig. 2. Acquisition approaches of hyperspectral images (scanning directions are shown by arrows, and gray areas show the data acquired each time) and image sensing modes.

4. Hyperspectral imaging instruments

The instrumentation of hyperspectral imaging is fundamental for acquiring reliable hyperspectral images of high quality. Selection of the components of the instrument and the design of their setup and calibration require a good understanding of the configuration and calibration of a hyperspectral imaging system.

    4.1. Light sources

Light sources generate light as an information carrier to excite or illuminate the target, and are an essential part of optical inspection systems. Typical light sources used in hyperspectral imaging systems include halogen lamps, light emitting diodes, lasers, and tunable light sources.

4.1.1. Halogen lamps

As a broadband illumination source, halogen lamps are commonly used for illumination in the visible (VIS) and near-infrared (NIR) spectral regions. Typically, a lamp filament made of tungsten wire is placed in a quartz glass bulb filled with a halogen gas such as iodine or bromine. The output light is generated by incandescent emission when the filament is at a high temperature. The light has a smooth continuous spectrum over the range of wavelengths from visible to infrared without sharp peaks. Halogen lamps work at low voltage and are considered an all-purpose illumination source. Tungsten halogen lamps have been used as illumination units in hyperspectral reflectance measurements (Wu, Shi et al., 2012; Wu & Sun, 2013). In hyperspectral transmittance measurements, halogen lamps with high intensity have also been used for detecting the internal properties of food (Ariana & Lu, 2008a). The disadvantages of halogen lamps include a relatively short lifetime, high heat output, spectral peak shifts due to temperature change, output instability due to operating voltage fluctuations, and sensitivity to vibration.

4.1.2. Light emitting diodes (LEDs)

An LED is a semiconductor light source, which has advanced rapidly due to its advantages of small size, low cost, fast response, long lifetime, low frequency of bulb replacement, low heat generation, low energy consumption, robustness, being cool to the touch without risk of burning, and insensitivity to vibration. LEDs are solid-state sources that do not use a filament for incandescent emission. LEDs emit light when a semiconductor is electrified, and were first used as small indicator lights on instrument panels. Depending on the materials used for the p–n junction, LEDs can produce not only narrowband light at different wavelengths in the ultraviolet, visible or infrared region, but also high-intensity broadband white light. Owing to their directional emission, LEDs are good at spot lighting, as photons can be sent in one direction with little loss of energy. According to different illumination requirements, LEDs can be assembled in different arrangements such as spot, line, and ring lights. Because of the benefits mentioned above, LED lights have started to become the illumination units of hyperspectral imaging systems in food inspection applications (Park, Yoon et al., 2011). The disadvantages of LEDs include sensitivity to wide voltage fluctuations and to junction temperature, lower light intensities compared with halogen lights, and grainy light when multiple LEDs are used in bulbs. Currently, the wavelength ranges of LEDs are mainly from the ultraviolet to the short-wave near infrared, while some LEDs emit light from the long-wave near infrared to the mid-infrared region. With the development of new materials and electronics, LED technology is still advancing and will become a mainstream light source.

4.1.3. Lasers

Unlike tungsten halogen lamps and white LEDs, which generate broadband light, lasers are directional monochromatic light sources widely used as excitation sources in fluorescence and Raman measurements. Lasers generate light by stimulated emission. There are three basic components of a laser, namely a resonant optical cavity (optical resonator), a laser gain medium (active laser medium), and a pump source to excite the particles in the gain medium. Monochromaticity, directionality, and coherence are three unique properties of laser light. When a food is excited by a monochromatic beam of light with high energy, the electrons in molecules of certain compounds of the food will be excited and emit light of lower energy over a broad wavelength range, resulting in fluorescence emission or Raman scattering. Both fluorescence imaging and Raman imaging are sensitive optical techniques that carry composition information at the pixel level and can detect subtle changes in food quality. Recently, lasers have been utilized as excitation sources in hyperspectral fluorescence imaging (Cho et al., 2009) and Raman imaging (Qin, Chao, & Kim, 2011) for quality inspection of food. Moreover, because of their ability to produce narrowband pulsed light, LEDs are now also used as excitation sources for fluorescence measurements in food quality inspection (Yang et al., 2012), although the light generated by lasers has higher intensity and narrower bandwidth than that from LEDs.

4.1.4. Tunable light sources

In many current hyperspectral imaging systems for quality and safety inspection of food, the wavelength dispersion device is placed between the detector and the sample to disperse light into different wavelengths after interaction with the sample. There is another, equivalent, approach that combines the broadband illumination and the wavelength dispersion device together, which is called a tunable light source. Tunable light sources allow direct area scanning to obtain both spatial and spectral information of the sample by placing the wavelength dispersion device in the illumination light path instead of the imaging light path. Because only narrowband light is incident on the object at a time, the intensity of tunable light sources is relatively weak, which can reduce high irradiance and heat damage to the sample. Currently, tunable light sources have been used for inspecting historical documents, which require weak illumination for sample protection (Klein, Aalderink, Padoan, de Bruin, & Steemers, 2008). In addition, tunable light sources are mainly used for area scanning and are not efficient for point and line scanning. Therefore, tunable light sources are practically not suitable for conveyor belt systems.

    4.2. Wavelength dispersion devices

Wavelength dispersion devices are important for hyperspectral imaging systems using broadband illumination light sources. They have the function of dispersing broadband light into different wavelengths. Typical examples include filter wheels, imaging spectrographs, acousto-optic tunable filters, liquid crystal tunable filters, Fourier transform imaging spectrometers, and single shot imagers.

4.2.1. Filter wheels

A filter wheel carrying a set of discrete bandpass filters is the most basic and simple device for wavelength dispersion. The bandpass filters transmit light at a particular wavelength efficiently while blocking light at other wavelengths. A broad range of filters from the ultraviolet and visible to the near infrared, with various specifications, is commercially available to satisfy different demands. Limitations of filter wheels include mechanical vibration from moving parts, slow wavelength switching, and image mismatch due to filter movement.

4.2.2. Imaging spectrographs

An imaging spectrograph, which generally operates in line-scanning mode, has the capability of dispersing incident broadband light into different wavelengths instantaneously and generating a spectrum for each point on the scanned line without the use of moving parts. Diffraction gratings are generally used in imaging spectrographs for wavelength dispersion. A diffraction grating is a collection of equally spaced reflecting or transmitting elements separated from one another by a distance on the order of the wavelength of the light being studied. Upon diffraction, an electromagnetic wave incident on a grating has its electric field amplitude, or phase, or both, modified in a predictable manner (Palmer, 2005). There are two main forms of imaging spectrographs, namely those based on reflection gratings (i.e., a grating superimposed on a reflective surface) and those based on transmission gratings (i.e., a grating superimposed on a transparent surface). In imaging spectrographs utilizing a transmission grating, a prism–grating–prism (PGP) component is commonly used. After entering through the entrance slit of the spectrograph, the incoming beam is collimated by the front lens and is then dispersed at the PGP component into different wavelengths in transmission. Finally, the dispersed light is projected onto an area detector through the back lens to generate a two-dimensional matrix, where one dimension represents a continuous spectrum and the other spatial information. Transmission gratings are nearly independent of polarization and can be easily mounted with a lens and an area detector to form a pushbroom hyperspectral imaging camera. However, transmission gratings are limited by the properties of the grating substrate (or resin), and cannot operate at diffraction angles as high as reflection gratings. As the other main form of imaging spectrograph, a typical reflection-grating design generally includes an entrance slit, two concentric spherical mirrors, an aberration-corrected convex reflection grating, and a detector. After entering through the entrance slit, the incoming light is reflected by one of the mirrors to the reflection grating, which disperses the incident beam so that the direction of light propagation depends on its wavelength. The dispersed light is then reflected by the other mirror to the detector, where a continuous spectrum is received at different pixels. It is believed that this configuration offers several advantages, such as high image quality, freedom from higher-order aberrations, low distortion, low f-number, and large field size (Bannon & Thomas, 2005). The polarization effects of reflective spectrographs depend on the configuration and are generally less than 50%. In addition, the efficiencies of reflective optical components (e.g., mirrors) are generally higher than those of transmission components (e.g., prisms). Therefore, imaging spectrographs with a reflection grating can provide a high signal-to-noise ratio (SNR) and are ideal for low-light measuring conditions such as fluorescence imaging and Raman imaging. A main disadvantage of reflection gratings is that costly methods are needed to correct the inherently induced distortions, while transmission gratings use on-axis optics that automatically have fewer aberrations.

4.2.3. Tunable filters

Acousto-optic tunable filters (AOTFs) and liquid crystal tunable filters (LCTFs) are both electronically tunable bandpass filters. By using acousto-optic interactions in a crystal, an AOTF can isolate light at a single wavelength from a broadband source through an applied acoustic field. An LCTF has electronically controlled liquid crystal cells inserted between two parallel polarizers to transmit light of a specific wavelength while light energy outside the passband is rejected. Similar to a bandpass filter, tunable filters only pass light at one particular wavelength at a time. Unlike fixed interference filters, electronically tunable filters such as AOTFs and LCTFs can be flexibly switched to different wavelengths by varying the applied radio frequency under computer control. Tunable filters have moderate spectral resolution (about 5–20 nm) and a broad wavelength range (about 400–2500 nm). In addition, because tunable filters have no moving parts, they avoid the speed limitation, mechanical vibration, and image misregistration that constrain rotating filter wheels. In comparison with AOTFs, LCTFs take much longer to switch from one wavelength to another (milliseconds versus microseconds), but have better image quality. In addition, AOTFs require a more stringent optical design than LCTFs. The shortcomings of tunable filters include a high f-number, which leads to a small light collection angle and low light collection efficiency, the need for linearly polarized incident light, which can cause 50% light loss, and longer exposure times than imaging spectrographs under similar illumination conditions. In research on food quality and safety inspection, LCTF-based hyperspectral imaging systems have been used for detecting sour skin-infected onions (Wang, Li, Tollner, Gitaitis, & Rains, 2012), prediction of apple firmness (Peng & Lu, 2006), and classification of wheat (Choudhary, Mahesh, Paliwal, & Jayas, 2009). AOTFs have also started to be used in food analysis (Park, Lee et al., 2011).

4.2.4. Fourier transform imaging spectrometers

Fourier transform imaging spectrometers employ an interferometer to self-interfere a broadband light beam, resulting in an interferogram that contains its spectral data. The generated interferogram is then processed by an inverse Fourier transform to resolve the constituent frequencies (or wavelengths) of the broadband light. Michelson and Sagnac are the two main interferometer designs for current Fourier transform imaging spectrometers. Both designs have a beamsplitter and two flat mirrors. The difference between the two designs is that in the Michelson interferometer one mirror and the beamsplitter are fixed, while the other mirror moves to introduce an optical path difference for generating the interferogram. In the Sagnac interferometer, the two mirrors are fixed and the beamsplitter can be slightly rotated to create the interference fringes. In addition, the two mirrors in the Michelson interferometer are perpendicular to each other, while in the Sagnac spectrometer the two mirrors are not perpendicular but have a fixed angle (<90°) between them. Because it has no moving components, the Sagnac spectrometer has good mechanical stability and compactness, but relatively low resolution. On the contrary, the moving mirror in the Michelson spectrometer increases its sensitivity to vibrations. Moreover, the Sagnac spectrometer is similar to a dispersive spectrometer in that only one spatial dimension is collected in one scan and the spectra are acquired along a single line in the perpendicular direction. A field scanning mirror or a moving platform is commonly used in Sagnac spectrometers to acquire the second spatial dimension. The difference between a dispersive spectrometer and a Sagnac spectrometer is that the former measures the spectra at different wavelengths directly, while the latter needs an additional step of taking a Fourier transform. On the other hand, the Michelson spectrometer has a pixel-based interferogram that allows imaging in two dimensions. However, in a Michelson spectrometer, a time interval is required to shift the moving mirror. Therefore, it takes a long time to collect the interferogram for a fine spectral resolution and a high SNR. Although Fourier transform imaging spectrometers are now mainly used in bioanalytical chemistry and medicine, they are considered to have considerable potential impact in food science, owing to their benefits of high spectral resolution, wide wavelength range, high optical throughput, and a spatial resolution down to a few micrometers.

4.2.5. Single shot imagers

Neither spatial scanning methods (e.g. whiskbroom and pushbroom) nor spectral scanning methods (e.g. staring imaging) can acquire hyperspectral images of fast-moving samples. On the contrary, single shot imagers can collect multiplexed spatial and spectral data simultaneously, making it possible to acquire a hypercube at video frame rates. Although single shot imagers are still at an early stage, there are already several systems available, such as the miniature staring HPA™ imager (Bodkin, 2010), the image mapping spectroscopy endoscope (Kester, Bedard, Gao, & Tkaczyk, 2011), and the image mapping spectrometer systems (Tkaczyk, Kester, & Gao, 2011). There is a trade-off between the temporal and spectral resolution of current single shot imagers: the finer the spectral resolution, the lower the temporal resolution, and vice versa. With the feature of capturing hyperspectral images on the millisecond time scale, single shot devices are especially suited to real-time applications and have a bright future in food quality and safety inspection.

    4.3. Area detectors

Area detectors have the function of quantifying the intensity of the acquired light by converting incident photons into electrons. CCD (charge-coupled device) and CMOS (complementary metal-oxide-semiconductor) cameras are the two major types of solid-state area detectors. Photodiodes made of light-sensitive materials are the basic units of both CCDs and CMOS sensors, converting radiation energy into an electrical signal. Silicon (Si), indium gallium arsenide (InGaAs), and mercury cadmium telluride (MCT or HgCdTe) are three commonly used materials for hyperspectral imaging. Silicon is used for acquiring spectral information in the ultraviolet, visible and short-wave near infrared regions. Because of their advantages of small size, high speed, low noise and good spectral response, silicon-based CCD cameras have been widely used for the inspection of food quality (Park, Kise, Windham, Lawrence, & Yoon, 2008; Yoon, Lawrence, Smith, Park, & Windham, 2008). With its advantages of fairly flat and high quantum efficiency in the near infrared region, InGaAs, an alloy of indium arsenide (InAs) and gallium arsenide (GaAs), is commonly used for detecting spectra at 0.9–1.7 μm (ElMasry, Sun, & Allen, 2012; Wu, Sun, & He, 2012). The detection range of InGaAs can be further extended to 2.6 μm by adjusting the percentages of InAs and GaAs. However, InGaAs photodiodes have higher costs than silicon-based photodiodes. For the detection of spectra in the mid-infrared region, MCT is the material of choice, with the features of a large spectral range and high quantum efficiency. MCT is an alloy of CdTe and HgTe and is considered the third most well-regarded semiconductor after silicon and gallium arsenide. The detection ranges of MCT include the mid-infrared region (about 2.5 to 25 μm) and the near infrared region (about 0.8–2.5 μm). MCT photodiodes are commonly used in hyperspectral imaging systems that require acquisition of spectra in the long-wave near infrared region (about 1.7–2.5 μm) for food quality inspection (Manley, Williams, Nilsson, & Geladi, 2009; Vermeulen, Pierna, van Egmond, Dardenne, & Baeten, 2012). Other detector materials include lead selenide (PbSe) operating at wavelengths between 1.5 and 5.2 μm, lead sulfide (PbS) between 1 and 3.2 μm, indium antimonide (InSb) between 1 and 6.7 μm, platinum silicide (PtSi) between 1 and 5 μm, germanium (Ge) between 0.8 and 1.7 μm, and deuterated L-alanine-doped triglycine sulfate (DLaTGS) between 0.8 and 25 μm. The converted electrical signals are then digitized to generate the hypercubes using an analog-to-digital (A/D) converter.

4.3.1. CCD detector

Both CCD and CMOS cameras consist of millions of photodiodes (known as pixels) tightly arranged in rows forming an array. During the measurement of the array, the electric charges accumulated in the photodiodes must be moved out of the array to a place where the quantity of charge can be measured. There are generally four designs of CCD architecture for measuring a two-dimensional region, namely full frame, frame transfer, interline transfer, and frame interline transfer. The full frame is the simplest CCD architecture, in which the accumulated charges move row by row into a horizontal shift register. Image measurement using the full frame design is relatively slow, because each line is read out one by one (known as a progressive scan) under the control of a mechanical shutter to avoid interference from newly generated charge. Different from the full frame design, the interline design has additional vertical shift registers that are adjacent to the corresponding photodiodes and covered by opaque material to shield them from the incident light. The function of the vertical shift registers is to collect and pass on the charges from each photodiode. The horizontal shift register then reads out the collected charges row by row. Therefore, signal accumulation (exposure) and readout can be done simultaneously in the interline design. However, the vertical shift registers are opaque and therefore decrease the open ratio of the light-sensitive area. Recently, an improvement has been made by using on-chip lenses, resulting in an increase of over 70% in the overall quantum efficiency. The frame transfer design is an extended version of the full frame design, adding a storage frame next to the integration frame that consists of the photodiodes. The storage frame has the same size as the integration frame and is covered by an opaque mask. After acquisition in the integration frame, the charges of the whole frame are shifted rapidly into the storage frame. While the charges in the storage frame are transferred into the horizontal shift register, a new image can be captured in the integration frame. Compared with the full frame design, the frame transfer design has a faster frame rate, but also a larger size and more complex control electronics. The frame interline transfer design combines the principles of both frame transfer and interline transfer to further accelerate image acquisition. Charges accumulated in the photodiodes are transferred to the vertical shift registers, and then shifted to the storage frame as a whole. However, the frame interline transfer design also combines the drawbacks of both frame transfer and interline transfer, namely a low light efficiency and a high cost for the doubled frame area. In the inspection of food quality and safety, full frame (Devaux et al., 2006), frame transfer (Mendoza, Lu, Ariana, Cen, & Bailey, 2011; Yoon, Park, Lawrence, Windham, & Heitschmidt, 2011), and interline transfer (Singh, Jayas, Paliwal, & White, 2010a, 2010b) designs have all been used to meet the requirements of different applications.

4.3.2. CMOS detector

The CMOS image sensor is considered to have the potential to compete against the CCD. The main difference between these two types of detectors is that both the photodetector and the readout amplifier of each pixel are included within the CMOS image sensor (Litwiller, 2005). After incident photons are converted to electrons by the photodiodes, a voltage signal is generated from the integrated charge by optically insensitive transistors adjacent to each photodiode in the CMOS sensor, and then read out over wires. Because the wires used in CMOS can transfer signals very fast, CMOS cameras are especially suitable for the high-speed imaging required in online industrial inspection. In CCD technology, blooming occurs when the charge in a pixel exceeds the saturation level and starts to fill adjacent pixels. However, because each pixel includes both a photodetector and a readout amplifier, each pixel in a CMOS array is independent of its neighbours, making it immune to blooming. Moreover, owing to this structure, CMOS offers random addressability of each particular pixel through an X–Y address. Besides high-speed imaging and random addressability, there are many other advantages of CMOS image sensors, such as small size, low cost, a single power supply, and low power consumption, which make them competitive in the consumer electronics market. CMOS has been used in hyperspectral imaging systems for food quality inspection (Qiao, Ngadi, Wang, Gariepy, & Prasher, 2007; Qiao, Wang et al., 2007). The constraint of CMOS cameras is their higher noise and dark current compared with CCDs, because of the on-chip circuits used to transfer and amplify signals, and consequently a lower dynamic range and sensitivity than CCDs.

    4.4. Calibration of hyperspectral imaging system

Appropriate calibrations for a hyperspectral imaging system are essential to ensure the reliability of the acquired hyperspectral image data and to guarantee consistent performance of the system. Even if the environment of data measurement is carefully controlled, inconsistent spectral profiles of reference spectra may be acquired by some systems. Therefore, it is necessary to eliminate this variability by using a standardized and objective calibration and validation protocol. The goals of the calibration process are to standardize the spectral and spatial axes of the hyperspectral image, validate the acceptability and reliability of the extracted spectral and spatial data, determine whether the hyperspectral imaging system is in running condition, evaluate the accuracy and reproducibility of the acquired data under different operating conditions, and diagnose instrumental errors if necessary. The major types of calibration include wavelength calibration, spatial calibration, and curvature calibration.

Hyperspectral imaging systems with imaging spectrographs disperse incident light into different wavelengths, which then fall on different pixels along the spectral dimension of the detector. The wavelength of each pixel is initially unknown. The wavelengths of the dispersed light on the pixels might also change under different operating conditions, which influences the accuracy and reproducibility of image acquisition. Therefore, wavelength calibration is needed to assign a specific wavelength to each pixel along the spectral dimension. The form of the data from hyperspectral images is pixel intensity versus pixel index, and becomes intensity versus wavelength after wavelength calibration. Hyperspectral imaging systems using fixed or tunable filters do not need wavelength calibration, as the wavelength of each filter is already known. Wavelength calibration commonly uses wavelength calibration lamps to identify each wavelength as a function of its pixel index. Wavelength calibration lamps produce narrow, constant, intense, stable, and specific lines from the excitation of various rare gases and metal vapors. Various wavelength calibration lamps cover different wavelength ranges from the ultraviolet to the infrared for calibrating different systems. Typical wavelength calibration lamps include pencil-style lamps, battery-powered lamps, and high-power lamps using argon (Ar), krypton (Kr), mercury (Hg), mercury/argon (Hg/Ar), neon (Ne), xenon (Xe), etc. In the wavelength calibration process, the lamp is first scanned by the hyperspectral imaging system and the spectral profile is extracted along the spectral dimension of the image. The spectral peaks with known wavelengths and their corresponding pixel indices along the spectral dimension are then identified. A quantitative regression equation is established between the wavelengths and the pixel indices; linear, quadratic, cubic, and trigonometric equations are commonly used. As a result, the wavelengths of all pixels along the spectral dimension are identified using the resulting regression.

Spatial calibration of a hyperspectral imaging system determines the dimensions and resolution of the field of view. Spatial calibration approaches differ for hyperspectral imaging systems with different image acquisition modes. As area scanning acquires a series of images with the same dimensions at different spectral bands, the spatial calibration is conducted on a selected image with high SNR using resolution test charts such as the ISO 12233 Test Chart, the NBS 1952 Resolution Test Chart, and the 1951 USAF resolution test chart. Line scanning hyperspectral imaging systems might have different resolutions for the two spatial dimensions, because the pixels along the y direction of the hyperspectral cube are acquired using the imaging spectrograph while the pixels along the x direction are acquired by the stepwise movement of the sample. The resolution in the x direction is the step size of the movement per pixel, and the range of the x direction depends on the distance of the movement. The calibration for the y direction is conducted by scanning a target printed with thin parallel lines. The resolution in the y direction is determined by dividing the distance of a range on the target by the number of pixels spanning that range within the scanned image. The range of the y direction is calculated by multiplying the resolution by the number of pixels in the spatial dimension of the detector.
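The wavelength and spatial calibration computations described above reduce to a simple regression and two small calculations. Below is a minimal sketch, assuming hypothetical calibration-lamp peak positions and target measurements (all numbers are illustrative, not taken from any particular system).

```python
import numpy as np

# Wavelength calibration: assumed peak pixel indices found in the lamp scan,
# matched to the lamp's known emission wavelengths (illustrative values).
peak_pixels = np.array([35, 112, 248, 371, 450])
known_wavelengths = np.array([435.8, 546.1, 763.5, 912.3, 1013.9])  # nm

# Fit a quadratic pixel-index -> wavelength mapping (linear or cubic also common).
coeffs = np.polyfit(peak_pixels, known_wavelengths, deg=2)
calibrated_wavelengths = np.polyval(coeffs, np.arange(512))  # e.g. 512 spectral pixels

# Spatial calibration (y direction), assumed example: two marks on the printed
# target are 50 mm apart and span 125 pixels; the detector has 320 spatial pixels.
y_resolution = 50.0 / 125            # mm per pixel
y_range = y_resolution * 320         # field of view along y, in mm
```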

Curvature calibration is intended to correct the effect of light reflection from foods with spherical geometry, so that the spectrum at any pixel is independent of its location. Gomez-Sanchis et al. (2008) proposed a curvature calibration for mandarins, in which the amount of reflected light is corrected according to the angle between the incident light and the direction normal to the surface:

ρc,λ(x, y) = ρλ(x, y) / [D cos(θx,y) + (1 − D)]    (1)

where ρc,λ(x, y) is the corrected spectrum at a point (x, y) at wavelength λ, ρλ(x, y) is the measured spectrum, D is the ratio between the direct light and the total average light, and cos(θx,y) modulates the amount of direct light reflected at each pixel. D and the angle of incidence are different for each pixel within the image of the sample, and therefore should be determined accordingly. For this purpose, Gomez-Sanchis et al. (2008) developed a digital elevation model (DEM) to obtain the geometric parameters of the fruit. The results showed that the proposed calibration was effective in minimizing the adverse side effects produced by the curvature of the fruit. In another study, Gowen et al. (2008) found that multiplicative scatter correction (MSC) was efficient in decreasing the spectral variability of mushrooms due to curvature.
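A sketch of how the per-pixel correction of Eq. (1) could be applied, assuming maps of D and of the incidence angle θ are already available (e.g. from a DEM, as in Gomez-Sanchis et al. (2008)); this illustrates the equation as reconstructed above and is not the authors' implementation.

```python
import numpy as np

def correct_curvature(cube, D, theta):
    """Apply Eq. (1) band by band.

    cube  : (x, y, bands) measured reflectance
    D     : (x, y) assumed ratio of direct light to total average light per pixel
    theta : (x, y) assumed incidence angle per pixel, in radians (e.g. from a DEM)
    """
    factor = D * np.cos(theta) + (1.0 - D)      # per-pixel correction factor
    return cube / factor[:, :, np.newaxis]      # broadcast over the band axis
```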

    5. Hyperspectral image processing methods

Because the data volume of a hyperspectral image is usually very large and suffers from collinearity problems, chemometric algorithms are required for mining the detailed important information. Typical steps of a full algorithm for analyzing hyperspectral images are outlined in the flowchart illustrated in Fig. 3. Commercially available software tools for hyperspectral image processing are mainly the Environment for Visualizing Images (ENVI) software (Research Systems Inc., Boulder, CO, USA), MATLAB (The MathWorks Inc., Natick, MA, USA), and Unscrambler (CAMO PROCESS AS, Oslo, Norway). ENVI is a popular software tool designed to process, analyze, and display hyperspectral images. A variety of popular image processing algorithms are bundled in ENVI using automated, wizard-based approaches or automated workflows to provide step-by-step processes and instructions that help users process images quickly and easily. MATLAB is a high-level technical computing language and interactive environment that has the capability of developing algorithms, creating models, analyzing data, and visualizing images for processing and analyzing hyperspectral image data. As a fourth-generation programming language, MATLAB enables users to analyze hyperspectral images more flexibly than ENVI. In addition, with its tools and built-in math functions, MATLAB enables users to explore multiple approaches of data analysis faster than traditional programming languages, such as C, C++, Fortran, and Java. Unscrambler is a famous chemometric tool for multivariate data analysis. Although it cannot be directly used for the analysis of hyperspectral image data, Unscrambler has been widely used for data mining and calibration of spectral data.

5.1. Reflectance calibration of hyperspectral images

The raw spectral image collected using hyperspectral imaging is actually detector signal intensity. Therefore, a reflectance calibration should be performed to calibrate the raw intensity image into a reflectance or absorbance image using black and white reference images. In order to remove the effect of the dark current of the camera sensor, the black image (B, about 0% reflectance) is acquired when the light source is completely turned off and the camera lens is completely covered with its non-reflective opaque cap. The white reference image (W) is obtained under the same conditions as the raw image using a white surface board which has a uniform, stable, and high reflectance standard (about 99.9% reflectance). These two reference images are then used to correct the raw hyperspectral images by using the following equation:

\[
R = \frac{I_S - I_D}{I_W - I_D} \times 100 \qquad (2)
\]

where R is the corrected hyperspectral image in units of relative reflectance (%); I_S is the raw hyperspectral image; I_D is the dark image, and I_W is the white reference image.
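
A minimal NumPy sketch of Eq. (2), assuming the raw, dark, and white images have already been acquired with identical settings (the array sizes and intensity ranges are illustrative):

import numpy as np

def reflectance_calibration(raw, dark, white):
    # Convert a raw hyperspectral cube to relative reflectance (%) using Eq. (2).
    raw, dark, white = (a.astype(np.float64) for a in (raw, dark, white))
    denom = np.clip(white - dark, 1e-9, None)   # avoid division by zero
    return (raw - dark) / denom * 100.0

# Toy 50 x 50 image with 100 bands and 12-bit intensities.
raw = np.random.randint(100, 4000, size=(50, 50, 100))
dark = np.random.randint(90, 110, size=(50, 50, 100))
white = np.random.randint(3800, 4096, size=(50, 50, 100))
reflectance = reflectance_calibration(raw, dark, white)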

    5.2. Image enhancement and spectral preprocessing

Image enhancement is an important process for improving the quality of images. Some image enhancement techniques are intended to make specified image characteristics more obvious, such as edge and contrast enhancement, magnifying, pseudo-coloring, and sharpening. Others are used to reduce noise, such as convolution and spatial filtering, Fourier transform (FT), and wavelet transforms (WT). FT and WT are also suitable for edge detection. Moreover, image enhancement techniques can also be grouped into spatial domain methods (such as the histogram equalization method and local neighborhood operations based on convolution) and frequency domain methods (such as the discrete Fourier transform and wavelet transforms).

Fig. 3. Flowchart of a series of typical steps for analyzing hyperspectral image data.

Spectral preprocessing algorithms are mainly used to mathematically improve the spectral data extracted from hyperspectral images. The goal of spectral preprocessing is to correct effects from random noise, length variation of the light path, and light scattering, resulting in a robust model with the best predicting ability. The most widely used preprocessing algorithms include smoothing, derivatives, standard normal variate (SNV), MSC, FT, WT, and orthogonal signal correction (OSC). Smoothing (e.g. moving average, Savitzky–Golay, median filter, and Gaussian filter) is used to reduce noise in the spectral data without reducing the number of spectral variables. Derivatives (mainly first and second derivatives) are effective in correcting baseline effects in spectra. The 2nd derivative also has the function of resolving nearby peaks and sharpening spectral features. MSC is a transformation method used to compensate for additive and/or multiplicative effects in spectral data. SNV is a row-oriented transformation which centers and scales individual spectra. Both MSC and SNV are competent to reduce the spectral variability due to scatter and baseline shifts. FT and WT separate noise from the spectra in the frequency domain. OSC filters out the part of the spectral matrix X that is uninformative for the quality vector Y based on constrained principal component analysis (PCA) or partial least squares regression (PLSR). Pre-processing should only be used when it really helps to improve the model performance.
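
For instance, SNV and Savitzky–Golay smoothing/derivatives can be sketched as follows (Python with NumPy and SciPy; the spectra are random placeholders and the window settings are illustrative):

import numpy as np
from scipy.signal import savgol_filter

def snv(spectra):
    # Standard normal variate: center and scale each spectrum (row) individually.
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

spectra = np.random.rand(20, 256)            # 20 samples x 256 wavelengths

# Savitzky-Golay smoothing and second derivative along the wavelength axis.
smoothed = savgol_filter(spectra, window_length=11, polyorder=2, axis=1)
second_deriv = savgol_filter(spectra, window_length=11, polyorder=2, deriv=2, axis=1)

preprocessed = snv(smoothed)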

    5.3. Image segmentation

The objective of image segmentation is to divide an image into isolated objects or regions and to locate the regions of interest (ROIs) in the form of masks for further spectral and textural feature extraction (ElMasry, Wang, & Vigneault, 2009). Manual segmentation can obtain an accurate sectioned mask if the process is carefully executed, but it is time-consuming, tedious, and subjective, and therefore this method is not suitable for extensive industrial application. Image segmentation algorithms are more efficient than manual segmentation. The most used segmentation algorithms are thresholding (such as global thresholding and adaptive thresholding), morphological processing (such as erosion, dilation, opening, closing, and watershed algorithms), edge-based segmentation (such as gradient-based methods and Laplacian-based methods), and spectral image segmentation.

Thresholding is a widely used image segmentation method due to its simplicity of implementation. Images containing an object with uniform graylevel and a background of unequal but also uniform graylevel are appropriate for thresholding. Generally there are two kinds of thresholding algorithms, global thresholding and adaptive thresholding. The first approach is the simplest thresholding technique and is commonly implemented when the gray histogram is bimodal. When the graylevels of the ROI and the background and the corresponding contrast are not constant within an image, an adaptive threshold is competent, where a different threshold is used for different regions of the image. Morphological processing is flexible and powerful for image segmentation. Neighborhood operations are typical binary morphological operations performed by sliding a structuring element containing any combination of 0s and 1s with any size over the image. Erosion and dilation are the two elementary operations of morphological processing on which all other morphological operations are based. Erosion is the process of removing pixels on object boundaries in an image, while dilation is the process of adding pixels to the boundaries of objects. Edge-based segmentation is commonly used when pixels on edges/boundaries of objects have dramatic and discontinuous graylevel changes. Gradient-based methods detect the edge pixels by searching for the maximum in the first derivative within the image, while Laplacian-based methods locate edge pixels by looking for zero-crossings in the second derivative within the image. Spectral image segmentation is considered as a higher-level analysis compared to traditional segmentations that are regarded as low-level operations. Traditional segmentations operate on a monochrome image that has a scalar graylevel value at each pixel, while spectral image segmentation is a process of vector mining, because each pixel within a hyperspectral image is a vector of intensity values. Spectral image segmentation integrates segmentation and classification into a single process. This approach has been used with success in food analysis (ElMasry, Iqbal, Sun, Allen, & Ward, 2011).
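
As a simple illustration, a global threshold applied to one high-contrast band can produce a binary ROI mask that is then used to pull ROI spectra out of the cube (a minimal NumPy sketch with random placeholder data; in practice the threshold would be chosen from the histogram, for example by Otsu's method):

import numpy as np

band = np.random.rand(120, 160)              # one band with good object/background contrast
mask = band > 0.5                            # global threshold -> binary ROI mask

cube = np.random.rand(120, 160, 100)         # rows x cols x bands
roi_spectra = cube[mask]                     # (n_roi_pixels, n_bands)
mean_roi_spectrum = roi_spectra.mean(axis=0) # mean spectrum of the segmented ROI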

    5.4. Object measurement

For quantitative measurement of an ROI within a hyperspectral image, graylevel object measures are required to obtain a function of the intensity distribution of the ROI extracted by image segmentation. There are two main categories of graylevel object measurements, namely intensity-based measures and texture-based measures (Ngadi & Liu, 2010). The mean is the most widely used first-order measure for acquiring intensity information (ElMasry, Wang, ElSayed, & Ngadi, 2007; Qiao, Wang et al., 2007), which is calculated by averaging the intensity of pixels within the ROI at each wavelength. Besides the mean, the first-order measures also include standard deviation, skew, energy, and entropy. Texture is a typical example of the second-order measures that are based on joint distribution functions. It represents the spatial arrangement of the pixel graylevels within the ROI (IEEE Standard 601.4-1990, 1990). The graylevel co-occurrence matrix (GLCM) provides a number of second-order statistics used to describe the graylevel relationships within a neighborhood around a pixel of interest, and has been used in many hyperspectral imaging applications (ElMasry et al., 2007; Qiao, Ngadi et al., 2007; Qin, Burks, Ritenour, & Bonn, 2009). The 2-D Gabor filter is another popular method for image texture extraction and analysis. It has the capability to achieve certain optimal joint localization properties in both the spatial domain and the spatial frequency domain (Daugman, 1980, 1985).
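
The first-order measures can be computed directly from the ROI intensities at a given wavelength, as in the following NumPy sketch (random placeholder data; the histogram bin count is an arbitrary choice):

import numpy as np

def first_order_measures(roi_values, bins=64):
    counts, _ = np.histogram(roi_values, bins=bins)
    p = counts / counts.sum()                # graylevel probabilities
    p = p[p > 0]
    mean = roi_values.mean()
    std = roi_values.std()
    return {
        "mean": mean,
        "std": std,
        "skew": ((roi_values - mean) ** 3).mean() / std ** 3,
        "energy": np.sum(p ** 2),
        "entropy": -np.sum(p * np.log2(p)),
    }

roi_values = np.random.rand(500)             # intensities of ROI pixels at one band
features = first_order_measures(roi_values)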

    5.5. Multivariate analysis

As discussed previously, hyperspectral imaging contains a huge amount of data that are commonly extracted as intensity-based, texture-based, and morphology-based features. Multivariate analysis is required to efficiently decompose this massive quantity of features into useful information and to establish a simple and easily understandable relationship between the hyperspectral imaging data and the desired attributes of the tested samples. Multivariate analysis can be classified into qualitative classification and quantitative regression.

Qualitative classification (also called pattern recognition) includes unsupervised classification and supervised classification. Unsupervised classification is achieved according to intrinsic characteristics of the data, such as correlation, distance, or a combination of them, without prior knowledge about the class information of the data. Typical unsupervised multivariate classification algorithms for the analysis of hyperspectral data include PCA, k-means clustering, and hierarchical clustering. PCA decomposes the spectral data into several principal components (PCs) to characterize the most important directions of variability in the high dimensional data space. The similarity of spectral signatures among samples and their class information can be evaluated by the first several PCs resulting from PCA. K-means clustering classifies samples into k clusters in which each sample belongs to the cluster with the minimum distance to the cluster centroid. Hierarchical clustering is intended to build a hierarchy of clusters that is usually presented in a dendrogram. There are two types of hierarchical clustering: agglomerative and divisive. Hierarchical clustering is achieved by the use of a measure of distance between pairs of samples. Hierarchical clustering is not efficient for large data sets.
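
A compact PCA sketch (SVD-based, in NumPy) applied to a matrix of sample spectra is shown below; the data are random placeholders and the number of components is arbitrary:

import numpy as np

def pca_scores(X, n_components=3):
    # Project mean-centered spectra onto their first principal components.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T        # sample scores on the first PCs
    explained = (S ** 2) / np.sum(S ** 2)    # explained variance ratio
    return scores, explained[:n_components]

X = np.random.rand(60, 200)                  # 60 samples x 200 wavelengths
scores, explained = pca_scores(X)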

Supervised classification differs from unsupervised classification by grouping new samples into predefined known classes according to their measured features. Typical supervised multivariate classification algorithms for the analysis of hyperspectral data include linear discriminant analysis (LDA), partial least squares discriminant analysis (PLS-DA), artificial neural networks (ANN), support vector machines (SVM), and k-nearest neighbors (kNN). LDA finds an optimal linear projection of the independent variables to classify the samples into separate classes. This method reaches maximal separation by maximizing the ratio of between-group to within-group variability. The prime difference between LDA and PCA is that LDA includes the class information of the samples, while PCA only considers the independent variables. PLS-DA is based on the PLSR approach for the optimum separation of classes by encoding the dependent variable of PLSR with dummy variables describing the classes (Wu, Feng, He, & Bao, 2008). PLS-DA is then implemented in the usual way of PLSR. kNN is a non-parametric approach to group objects based on the closest neighbor samples within the feature space. As an instance-based learning algorithm, kNN is perhaps the simplest of all machine learning algorithms: an object is assigned to a class by a majority vote of its neighbors.
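
The kNN voting rule, for example, can be written in a few lines of NumPy (a sketch with random placeholder spectra and two arbitrary classes):

import numpy as np

def knn_predict(X_train, y_train, X_new, k=3):
    # Assign each new sample to the majority class among its k nearest neighbors.
    predictions = []
    for x in X_new:
        distances = np.linalg.norm(X_train - x, axis=1)
        neighbors = y_train[np.argsort(distances)[:k]]
        values, counts = np.unique(neighbors, return_counts=True)
        predictions.append(values[np.argmax(counts)])
    return np.array(predictions)

X_train = np.random.rand(40, 50)             # 40 training spectra, 50 features
y_train = np.random.randint(0, 2, size=40)   # two classes
X_new = np.random.rand(5, 50)
labels = knn_predict(X_train, y_train, X_new, k=5)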

In the application of spectral analysis, the goal of multivariate regression is to establish a relationship between the spectral response of the tested sample and its target features for explanatory or predictive purposes. Multivariate regression can be linear or non-linear. Multivariate linear regression methods in quantitative analysis of spectral data mainly include multiple linear regression (MLR) (Wu et al., in press), principal component regression (PCR), and PLSR (Antonucci et al., 2011; Sinija & Mishra, 2011). MLR establishes a relationship between the spectrum and the desired attributes of the tested sample in the form of a linear equation, and is simple and easy to interpret. The regression coefficients of this equation are determined by minimizing the error between reference and predicted values in a least squares sense. MLR fails when the number of variables is larger than the number of samples, and it is easily affected by collinearity between the variables. In the case of analyzing hyperspectral cubes, effective variable selection or dimensionality reduction is therefore required before establishing an MLR model. PCR is a regression method consisting of PCA and MLR. First, a PCA is carried out on the spectral data. Instead of the original variables, the PCs are then used as independent variables in an MLR on the dependent variables. The advantage of PCR over MLR is that the PC calculation makes the independent variables uncorrelated and less noisy. The constraint of PCR is that its PC calculation does not consider the reference values of the dependent variables; therefore the obtained PCs may not be informative for the dependent variables. Different from PCR, PLSR decomposes both the spectral (independent variables) and concentration (dependent variables) information simultaneously, resulting in a set of orthogonal factors called latent variables (LVs). In the decomposition process, the dependent variables are actively considered in estimating the LVs to ensure that the first several LVs are most relevant for predicting the dependent variables. Building the relationship between independent and dependent variables then becomes the simple task of finding the optimal number of LVs which have the best predictive power with respect to the dependent variables.
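
A PLSR calibration can be sketched with scikit-learn as follows (synthetic spectra and attribute values; the number of LVs is fixed here for brevity, whereas in practice it would be chosen by cross-validation):

import numpy as np
from sklearn.cross_decomposition import PLSRegression

X = np.random.rand(80, 200)                          # spectra: 80 samples x 200 bands
y = 3.0 * X[:, 40] + np.random.normal(0, 0.05, 80)   # synthetic quality attribute

pls = PLSRegression(n_components=8)                  # 8 latent variables (LVs)
pls.fit(X, y)
y_pred = pls.predict(X).ravel()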

Sometimes the relationship between the spectra of the tested sample and its quality may be non-linear, and such analysis is better addressed using non-linear regression techniques such as ANN (Lorente, Aleixos, Gómez-Sanchis, Cubero, & Blasco, 2013) and support vector regression (SVR) (Chen, Wu, He, & Liu, 2011; Liu, Gao, Hao, Sun, & Ouyang, 2012; Wei, Xu, Wu, & He, in press). ANN simulates the behavior of biological neural networks for learning and prediction purposes. The multilayer feed forward neural network is the most widely used ANN technique, which has three layers (input, hidden, and output) to arrange the artificial neurons. Neurons are simple computational elements that process information using a connectionist approach to computation. Neurons are linked by weighted connections that are adjusted based on input and output information during the learning phase. The spectral features are introduced to the input layer and the predicted values are exported from the output layer. SVR is another powerful supervised learning methodology based on statistical learning theory. The structural risk minimization (SRM) principle is embodied instead of the traditional empirical risk minimization (ERM) principle, which is employed by conventional neural networks, in order to avoid overfitting and multidimensional problems. In particular, LS-SVM, an optimized version of the standard SVM, is commonly used for spectral analysis. It employs a non-linear mapping function and maps the input features to a high dimensional space, thus changing the optimization problem into an equality constraint condition. A Lagrange multiplier is utilized to calculate the partial differentiation of each feature to obtain the optimal solution. ANN and SVM can be applied in both classification and regression tasks.
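
As a hedged sketch of the SVR idea, the standard epsilon-SVR of scikit-learn (rather than LS-SVM, which is not part of that library) can be applied to spectra after feature scaling; the data and hyperparameters below are placeholders:

import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X = np.random.rand(80, 200)                                  # spectra
y = np.sin(6 * X[:, 10]) + np.random.normal(0, 0.05, 80)     # non-linear target

# RBF-kernel support vector regression with feature scaling.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X, y)
y_pred = model.predict(X)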

    5.6. Optimal wavelength selection

Hyperspectral imaging provides more spectral data related to food quality than multispectral imaging, as the number of wavelengths of hyperspectral images is much larger than that of multispectral images. In most cases, the inclusion of most wavelengths does not increase the model performance, since some wavelengths mainly include irrelevant information while others have low SNR. The elimination of irrelevant variables can simplify calibration modeling and improve the results in terms of accuracy and robustness (Wu et al., 2009; Wu, He, & Feng, 2008). Besides, there is a problem of multicollinearity among contiguous variables (wavelengths). Multicollinearity (or collinearity) means that the correlations among the independent variables (wavelengths) are strong. These variables have similar spectral information. The presence of a high degree of collinearity between variables in a model will tend to push the matrix towards singularity, and this in turn will have a large influence on the coefficients generated (Zou, Zhao, Povey, Holmes, & Mao, 2010). The selection of wavelengths can minimize the collinearity among contiguous wavelengths. Based on the selected optimal wavelengths, a reduced image cube can be generated instead of the whole hyperspectral cube, speeding up the subsequent data processing and improving prediction results in terms of accuracy and robustness. Moreover, wavelength selection is also an important step in applications that detect properties of interest. The selected wavelengths are used as a reference to convert the hypercube into virtual images with maximal contrast for the properties of interest. Image processing techniques are then applied to these virtual images for the detection of the properties of interest. In addition, if a few optimal wavelengths that carry characteristic information are selected, a multispectral imaging system with the advantages of simple structure and low cost can be established based on these selected wavelengths and would be well suited for process monitoring and real-time inspection. However, most current research selects optimal wavelengths separately for each individual quality attribute of food products. Different optimal wavelengths are selected for different quality attributes accordingly. When one set of optimal wavelengths is used to design a multispectral imaging system, only one quality attribute can be predicted and the multifunctionality of hyperspectral imaging is lost. Recently, Wu, Sun et al. (2012) proposed the selection of instrumental effective wavelengths (IEW) and predictive effective wavelengths (PEW) that are the optimal wavelengths for several quality attributes. Multispectral imaging systems designed based on IEW have the multifunctionality of determining several quality attributes simultaneously.

The aim of wavelength selection methods is to select optimal wavelengths containing the important information related to quality attributes and to produce the smallest possible errors for qualitative discriminations or quantitative determinations. Knowledge-based selection is a manual approach made from basic knowledge about the spectroscopic properties of the sample (Zou et al., 2010). There are also mathematical selection algorithms for choosing optimal wavelengths in a more efficient way.

Some classical approaches include correlation coefficients, loading and regression coefficients, analysis of spectral differences (ASD), spectrum derivatives, and stepwise regression. The correlation coefficient approach selects the wavelengths that have the highest correlation coefficients as feature wavebands. Loading and regression coefficients reflect the relation between a given response and all predictors (wavelengths). In general, wavelengths having large values (irrespective of sign) are considered as optimal ones. The ASD analyzes the difference between spectra of samples of different varieties. The wavelengths with large differences are important for the discrimination. The method of spectrum derivatives calculates the differences of the derivatives of spectra and selects the wavelengths that have large differences between samples of different varieties as the optimal wavelengths. Stepwise regression finds the important wavelengths by adding one wavelength with forward addition and then testing it with backward elimination.
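
The correlation coefficient approach, for instance, reduces to ranking wavelengths by the absolute correlation between their intensities and the attribute of interest (a NumPy sketch with synthetic data):

import numpy as np

def top_correlated_wavelengths(X, y, n_select=10):
    corr = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    ranking = np.argsort(np.abs(corr))[::-1]     # wavelengths sorted by |correlation|
    return ranking[:n_select], corr

X = np.random.rand(60, 150)                      # 60 samples x 150 wavelengths
y = 2.0 * X[:, 75] + np.random.normal(0, 0.1, 60)
selected, corr = top_correlated_wavelengths(X, y, n_select=5)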

Successive projections algorithm (SPA) and uninformative variable elimination (UVE) are two relatively sophisticated methods. UVE is based on the stability analysis of the PLSR regression coefficients. The stability of a variable is calculated by dividing the mean of the regression coefficients of the variable by the standard deviation of its regression coefficients. SPA employs a simple projection operation in a vector space to select subsets of variables with a minimum of collinearity. In addition, UVE eliminates uninformative variables but its selected variables might have a problem of multicollinearity, whereas SPA selects variables with minimal multicollinearity but its selected variables might contain variables less related to the quality attribute. Therefore, a combination of UVE-SPA was proposed by Ye, Wang, and Min (2008) to combine the complementary advantages of both methods, and it has been applied to the spectral analysis of food quality (Wu, Chen, Zhu, Guan, & Wu, 2011; Wu, Nie, He, & Bao, 2012).

Elaborate search-based strategies include simulated annealing (SA) and genetic algorithms (GAs). SA is a probabilistic metaheuristic for global optimization problems inspired by the annealing process in metallurgy. In the application of wavelength selection, SA generates a numerical string containing the selected wavelengths. By analogy with the annealing process, SA attempts to replace the current solution with a random solution in each step. The solution is iteratively modified using a criterion called Boltzmann's probability distribution (the Metropolis criterion) that is subject to the increment of the objective function and a global parameter T, which is analogous to temperature. T is gradually decreased during the process. As T decreases, the solution becomes increasingly difficult to modify. Finally, if T is lowered sufficiently, no further changes in the solution space are possible. To avoid being frozen at a local optimum, the SA algorithm moves slowly through the solution space. This controlled improvement of the objective value is accomplished by accepting non-improving moves with a certain probability that decreases as the algorithm progresses (Chen & Lei, 2009). GA is a heuristic search algorithm that mimics the process of Darwin's theory of natural selection to search for an optimum. In the application of wavelength selection, GA evolves a population of strings called chromosomes that encode wavelengths. A fitness function is used to evaluate the performance of the chromosomes. Similar to natural selection, the chromosomes with a high fitness value have a higher probability to reproduce. The evolution process is repeated until a termination condition has been reached. The elaborate search-based strategies are generally more efficient than exhaustive enumeration, as they search a large part of all possible subsets in a reasonable time, much less than the time needed to search all possible subsets. However, a main drawback of elaborate search-based strategies is that their results are unstable. Different optimal wavelengths might be selected every time, although their prediction abilities are sometimes similar. In addition, SA and GA have many adjustable factors that affect the results, and therefore require a considerable level of expertise from users.

Interval-based algorithms include interval partial least squares (iPLS), synergy interval PLS (siPLS), windows PLS, and backward interval partial least squares (biPLS). The iPLS splits the spectra into several equidistant regions, and then establishes PLS regression models for each sub-interval. The interval region with the lowest RMSECV is chosen as the optimal one. The siPLS algorithm calculates all the possible PLS model combinations of several intervals and chooses the best combination. In the biPLS algorithm, after the dataset is split into a given number of intervals, the PLS model is calculated with each interval left out, which is termed backward. The left-out interval is the one that, when left out, gives the poorest performing model with respect to RMSECV. The optimal combination is determined in accordance with the smallest RMSECV. The main advantage of interval-based algorithms is an interval display showing the performances of interval models along the full wavelength range. However, the width and number of intervals have much influence on the selection result and the calculation time. Considering more combinations of intervals with small widths might lead to better results, but can also increase the calculation time.
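
An iPLS-style search can be sketched as follows (Python with scikit-learn; the interval count, number of LVs, and synthetic data are illustrative, and the RMSECV is computed from cross-validated predictions):

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def ipls_rmsecv(X, y, n_intervals=10, n_components=5, cv=5):
    # RMSECV of a PLS model built on each equidistant spectral interval.
    edges = np.linspace(0, X.shape[1], n_intervals + 1, dtype=int)
    rmsecv = []
    for i in range(n_intervals):
        Xi = X[:, edges[i]:edges[i + 1]]
        model = PLSRegression(n_components=min(n_components, Xi.shape[1]))
        y_cv = cross_val_predict(model, Xi, y, cv=cv).ravel()
        rmsecv.append(np.sqrt(np.mean((y - y_cv) ** 2)))
    return np.array(rmsecv)

X = np.random.rand(80, 200)
y = 4.0 * X[:, 120] + np.random.normal(0, 0.1, 80)
scores = ipls_rmsecv(X, y)
best_interval = int(np.argmin(scores))           # interval with the lowest RMSECV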

    5.7. Model evaluation

For the analysis of hyperspectral image data, whether for the purpose of qualitative classification or quantitative regression, the multivariate data must be trained to build a calibration model, which should be evaluated for its validity by a validation process such as cross validation or by a prediction process with a new set of samples. There are two main categories of cross validation: segmented cross validation and full cross validation. In segmented cross validation, the samples are apportioned into segments or subgroups. In its calculation procedure, one segment is kept out of the calibration at a time. By repeating the selection of different segments, predictions can be made on all samples for the validation procedure. In full cross validation, only one sample is kept out at a time and all other samples are used to build the calibration. The latter validation is also called leave-one-out cross validation and is in general the favorable one. Segmented cross validation is commonly used when full cross validation would be too time consuming or when some samples are treated under the same conditions and should be included in one segment.

Within the processes of calibration, validation, and prediction, the performance of a calibration model is usually evaluated in terms of the standard error of calibration (SEC), root mean square error of calibration (RMSEC), and coefficient of determination of calibration (r²C) in the calibration process; the root mean square error of cross-validation (RMSECV) and coefficient of determination of validation (r²V) in the validation process; and the standard error of prediction (SEP), root mean square error of prediction (RMSEP), coefficient of determination of prediction (r²P), and residual predictive deviation (RPD) in the prediction process. Generally, a good model should have higher values of r²C, r²V, r²P, and RPD, lower values of SEC, SEP, RMSEC, RMSECV, and RMSEP, and a small difference between SEC and SEP.
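
For reference, RMSE, r², and RPD can be computed from reference and predicted values as in this short NumPy sketch (the numbers are invented purely for illustration):

import numpy as np

def regression_metrics(y_true, y_pred):
    residuals = y_true - y_pred
    rmse = np.sqrt(np.mean(residuals ** 2))                                   # e.g. RMSEP
    r2 = 1.0 - np.sum(residuals ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    rpd = y_true.std(ddof=1) / rmse                                           # residual predictive deviation
    return rmse, r2, rpd

y_ref = np.array([10.2, 11.5, 9.8, 12.1, 10.9, 11.7])
y_hat = np.array([10.0, 11.8, 9.9, 11.9, 11.1, 11.4])
rmse, r2, rpd = regression_metrics(y_ref, y_hat)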

    5.8. Visualization of quality images

Recently, there has been an increasing demand to know the detailed quality distribution of spatially non-homogeneous properties of interest in a sample, rather than their average concentration. Hyperspectral imaging is a main technique that is extensively used to obtain quality images, which have the advantage of revealing the heterogeneity of food products. There are usually three ways to visualize the quality distribution of a sample. The first way is to directly display the quality distribution using an image at an individual wavelength (Fig. 1.b). However, it provides limited information and cannot accurately present the spatial distribution, unless the quality attribute in question has a very good correlation with the intensity at this wavelength. The second way of visualizing the quality distribution is using a false color image. There are three types of color photoreceptor cells (cones) with which humans perceive tristimulus values. A false color image is generated by setting three monochrome images as the red, green, and blue channels of an RGB image, respectively (Fig. 1.e). The monochrome images can be images at certain wavelengths or images obtained by mathematical calculation such as PCA and WT. False color images are commonly used for target detection and display purposes, and they are hard to use for showing quantitative distributions. Moreover, only a limited number of false color images can be obtained because only three monochrome images can be used. The third way is to calculate the quality attribute of each pixel by applying chemometric tools to the spectrum of the corresponding pixel, which can be considered as a linear or non-linear mathematical combination of the images at different wavelengths. Although it is practically impossible to measure the precise concentration of compositions for every pixel within a sample, a solution to this problem is to establish the regression models based on the average spectrum of all pixels within an ROI where the corresponding reference value can be obtained. The ROI can be the whole sample or a part of the sample from a selected location. The established models can then be applied to determine the composition content of each pixel within the object region, which can be used for further generation of quality images.
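
This third, pixel-wise approach can be sketched as follows (scikit-learn PLSR on synthetic calibration data; the model, cube dimensions, and attribute are placeholders): the calibrated model is simply applied to every pixel spectrum and the predictions are reshaped back into an image.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Calibration on mean ROI spectra with known reference values (toy data).
X_cal = np.random.rand(60, 100)
y_cal = 5.0 * X_cal[:, 30] + np.random.normal(0, 0.05, 60)
model = PLSRegression(n_components=6).fit(X_cal, y_cal)

# Apply the model to every pixel spectrum of a cube to build a quality distribution map.
cube = np.random.rand(120, 160, 100)                   # rows x cols x bands
pixels = cube.reshape(-1, cube.shape[2])
quality_map = model.predict(pixels).reshape(cube.shape[0], cube.shape[1])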

    6. Advantages and disadvantages of hyperspectral imaging

Due to its distinguished ability to identify characteristic features and the rich information it contains, hyperspectral imaging is highly suitable for food quality and safety analysis and assessment. On the other hand, like any other technique, hyperspectral imaging also has some demerits that need to be solved in future research. The major advantages of applying hyperspectral imaging in food quality and safety analysis and assessment can be summarized as follows:

• Hyperspectral imaging is a chemical-free assessment method that requires minimal sample preparation. Therefore, it saves labor, time, reagent cost, and the cost of waste treatment compared with traditional methods; as a result, it is economic.
• Like the spectroscopy technique, hyperspectral imaging is a non-invasive and non-destructive method that can be applied for both qualitative and quantitative analyses. Unlike spectroscopy, hyperspectral imaging records a complete spectrum of every pixel within the scene. Therefore, hyperspectral imaging is able to delineate the distributions of multiple constituents within a sample, not just the bulk composition.
• Hyperspectral imaging provides an extremely simple and expeditious inspection based on an established and validated calibration model. It can determine the contents and distributions of several components simultaneously within the same sample. Such determination permits labeling and pricing of different entities in a sample simultaneously when sorting food products.
• Hyperspectral imaging is flexible in choosing any ROIs within the image even after image acquisition, and any typical spectrum of an ROI or a pixel can be considered as a spectral signature and saved in a spectral library. Due to its rich spectral and spatial information, hyperspectral imaging is competent in the detection and discrimination of different objects even if they have similar colors, overlapped spectra, or morphological characteristics.

Although there are many advantages of hyperspectral imaging, some disadvantages still need to be solved before industrial application; they are summarized as follows:

• Hyperspectral images contain much redundant data that pose considerable challenges for data mining, and the hardware speed of hyperspectral imaging systems needs to be improved to satisfy the rapid acquisition and analysis of the huge hyperspectral data cube. However, due to its long data acquisition and analysis time, hyperspectral imaging is not suggested for direct implementation in online applications. A multispectral imaging system acquiring spectral images only at several optimal wavelengths would be more suitable to meet the speed requirement of quality inspection. Such optimized multispectral imaging systems have much lower dimensionality than hyperspectral imaging systems, resulting in less data acquisition time. The optimal wavelengths can be determined through analyzing the hyperspectral imaging data.
• Hyperspectral imaging is an indirect method, like spectroscopy. Both need accurate reference calibration and robust model transfer algorithms, and do not have good detection limits compared with chemical-based analytical methods. Moreover, like spectroscopy, hyperspectral imaging also has the well-known problem of multicollinearity. Multivariate analysis and variable selection are two ways to reduce the effect of this problem.
• Reference values of attributes cannot be measured accurately for every pixel within a sample. The quantitative relationship is usually established based on the mean spectrum of an ROI where its reference quality value can be measured using the standard method.
• Hyperspectral imaging is not suitable when the ROI within the surface of a sample is smaller than a pixel or when the quality attributes have no characteristic spectral absorption.
• The analysis of liquids or homogeneous samples does not need hyperspectral imaging but only spectroscopy, because the value of imaging lies in the ability to visualize spatial heterogeneities in samples. A point measurement using a spectrometer will give the same spectral information for the whole sample.
• Most food products have very strong absorption of light, making them opaque over a distance of about several millimeters in the visible and near infrared regions. Lammertyn, Peirs, De Baerdemaeker, and Nicolaï (2000) calculated the light penetration depths in apple fruit. The depths were up to 4 mm in the 700–900 nm range and between 2 and 3 mm in the 900–1900 nm range. In another study, Hampton et al. (2002–2003) reached a maximum penetration depth of 13 mm into fish tissue. Moreover, the penetration depth of light in the MIR region (usually a few micrometers) is much shorter than in the NIR region. Therefore, hyperspectral imaging cannot detect information on constituents deep inside the food sample.
• The variation of temperature affects the water absorption spectrum. As water is a main component of food products, there is a potential heating effect on the measured hyperspectral images of food.

7. Conclusions

Hyperspectral imaging has been proved to be a promising technology for the rapid, efficient, and reliable measurement of different quality attributes and their spatial distribution simultaneously, and therefore it can be used instead of human inspectors or wet chemical methods for the automatic grading and nutrition determination of food products. By combining spatial and spectral details in one system, the hyperspectral imaging technique can simultaneously acquire spatial images in many spectrally contiguous bands to form a 3-D hyperspectral cube, and it is considered to have the ability to complement the advantages of spectroscopy and imaging techniques. The predicted values of quality or safety attributes at the pixel level can then be used to generate the distribution map of the attribute, leading to better characterization and improved quality and safety evaluation results. Currently, there are still many challenges facing the full exploitation of this technique in terms of computation speed, limitations of hardware, and high cost. Therefore, hyperspectral imaging studies are often geared towards the identification of optimal wavelengths to design low cost multispectral imaging systems that will play an important role in the food industry as real-time monitoring systems for food safety and quality assessment.

Acknowledgments

The authors would like to acknowledge the financial support provided by the Irish Research Council for Science, Engineering and Technology under the Government of Ireland Postdoctoral Fellowship scheme.

References

Antonucci, F., Pallottino, F., Paglia, G., Palma, A., D'Aquino, S., & Menesatti, P. (2011). Non-destructive estimation of Mandarin maturity status through portable VIS-NIR spectrophotometer. Food and Bioprocess Technology, 4(5), 809–813.

Ariana, D. P., & Lu, R. (2008a). Detection of internal defect in pickling cucumbers using hyperspectral transmittance imaging. Transactions of the ASABE, 51(2), 705–713.

Ariana, D. P., & Lu, R. (2008b). Quality evaluation of pickling cucumbers using hyperspectral reflectance and transmittance imaging: Part I. Development of a prototype. Sensing and Instrumentation for Food Quality