
Introduction Geo-Information Science

Practical Manual

Module 11 ‘Digital image processing’

11. DIGITAL IMAGE PROCESSING

Contents

Introduction
Start the program Erdas Imagine
Part 1: Displaying an image data file
    Display of DN-range 0 ... 255 (no stretch)
    Display after linear stretch of DN-range minimum ... maximum
    Display after linear stretch of DN-range 40 ... 90
    Display after standard deviation stretch
    Display of color composites
Part 2: Supervised classification
    Examining land cover types using spectral profiles
    Digitizing training areas & estimation of signatures
        Collecting signatures
        Evaluating Signatures
    Land cover classification
        Minimum distance classification
        Maximum likelihood classification
        Updating a color palette
    Exporting to ArcGIS file format
    LGN Database
Image sources
Related internet sites


11. DIGITAL IMAGE PROCESSING

Introduction

The aim of the exercises in this module is to gain first experience with remote sensing data by handling multi-spectral image data in the GIS and remote sensing package Leica Erdas Imagine for Windows. For the exercises, we will use image data of Wageningen and its surroundings (Figure 5). This is a subset of a much larger scene taken by the remote sensing satellite Landsat-5 TM (Thematic Mapper) on 11 July 1995. A spatial subset of the entire scene (185×185 km2), with all seven spectral bands, is available (Table 1).

TM band   Spectral band      'Color' name
1         0.45...0.52 µm     blue
2         0.52...0.60 µm     green
3         0.63...0.69 µm     red
4         0.76...0.90 µm     near-infrared
5         1.55...1.75 µm     mid-infrared
6         10.4...12.5 µm     thermal-infrared
7         2.08...2.35 µm     mid-infrared

Table 1. The seven spectral bands of the Landsat-5 TM sensor.

The image covers an area of 15.3×15.3 km2 and consists of 510 columns × 510 rows. Each pixel represents an area of 30×30 m2. The sensor of band 6 observes pixels with a size of 120×120 m2.

In this module you will practice different image processing techniques, including different display methods, the use of color composites and supervised classification. The results of a digital image classification can be used as input in a GIS.

In this module:
• an introduction to the software package Erdas Imagine;
• displaying an image data file: stretching and color composites;
• selecting training sites for classification;
• collecting spectral signatures of training sites;
• three supervised classification methods.

Objectives
After having completed this module you will be able:
• to understand the principle behind various image display techniques;
• to perform a supervised classification with Erdas Imagine;
• to describe the differences between three supervised classification methods.

Erdas Imagine images: Wag95.img, Meris_wag.img, Quickbird_27032002_rd.img
Literature: Remote Sensing reader, Jan Clevers (Ed.)


Start the program Erdas Imagine

INSTRUCTIONS:

1. Start the Erdas Imagine package. Click Start, select Programs → Leica Geosystems GIS & Mapping → Erdas Imagine 8.7 → Erdas Imagine 8.7, or click the Erdas Imagine icon on the desktop.

2. Click Session in the main menu bar (Figure 1), click Preferences.
3. Set Default data folder to: D:\IGI\...*…\Erdas_imagine\data (*morning or afternoon).
4. Set Default output folder to: D:\IGI\...*…\Erdas_imagine\workspace (*morning or afternoon).

On top of the Erdas Imagine window you see the main menu bar (Figure 1). Clicking one of the items of the menu bar gives a pull-down menu with a number of options.

Figure 1. The Erdas Imagine menu bar.

Just below the menu bar you see the viewer (Figure 2). The menu and icons in the viewer can be used to open an image and to apply basic viewer functions. If you move the cursor over the icons, you see a short indication of the function in the lower left corner of the viewer.

Figure 2. The Erdas Imagine ‘Viewer’, where images are displayed.

For questions about the tools you will use, you are encouraged to press the context-sensitive help button in the dialogue box of the selected tool. The dialogue boxes often offer default settings. In general these are used; if not, you will be notified.


PART 1: Displaying an image data file

In the Erdas package an image data file is usually stored in the unsigned 8-bit (or 1 byte) data type. This means that integer values from 0 to 255 can be stored. Pixel values are often called DN-values (Digital Numbers), being simply values without a unit. They represent a distinct level of electromagnetic radiation received by the sensor. Speaking in terms of attribute scales, this type of data belongs to the ratio category. The image data file names have the extension .img and are accompanied by a .rrd file in which the so-called pyramid layers are stored. These are used for fast zooming and panning in the image.

In order to get familiar with image processing and remote sensing data we start with displaying and processing one image data file. We use the image data of band 4 of the Landsat-5 TM recording (see Table 1) during this exercise. This band contains spectral information of a near-infrared band: 0.76-0.90 µm. You will find that different stretching techniques applied to the same image data file produce different pictures on the screen. The following cases will be investigated:

• display of DN-range 0 ... 255 (no stretch);
• display after linear stretch of DN-range minimum ... maximum;
• display after linear stretch of DN-range 40 ... 90;
• display after standard deviation stretch;
• display of color composites.

Keep the resulting pictures on the screen to notice the differences!!
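The same DN-values can also be inspected outside Erdas Imagine with a few lines of code. The sketch below is an illustration only; it assumes the GDAL-based Python package rasterio, which can read ERDAS IMAGINE (.img) files, and prints the kind of statistics you will look up in the ImageInfo window later in this exercise.

import numpy as np
import rasterio

# Open the Landsat-5 TM subset; the .img (HFA) format is handled by GDAL/rasterio.
with rasterio.open("wag95.img") as src:
    band4 = src.read(4)          # bands are 1-indexed; band 4 = near-infrared

print("data type:", band4.dtype)                 # expected: uint8 (unsigned 8-bit DNs)
print("min / max DN:", band4.min(), band4.max())
print("mean / std:", band4.mean(), band4.std())

# DN histogram: the number of pixels per DN-value 0..255
counts, _ = np.histogram(band4, bins=256, range=(0, 256))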

Display of DN-range 0 ... 255 (no stretch)

INSTRUCTIONS:

1. Click in the viewer menu bar File → Open → Raster Layer, or click the Open layer button.
2. Select image wag95.img. DO NOT OPEN THE IMAGE YET!
3. Click the Raster Options tab (Figure 3), click the Display as dropdown arrow and select Gray Scale, select Layer 4, and switch on the No Stretch option.
4. Press the OK button to open the image.

This way, the Grey Scale Palette produces a picture on the screen with 256 grey tones. The range in grey tones is a linear scale from black (DN-value 0) to white (DN-value 255); each DN-value of the image basically has its own grey value (Figure 4).

Figure 4. Principle of no stretch of image values (DN) into display levels.

Figure 3. The Raster Options tab.


Since not all DN-values from 0 up to 255 are present in the original image, not all grey tones are used in the picture. Although you will recognize Wageningen and its surroundings, the picture can be made brighter. But first you examine the DN distribution of the image. In order to examine the DN distribution of an image, display the histogram.

INSTRUCTIONS:

1. Click the ImageInfo button in the standard toolbar or click Utility → Layer Info.
2. The ImageInfo window opens, showing file, layer, statistics and map information.
3. Select layer 4.
4. Click the Histogram tab or the histogram button in the toolbar of the ImageInfo window.
5. If the cursor is placed inside the histogram, three vertical lines are displayed showing the minimum, maximum and mean values.

1. a. What is plotted on the horizontal axis and what on the vertical axis?
   b. Write down the values for minimum, maximum, mean and standard deviation for band 4.

When an image is displayed on the screen, the DN-values (File Pixel values) are translated to a grey tone (Lookup Table (LUT) value). In case of an image displayed without stretch, the DN-value is the same as the LUT-value (Figure 4). You can view these values with the Inquire cursor (click Utility → Inquire cursor). It is important to zoom in to a level where you can distinguish the individual pixels.

2. a. Check the DN-values and LUT-values of water, grass, forest and heath land. You can find the location of these objects in Figure 5. Write down the values in Table 2 in the ‘no stretch’ columns.
   b. Which cover type has a DN-value of less than 30 in band 4? Explain this in terms of absorption/reflectance.
   c. Which two factors determine the grey tone of a pixel on the screen?
   d. Where is the origin of the column/row coordinate system?

Land cover type   DN-value (no stretch)   LUT-value (no stretch)   DN-value (linear stretch)   LUT-value (linear stretch)
Water             ...                     ...                      ...                         ...
Grass             ...                     ...                      ...                         ...
Forest            ...                     ...                      ...                         ...
Heath land        ...                     ...                      ...                         ...

Table 2. DN-values of land cover types using different display techniques.


Figure 5. Selected training fields in the Landsat TM band 5 scene of 11 July 1995 of the area around Wageningen.


Display after linear stretch of DN-range minimum ... maximum

Within Erdas Imagine an option is available to stretch the original DN-values of the image for display. The minimum DN-value of the image will be represented by the minimum grey tone (black) on the screen; the maximum DN-value will get the maximum grey tone (white) if the grey tone palette with 256 levels is used (Figure 6).

Figure 6. Principle of linear stretch of image values (DN) into display levels. The linear relationship between DN-value and LUT-value is in this example: LUT = 2.60 × DN − 156.

INSTRUCTIONS:

1. To apply a linear stretch to your image, click in the viewer menu bar: Raster → Data Scaling.
2. Make sure Linear is selected in the Binning field.
3. Replace the values for Min and Max with the minimum and maximum DN-values from the image info.
4. Click OK.

3. a. Check the DN-values and LUT-values of the four land cover types again. Add these values to Table 2 in the ‘linear stretch’ columns.
   b. Can you explain the changes in LUT-values? Explain why some land cover types get a higher LUT-value, while other land cover types get a lower LUT-value.

Display after linear stretch of DN-range 40 ... 90

Linear stretch between the minimum and maximum DN-values improves the image somewhat compared to the image without stretch, but the contrast is still relatively low. The histogram shows that the majority of the DN-values are distributed between 40 and 90. You can gain more contrast in your image by emphasizing this DN-range on your screen.

4. Use the Data Scaling function to apply a linear stretch of the DN-range from 40 to 90.
   a. Which land cover types can you now distinguish with more grey tones in a smaller DN-range?
   b. Investigate the DN- and LUT-values of the four land cover types. Explain the linear stretch principle.
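The linear stretch principle can also be written down in a few lines of code. The sketch below is only an illustration (it assumes the band4 array from the earlier rasterio sketch): it maps a chosen DN-range onto the 0..255 display range and clips DN-values outside that range.

import numpy as np

def linear_stretch(dn, lo, hi):
    """Return LUT values (0..255) for an array of DN values; DNs outside lo..hi are clipped."""
    dn = np.clip(dn.astype(float), lo, hi)
    return np.round((dn - lo) / (hi - lo) * 255).astype(np.uint8)

lut_minmax = linear_stretch(band4, band4.min(), band4.max())   # minimum ... maximum stretch
lut_40_90 = linear_stretch(band4, 40, 90)                      # stretch of DN-range 40 ... 90

# With lo = 60 and hi = 158, for example, this mapping reduces to
# LUT = 2.60 * DN - 156, the relationship shown in Figure 6.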


Display after standard deviation stretch

Standard Deviation Stretch is based on the idea that stretching the DN-range from the minimum up to the maximum value may not give a good picture, because the minimum and/or maximum value may be an unfortunate extreme. This function results in a linear stretch between −2 and +2 standard deviations from the mean. In practice, this means that roughly 2.5% of the observations on each side of the histogram are skipped. As a result, single observations with very low or very high values are ignored during the stretching. Standard Deviation Stretch is the default stretch function used in Erdas Imagine.

INSTRUCTIONS:

1. Open a new viewer, select band 4 for display in grey scale, but do not switch on the No Stretch option this time. The image will now be opened with Standard Deviation Stretch.
2. This stretch function can also be accessed through the menu bar. Click Raster → Contrast → Standard Deviation Stretch.
3. You can use Tile Viewers to put the viewers easily in one screen. Click in the viewer menu bar: View → Tile Viewers.

5. Open four viewers (click the Viewer button in the main menu bar) and put the four pictures with different image stretching next to each other.
   a. Compare layer 4 of ‘wag95.img’ with the different stretching options. Which stretch function gives, in your opinion, the best picture? Display layer 3 of ‘wag95.img’ according to your answer to exercise 5a in a new viewer.
   b. Which grey tone has grassland (see e.g. the meadows near the river) in band 3; is this different from band 4? In what way? Explain the difference (remember the typical spectral signature of green vegetation).

6. Landsat-5 TM band 6 contains the thermal-infrared image data. Display layer 6 of ‘wag95.img’. You can re-open the image, or change the band which is displayed.
   a. Why is the image of band 6 so coarse?
   b. Which cover type has a relatively low temperature and which one has a relatively high temperature?

Close all viewers.
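As a small illustration of the Standard Deviation Stretch principle, the sketch below derives the stretch limits from the mean and the standard deviation (±2 SD) instead of the absolute minimum and maximum. It reuses the band4 array and the linear_stretch() helper from the earlier sketches and is only an illustration of the idea, not the exact Erdas Imagine implementation.

# Standard deviation stretch: clip at mean ± 2 standard deviations, then stretch linearly.
lo = band4.mean() - 2 * band4.std()
hi = band4.mean() + 2 * band4.std()
lut_sd = linear_stretch(band4, lo, hi)   # DNs beyond ±2 SD end up black or white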


Display of color composites

Color composites of remote sensing data can be very helpful during investigation and interpretation in the field, or for presentation purposes. A color composite displays three spectral bands simultaneously.

INSTRUCTIONS:

1. Open a viewer.
2. Add a raster layer to the viewer. Click the Raster Options tab.
3. Display as: True Color.
4. Attach bands to the Red, Green and Blue colors.
5. Click OK.
6. When a color composite is opened, you can always change the band combination. Click Raster → Band Combinations….
7. Change the spectral bands for the three channels. If the Auto Apply box is ticked, band changes appear immediately on screen.

Note: the terminology used by Erdas Imagine may be confusing. The fact that you use the option ‘True Color’ in the selection menu does not mean that you display a true-color image. This depends on the spectral bands you attach to the Red, Green and Blue channel respectively.

7. Open three color composites of image ‘wag95.img’ with the band combinations described in Table 3.
   a. Why are the composites called true, false or pseudo color?
   b. Check the colors for the cover types water, forest and bare soil in each composite. Write your findings down in Table 4.
   c. Which band combination or color composite shows the largest contrast between the different land cover types? Why?

Close all viewers.

                Red      Green    Blue
True Color      3        2        1
False Color     4        3        2
Pseudo Color    e.g. 4   e.g. 5   e.g. 3

Table 3. Band combinations of three types of color composites.

                Water   Forest   Bare soil
True Color      ...     ...      ...
False Color     ...     ...      ...
Pseudo Color    ...     ...      ...

Table 4. Land cover colors in each composite.
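Outside Erdas Imagine a color composite is simply three bands stacked into the red, green and blue channels of one RGB image. The sketch below builds the false color composite of Table 3 with rasterio and matplotlib; it reuses the linear_stretch() helper from the earlier sketch and is illustrative only.

import numpy as np
import rasterio
import matplotlib.pyplot as plt

# Read bands 4, 3 and 2 and stack them as red, green and blue (false color, Table 3).
with rasterio.open("wag95.img") as src:
    channels = [src.read(b) for b in (4, 3, 2)]

# Stretch each channel to the full display range so the composite is not too dark.
rgb = np.dstack([linear_stretch(c, c.min(), c.max()) for c in channels])

plt.imshow(rgb)
plt.title("False color composite (R = band 4, G = band 3, B = band 2)")
plt.show()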


PART 2: Supervised classification

Supervised classification is one of the techniques to transform remote sensing data into useful thematic information that can be used as input to a geographic information system. As a preparation for supervised classification, one decides beforehand which cover types must be classified and one selects proper training areas. These training areas are known cover types, based on field visits or general knowledge of parts of the area. Since we assume that you have some knowledge of the area around Wageningen, you will make several classifications without extensive fieldwork.

Statistical characteristics of the spectral data of the selected training areas are stored in signature files. These signature files are then used by the classification method to derive the class boundaries for each cover type in the feature space. The actual classification of all pixels is performed in this feature space. The following activities will be executed:

• examining spectral profiles; • digitizing training areas; • estimation of signatures; • classifications; • updating a color palette (optional exercise)

8. a. Give a description of a 2-dimensional feature space. What is plotted on the axes of the feature space?
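To get a feeling for what a 2-dimensional feature space looks like, the following sketch plots each pixel of wag95.img as a point with its band 3 DN on one axis and its band 4 DN on the other. It is an illustration only, again using rasterio and matplotlib.

import rasterio
import matplotlib.pyplot as plt

with rasterio.open("wag95.img") as src:
    b3 = src.read(3).ravel()   # red-band DN of every pixel
    b4 = src.read(4).ravel()   # near-infrared DN of every pixel

plt.scatter(b3, b4, s=1, alpha=0.1)
plt.xlabel("Band 3 DN (red)")
plt.ylabel("Band 4 DN (near-infrared)")
plt.title("2-dimensional feature space of wag95.img")
plt.show()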

Examining land cover types using spectral profiles

You will start the classification procedure by examining the spectral profiles of several land cover types.

INSTRUCTIONS:

1. Open the spectral profile tool. Click in the viewer menu bar: Raster → Profile Tools….
2. Click Spectral and click OK. The Spectral Profile window opens.
3. Click the inquire cursor button to activate the inquire tool.
4. Click with the inquire cursor on a land cover type in the image. The spectral profile of this pixel will be drawn in the graph. The line represents the value of the selected pixel for each band (Figure 7).
5. To display wavelength on the x-axis click Edit → Use Sensor Attributes…. Click the Sensor type dropdown arrow and select landsattm.

9. Try to locate a few different land cover types (water, forest, agricultural land, and town) and show their spectral profiles in the graph.
   a. Which two bands show the largest difference in pixel value between water and vegetation?


Figure 7. Spectral profiles of three land cover types.
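A spectral profile is nothing more than the DN-value of one pixel in every band. The sketch below extracts such a profile for a single (row, column) position; the position used here is a hypothetical example of a location you might have found with the inquire cursor.

import rasterio

row, col = 250, 250                      # hypothetical pixel position (row, column)
with rasterio.open("wag95.img") as src:
    profile = [int(src.read(b)[row, col]) for b in range(1, src.count + 1)]

print("DN per band:", profile)           # bands 1..7 of the Landsat-5 TM subset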

Digitizing training areas & estimation of signatures

During the first phase of the classification process you choose a band combination that shows a clear discrimination between most land cover types, in order to digitize training fields of the cover types you are going to classify:

• grass; • bare soil; • deciduous forest; • pine forest; • heather; • maize; • town; • water.

Representative examples of these cover types are shown in Figure 5. You will use user-defined polygons in the image for the selection of training samples.

Note: training areas are in general small areas with at least 25 pixels. These areas should be chosen as pure (homogeneous) as possible, so if you digitize e.g. a training site of water in the river, do not include the river borders!

Collecting signatures

INSTRUCTIONS:

1. Open a new viewer, display your most expressive composite (see your answer to exercise 7a) and zoom in to get a more detailed look at the picture during digitizing.
2. Click in the main menu bar and click Signature Editor.... A new window opens; move it so that the area with the training fields can be seen clearly.
3. Click in the viewer menu bar AOI → Tools....
4. Click the AOI Tool palette button to create a polygon. Draw a polygon in one of the training areas (see Figure 5). Digitize polygon points by clicking the LMB (Left Mouse Button) and finish by double-clicking the LMB.


5. Click in the Signature Editor the button to add the signature of the digitized training area to the signature list.
6. Give this signature a name according to the land cover (e.g. Water, Beets, Town, etc.). Notice that the color assigned to this class is the same as the color inside the AOI in the picture in the default display (R=4; G=3; B=2). You can change the color combination if you wish.

10. a. Digitize the 8 training areas (7 indicated in Figure 5 and the class town) according to the steps described above, and add the signatures to the signature list. Save the signature file in the workspace folder located in the Erdas Imagine folder. Name the signature file ‘wag95_your_name.sig’.

Evaluating Signatures

Before you perform a classification you need to study the signatures to get an accurate idea about the position and size of the classes in the feature space. You can present the results of the signature computation in a mean plot or a histogram. You can compare the signatures of the different cover types and see whether they are well separated. If they are not, perhaps you did not choose a correct training area, or the differences are caused by varying growth conditions, or a registration error was made during the field visit or at the time of image recording. In this way you can also get an idea whether it is useful to perform the classification with all available bands.

For this exercise you need a viewer with the source image wag95.img and the Signature Editor with wag95_your_name.sig. Mark the signature you want to investigate by clicking the row in the column with the ‘>’ mark. In the ERDAS IMAGINE package the signatures can be studied in different ways.

Add statistical data

INSTRUCTIONS:

1. Click in the Signature Editor window View → Columns…; the View Signature Columns window opens.
2. Select all rows except red, green and blue, click Statistics… and tick Min, Max and Mean in the Column Statistics window.
3. Click Apply in the View Signature Columns window; close this window and the Column Statistics window.
4. If you move the slide bar in the Signature Editor window to the right, you will see that all statistical values appear.

11. a. Which spectral bands show the clearest (spectral) distinction between land use classes?
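What the Signature Editor computes for each class can be mimicked with a few lines of code: the per-band minimum, maximum and mean of the training pixels. In the sketch below the training area is a hypothetical rectangular block of pixels; in the exercise the training pixels come from the digitized AOI polygons instead.

import numpy as np
import rasterio

with rasterio.open("wag95.img") as src:
    bands = src.read()                    # array of shape (7, rows, cols)

# Hypothetical training area: a 10x10-pixel block (stand-in for a digitized AOI polygon).
mask = np.zeros(bands.shape[1:], dtype=bool)
mask[100:110, 200:210] = True

train = bands[:, mask]                    # shape (7, number of training pixels)
for b, values in enumerate(train, start=1):
    print(f"band {b}: min={values.min()} max={values.max()} mean={values.mean():.1f}")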


Show the mean value(s) in a graph

INSTRUCTIONS:

1. Click in the Signature Editor window View → Mean Plots...; the Signature Mean Plot window opens. Depending on the option you choose, you can display either the marked signature, selected signatures or all signatures. You can select more than one signature by keeping the Shift key down during selection in the Signature Editor.

Show histograms

INSTRUCTIONS:

1. Click in the Signature Editor window View → Histograms...; the Histogram Plot Control Panel opens and simultaneously the histogram of the first band of the marked signature appears.

2. Select the classes you want to display in a histogram in the Signature Editor if you want to visualize multiple classes in one plot.

3. The chosen options in the Histogram Plot Control Panel are activated when you click the Plot... button.

12. a. Check the separability of the classes in all spectral bands by examining the histograms.
    b. Which bands can be used to differentiate between deciduous and pine forest?
    c. Which land use classes will be hard to distinguish?
    d. What is the consequence of poorly distinguishable spectral signatures during classification?
    e. Suppose you could only use three spectral bands for land use classification. Which three bands would you choose?
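Visual inspection of the histograms can be complemented by a simple numeric check. The sketch below computes, per band, the difference between the class means relative to the spread of both classes; the larger the value, the easier the two classes are to separate in that band. Both this simple index and the variable names train_a/train_b (training-pixel arrays as in the previous sketch) are illustrative assumptions, not an Erdas Imagine function.

import numpy as np

def separability(train_a, train_b):
    """Per-band |mean difference| / (sum of standard deviations) for two classes."""
    d = np.abs(train_a.mean(axis=1) - train_b.mean(axis=1))
    s = train_a.std(axis=1) + train_b.std(axis=1)
    return d / np.where(s == 0, 1, s)     # guard against division by zero

# Example use: separability(train_deciduous, train_pine) for the two forest classes.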


Land cover classification

For classification of a remote sensing image, the ERDAS IMAGINE package is equipped with parametric and non-parametric decision rules. The difference between these decision rules will be treated in more detail during the course Remote Sensing (GRS 20306). For the classifications in this module, you will use the parametric decision rules ‘Minimum Distance’ and ‘Maximum Likelihood’.

INSTRUCTIONS:

1. The supervised classification is started from the Signature Editor. Click Classify → Supervised.... The Supervised Classification window opens (Figure 8).
2. Select the Input Raster File; this is the image you want to classify.
3. Select the Input Signature File; this is the file in which you stored the spectral signatures of the training areas.
4. Give the output image a name in the Output File box.
5. Select the classification decision rules: Non-parametric Rule, Overlap Rule, Unclassified Rule and Parametric Rule.
6. Click OK.

Figure 8. The Supervised Classification window, where you name the output files and set the decision rules.

13. a. Open the Supervised Classification window. Which Parametric Rules are available?


Minimum distance classification

The minimum distance classification method assigns a pixel to a land cover class based on the distance between the pixel and the mean signature value (class centre) of that class in the feature space.

14. Carry out a supervised classification of ‘wag95.img’ with the Minimum Distance (MD) rule. Name the output image wag95-MD.img. Use the following classification settings:
    Non-parametric Rule: None
    Overlap Rule: -
    Unclassified Rule: -
    Parametric Rule: Minimum Distance
    a. Display the classification result in a new viewer and notice that all pixels are classified. Note that this image has nothing to do with spectral reflectance: you are looking at a land cover map, where pixel values indicate a land cover class.
    b. Table 5 lists four control points. Write down the land cover class of each control point in the ‘MD’ column. Use the inquire cursor tool to retrieve the value of the control point. The value corresponds to a land cover class.

Location   Map X   Map Y   MD   MLHD
1          435     -109
2          217     -267
3          454     -206
4          443     -395

Table 5. Results of the different classification methods.

Maximum likelihood classification

The maximum likelihood classification method is based on the probability that a pixel belongs to a particular class.

15. Carry out a supervised classification of ‘wag95.img’ with the Maximum Likelihood (MLHD) rule. Name the output image wag95-MLHD.img. Use the following classification settings:
    Non-parametric Rule: None
    Overlap Rule: -
    Unclassified Rule: -
    Parametric Rule: Maximum Likelihood
    a. Display the classification result in a new viewer.
    b. Write down the land cover class of each control point in the ‘MLHD’ column of Table 5. Use the inquire cursor tool to retrieve the value of the control point. The value corresponds to a land cover class.
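The two parametric decision rules can be summarized in a short sketch. Assuming the bands array from the signature sketch and a dictionary signatures mapping class names to their (7, n) training-pixel arrays, minimum distance picks the class whose mean vector is closest to the pixel, while maximum likelihood picks the class with the highest Gaussian log-likelihood (using the class covariance). This is only an illustration of the principle, not the Erdas Imagine implementation.

import numpy as np

def minimum_distance(bands, signatures):
    """Assign each pixel to the class whose mean signature vector is closest (Euclidean)."""
    pixels = bands.reshape(bands.shape[0], -1).T.astype(float)       # (n_pixels, n_bands)
    means = np.array([s.mean(axis=1) for s in signatures.values()])  # (n_classes, n_bands)
    dist = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return dist.argmin(axis=1).reshape(bands.shape[1:])              # map of class indices

def maximum_likelihood(bands, signatures):
    """Assign each pixel to the class with the highest Gaussian log-likelihood."""
    pixels = bands.reshape(bands.shape[0], -1).T.astype(float)
    scores = []
    for s in signatures.values():
        mean = s.mean(axis=1)
        cov = np.cov(s.astype(float))                                # (n_bands, n_bands)
        diff = pixels - mean
        inv = np.linalg.inv(cov)
        logdet = np.linalg.slogdet(cov)[1]
        scores.append(-0.5 * (np.einsum("ij,jk,ik->i", diff, inv, diff) + logdet))
    return np.argmax(scores, axis=0).reshape(bands.shape[1:])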


Updating a color palette

It might be that the colors of the different land cover classes are not well chosen. You can change these colors. Changing colors here is only possible for thematic data.

INSTRUCTIONS:

1. Click in the viewer menu bar Raster → Attributes…. The Raster Attributes Editor opens.
2. Click a colored cell in the Color column and select a color from the list.

16. Change the colors of one of the classification results to land use map colors (town = red, forest = dark green, water = blue, etc.).

Exporting to ArcGIS file format

The classification result can be used for further analysis, which is usually done with GIS software such as ArcGIS. To avoid processing problems it is advised to export Imagine files (‘.img’ format) to an ArcGIS raster format.

INSTRUCTIONS:

1. Click Import in the main menu bar. Select the classification result you want to use for further analysis. Export it to GRID format. Save it in the proper location.
2. Open the exported file in ArcGIS (ArcMap or ArcCatalog).

17. Export one of the land cover classification results to GRID format. Save the dataset in the ArcGIS workspace folder.
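A comparable conversion can also be done outside Erdas Imagine: GDAL reads the .img (HFA) file and can write, for example, an Arc/Info ASCII Grid that ArcGIS imports without problems. The output format and file name below are assumptions for illustration; in the exercise itself you use the Imagine Import/Export tool as described above.

from osgeo import gdal

# Convert the Minimum Distance classification result to an Arc/Info ASCII Grid.
gdal.Translate("wag95-MD.asc", "wag95-MD.img", format="AAIGrid")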

LGN Database

Many remote sensing satellites collect image data. Image data from the Landsat-5 TM satellite is used, in a way comparable to what you just did, to make land-use classifications for the "Landelijke Grondgebruiksclassificatie van Nederland" (LGN). The LGN database covers The Netherlands and is created and updated at the Centre for Geo-Information of Wageningen University and Research Centre (WUR). The data can be obtained from the Geodesk of the Centre for Geo-Information. The LGN database is updated on a regular basis and is very useful for all kinds of applications, e.g. for planning and for environmental scenario studies. A part covering the surroundings of Wageningen has been copied from LGN4 (2000) and is available as wag-lgn4.img.

18. Give a few reasons why wag-lgn4.img and your classification results are not exactly the same.


Image sources

In the previous part of the exercise you investigated a Landsat TM image, with a pixel size of 30 meters and 7 spectral bands. However, there are numerous sensors, each with their own specifications. The applications for which the recorded images can be used depend on the spatial, spectral and temporal resolution of the sensor. Compare three images of different sensors and describe the strong and weak points of each data set. The available images are a MERIS, a Landsat TM and a Quickbird image, all covering the area around Wageningen. MERIS has a high spectral and radiometric resolution and a dual spatial resolution: 1200 m and 300 m.

MERIS band   Band centre (nm)
1            412.5
2            442.5
3            490
4            510
5            560
6            620
7            665
8            681.25
9            705
10           753.75
11           760
12           775
13           865
14           890
15           900

Table 6. The spectral bands of the MERIS sensor.

Quickbird images can be either panchromatic, with a spatial resolution of 0.61 m, or multi-spectral, which results in a pixel size of about 2.5 m.

Quickbird band   Spectral band      'Colour' name
1                0.45...0.52 µm     blue
2                0.52...0.60 µm     green
3                0.63...0.69 µm     red
4                0.76...0.90 µm     near-infrared

Table 7. The spectral bands of the Quickbird sensor.

INSTRUCTIONS:

1. Open a false color composite of the three images (Meris_wag.img, wag95.img and quickbird_27032003_rd.img) in separate viewers.
2. Investigate the spectral profile of several land cover types for the three images.
3. To display wavelength on the x-axis click Edit → Use Sensor Attributes…. Click the Sensor type dropdown arrow and select MERIS, landsattm, or QuickbirdMS respectively.

19. a. Which bands did you select for each false color composite?
    b. Are the images geometrically and atmospherically corrected? How did you determine this?
    c. Discuss the strong and weak points of the three images in terms of spatial, spectral and temporal resolution.


    d. Why is it not possible to build a space-borne remote sensing sensor that has a good spatial, spectral and temporal resolution?

Related internet sites

More information concerning LGN is available at http://www.lgn.nl/.

The following paper gives an elaborate description of the creation of the LGN database: http://www.dow.wur.nl/internet/webdocs/internet/geoinformatie/lgn/ISPRS_2000_LGN3.pdf

For more on the basic principles and display of color composites, see: http://chesapeake.towson.edu/data/all_composite.asp
