Geometric Correction

Geometric Correction. Mahesh Kumar Jat, Department of Civil Engineering, Malaviya National Institute of Technology, Jaipur


Post on 28-Jun-2015


TRANSCRIPT

Page 1: Geometric correction

Geometric Correction

Mahesh Kumar Jat
Department of Civil Engineering
Malaviya National Institute of Technology, Jaipur

Page 2: Geometric correction

Geometric Correction

It is usually necessary to preprocess remotely sensed data and remove geometric distortion so that individual picture elements (pixels) are in their proper planimetric (x, y) map locations. This allows remote sensing–derived information to be related to other thematic information in geographic information systems (GIS) or spatial decision support systems (SDSS). Geometrically corrected imagery can be used to extract accurate distance, polygon area, and direction (bearing) information.

Page 3: Geometric correction

Internal and External Geometric Error

Remotely sensed imagery typically exhibits internal and external geometric error.

It is important to recognize the source of the internal and external error and whether it is systematic (predictable) or nonsystematic (random).

Systematic geometric error is generally easier to identify and correct than random geometric error.

Page 4: Geometric correction

Internal Geometric Error

Internal geometric errors are introduced by the remote sensing system itself or in combination with Earth rotation or curvature characteristics. These distortions are often systematic (predictable) and may be identified and corrected using pre-launch or in-flight platform ephemeris (i.e., information about the geometric characteristics of the sensor system and the Earth at the time of data acquisition). Geometric distortions in imagery that can sometimes be corrected through analysis of sensor characteristics and ephemeris data include:

• skew caused by Earth rotation effects,
• scanning system–induced variation in ground resolution cell size,
• scanning system one-dimensional relief displacement, and
• scanning system tangential scale distortion.

Page 5: Geometric correction

Image Offset (Skew) Caused by Earth Rotation Effects

Earth-observing Sun-synchronous satellites are normally in fixed orbits that collect a path (or swath) of imagery as the satellite makes its way from north to south in descending mode. Meanwhile, the Earth below rotates on its axis from west to east, making one complete revolution every 24 hours. This interaction between the fixed orbital path of the remote sensing system and the Earth's rotation on its axis skews the geometry of the imagery collected.

Page 6: Geometric correction

Image Skew

a) Landsat satellites 4, 5, and 7 are in a Sun-synchronous orbit with an angle of inclination of 98.2°. The Earth rotates on its axis from west to east as imagery is collected.
b) Pixels in three hypothetical scans (consisting of 16 lines each) of Landsat TM data. While the matrix (raster) may look correct, it actually contains systematic geometric distortion caused by the angular velocity of the satellite in its descending orbital path in conjunction with the surface velocity of the Earth as it rotates on its axis while collecting a frame of imagery.
c) The result of adjusting (deskewing) the original Landsat TM data to the west to compensate for Earth rotation effects. Landsats 4, 5, and 7 use a bidirectional cross-track scanning mirror.
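The size of the offset that deskewing must remove can be estimated from the Earth's surface velocity and the time taken to collect one frame. The sketch below is illustrative, not from the slides; the function names, the 24-second frame time, and the Landsat-like scenario are my assumptions.

```python
import math

# Rough eastward ground offset introduced by Earth rotation while a frame
# of imagery is collected. Constants and the 24 s frame time are
# illustrative assumptions, not values given in the slides.

EARTH_RADIUS_M = 6_378_137   # WGS-84 equatorial radius
SIDEREAL_DAY_S = 86_164      # one full rotation of the Earth

def surface_velocity(lat_deg: float) -> float:
    """Eastward surface velocity of the rotating Earth (m/s) at a latitude."""
    circumference = 2 * math.pi * EARTH_RADIUS_M * math.cos(math.radians(lat_deg))
    return circumference / SIDEREAL_DAY_S

def skew_offset(lat_deg: float, frame_time_s: float) -> float:
    """Ground offset (m) accumulated during one frame's acquisition time."""
    return surface_velocity(lat_deg) * frame_time_s

print(surface_velocity(0.0))     # ~465 m/s at the equator
print(skew_offset(45.0, 24.0))   # ~7.9 km westward shift to correct at 45° latitude
```

An offset of several kilometers per scene is why the deskewed raster in panel (c) is shifted noticeably to the west.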

Page 7: Geometric correction

Scanning System–Induced Variation in Ground Resolution Cell Size

An orbital multispectral scanning system scans through just a few degrees off-nadir as it collects data hundreds of kilometers above the Earth's surface (e.g., Landsat 7 data are collected at 705 km AGL). This configuration minimizes the amount of distortion introduced by the scanning system. Conversely, a suborbital multispectral scanning system may be operating just tens of kilometers AGL with a scan field of view of perhaps 70°. This introduces numerous types of geometric distortion that can be difficult to correct.

Page 8: Geometric correction

The ground resolution cell size along a single across-track scan is a function of a) the distance from the aircraft to the observation, where H is the altitude of the aircraft above ground level (AGL) at nadir and H sec φ off-nadir; b) the instantaneous field of view of the sensor, β, measured in radians; and c) the scan angle off-nadir, φ. Pixels off-nadir have semi-major and semi-minor axes (diameters) that define the resolution cell size. The total field of view of one scan line is θ. One-dimensional relief displacement and tangential scale distortion occur in the direction perpendicular to the line of flight and parallel with a line scan.
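The geometry described above can be made concrete with the standard across-track scanner relations: the nadir cell diameter is β·H, and off-nadir the cell stretches to β·H·sec²φ in the scan direction and β·H·sec φ in the flight direction. These textbook formulas are consistent with the H sec φ slant range mentioned in the caption but are not spelled out on this slide, so treat the sketch as an assumption.

```python
import math

# Across-track scanner ground resolution cell size (standard relations,
# assumed here rather than quoted from the slide):
#   nadir diameter              = beta * H
#   off-nadir, scan direction   = beta * H * sec^2(phi)   (semi-major axis)
#   off-nadir, flight direction = beta * H * sec(phi)     (semi-minor axis)
# H = altitude AGL (m); beta = IFOV (radians); phi = scan angle off-nadir.

def cell_size(H: float, beta_rad: float, phi_deg: float) -> tuple[float, float]:
    sec_phi = 1.0 / math.cos(math.radians(phi_deg))
    along_scan = beta_rad * H * sec_phi ** 2    # grows fastest off-nadir
    along_track = beta_rad * H * sec_phi
    return along_scan, along_track

# A 2.5 mrad IFOV at 1000 m AGL: a 2.5 m circle at nadir,
# an ellipse of roughly 5.0 m by 3.5 m at 45° off-nadir.
print(cell_size(1000, 0.0025, 0))
print(cell_size(1000, 0.0025, 45))
```

The sec²φ growth in the scan direction is exactly the source of the tangential scale distortion discussed on the later slides.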

Page 9: Geometric correction

Ground Swath Width

The ground swath width (gsw) is the length of the terrain strip remotely sensed by the system during one complete across-track sweep of the scanning mirror. It is a function of the total angular field of view of the sensor system, θ, and the altitude of the sensor system above ground level, H. It is computed as:

gsw = 2 × H × tan(θ / 2)

Page 10: Geometric correction

Ground Swath Width

The ground swath width of an across-track scanning system with a 90° total field of view and an altitude above ground level of 6,000 m would be 12,000 m:

gsw = 2 × 6000 × tan(90° / 2)
gsw = 2 × 6000 × 1
gsw = 12,000 m
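The worked example above can be checked directly with the slide's formula; the function name below is my own choice.

```python
import math

# The slide's swath-width relation: gsw = 2 * H * tan(theta / 2),
# where theta is the total angular field of view and H is altitude AGL.

def ground_swath_width(theta_deg: float, H_m: float) -> float:
    return 2 * H_m * math.tan(math.radians(theta_deg) / 2)

print(ground_swath_width(90, 6000))   # ≈ 12,000 m, matching the worked example
```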

Page 11: Geometric correction

The ground resolution cell size along a single across-track scan is a function of a) the distance from the aircraft to the observation, where H is the altitude of the aircraft above ground level (AGL) at nadir and H sec φ off-nadir; b) the instantaneous field of view of the sensor, β, measured in radians; and c) the scan angle off-nadir, φ. Pixels off-nadir have semi-major and semi-minor axes (diameters) that define the resolution cell size. The total field of view of one scan line is θ. One-dimensional relief displacement and tangential scale distortion occur in the direction perpendicular to the line of flight and parallel with a line scan.

Page 12: Geometric correction

Scanning System One-Dimensional Relief Displacement

Images acquired using an across-track scanning system also contain relief displacement. However, instead of being radial from a single principal point as in a vertical aerial photograph, the displacement takes place in a direction that is perpendicular to the line of flight for each and every scan line. In effect, the ground-resolution element at nadir functions like a principal point for each scan line. At nadir, the scanning system looks directly down on a tank, and it appears as a perfect circle. The greater the height of the object above the local terrain and the greater the distance of the top of the object from nadir (i.e., the line of flight), the greater the amount of one-dimensional relief displacement present. One-dimensional relief displacement is introduced in both directions away from nadir for each sweep of the across-track mirror.

Page 13: Geometric correction

a) Hypothetical perspective geometry of a vertical aerial photograph obtained over level terrain. Four 50-ft-tall tanks are distributed throughout the landscape and experience varying degrees of radial relief displacement the farther they are from the principal point (PP). b) An across-track scanning system introduces one-dimensional relief displacement perpendicular to the line of flight, and tangential scale distortion and compression the farther the object is from nadir. Linear features trending across the terrain are often recorded with s-shaped or sigmoid curvature characteristics due to tangential scale distortion and image compression.

Page 14: Geometric correction

Scanning System Tangential Scale DistortionScanning System Tangential Scale DistortionScanning System Tangential Scale DistortionScanning System Tangential Scale Distortion

The mirror on an across-track scanning system rotates at a constant speed and The mirror on an across-track scanning system rotates at a constant speed and typically views from 70° to 120typically views from 70° to 120 of terrain during a complete line scan. Of of terrain during a complete line scan. Of course, the amount depends on the specific sensor system. The terrain course, the amount depends on the specific sensor system. The terrain directly beneath the aircraft (at nadir) is closer to the aircraft than the terrain directly beneath the aircraft (at nadir) is closer to the aircraft than the terrain at the edges during a single sweep of the mirror. Therefore, because the at the edges during a single sweep of the mirror. Therefore, because the mirror rotates at a constant rate, the sensor scans a shorter geographic mirror rotates at a constant rate, the sensor scans a shorter geographic distance at nadir than it does at the edge of the image. This relationship tends distance at nadir than it does at the edge of the image. This relationship tends to to compresscompress features along an axis that is perpendicular to the line of flight. features along an axis that is perpendicular to the line of flight. The greater the distance of the ground-resolution cell from nadir, the greater The greater the distance of the ground-resolution cell from nadir, the greater the image scale compression. This is called the image scale compression. This is called tangential scale distortiontangential scale distortion. . Objects near nadir exhibit their proper shape. Objects near the edge of the Objects near nadir exhibit their proper shape. Objects near the edge of the flight line become compressed and their shape distorted. For example, flight line become compressed and their shape distorted. 
The mirror on an across-track scanning system rotates at a constant speed and typically views from 70° to 120° of terrain during a complete line scan; the exact amount depends on the specific sensor system. During a single sweep of the mirror, the terrain directly beneath the aircraft (at nadir) is closer to the aircraft than the terrain at the edges. Because the mirror rotates at a constant rate, the sensor therefore scans a shorter geographic distance at nadir than it does at the edge of the image. This relationship tends to compress features along an axis that is perpendicular to the line of flight. The greater the distance of the ground-resolution cell from nadir, the greater the image scale compression. This is called tangential scale distortion. Objects near nadir exhibit their proper shape; objects near the edge of the flight line become compressed and their shape distorted. For example, consider the tangential geometric distortion and compression of the circular swimming pools and one hectare of land the farther they are from nadir in the hypothetical diagram.
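The compression described above can be sketched numerically. A standard flat-terrain approximation (the 2.5-mrad IFOV and 3,000 m altitude below are illustrative values, not from these slides) is that a ground-resolution cell of nadir size H·b grows to roughly H·b·sec²θ cross-track and H·b·secθ along-track at scan angle θ, so displaying equally spaced pixels compresses ground features by about cos²θ perpendicular to the line of flight.

```python
import math

# Flat-terrain approximation of ground-resolution cell size for an
# across-track scanner; altitude and IFOV values are illustrative.
def cell_size(h_m: float, ifov_rad: float, theta_deg: float):
    """Return (cross-track, along-track) cell dimensions in meters
    at scan angle theta degrees off-nadir."""
    t = math.radians(theta_deg)
    nadir = h_m * ifov_rad               # D = b * H at nadir
    return nadir / math.cos(t) ** 2, nadir / math.cos(t)

cross, along = cell_size(3000.0, 2.5e-3, 45.0)
print(f"{cross:.1f} m cross-track, {along:.1f} m along-track")
```

At nadir the same call gives a 7.5 m cell in both directions; by 45° off-nadir the cell roughly doubles cross-track, which is the tangential compression seen in the displayed image.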

Page 15: Geometric correction

Scanning System Tangential Scale Distortion

Tangential scale distortion and compression in the far range cause linear features such as roads, railroads, and utility rights-of-way to exhibit an s-shape, or sigmoid distortion, when recorded on scanner imagery. Interestingly, if a linear feature is parallel with or perpendicular to the line of flight, it does not experience sigmoid distortion.

Page 16: Geometric correction

a) Hypothetical perspective geometry of a vertical aerial photograph obtained over level terrain. Four 50-ft-tall tanks are distributed throughout the landscape and experience varying degrees of radial relief displacement the farther they are from the principal point (PP). b) An across-track scanning system introduces one-dimensional relief displacement perpendicular to the line of flight, and tangential scale distortion and compression the farther the object is from nadir. Linear features trending across the terrain are often recorded with s-shaped, or sigmoid, curvature characteristics due to tangential scale distortion and image compression.

Page 17: Geometric correction

External Geometric Error

External geometric errors are usually introduced by phenomena that vary in nature through space and time. The most important external variables that can cause geometric error in remote sensor data are random movements by the aircraft (or spacecraft) at the exact time of data collection, which usually involve:

• altitude changes, and/or
• attitude changes (roll, pitch, and yaw).

Page 18: Geometric correction

Altitude Changes

Remote sensing systems flown at a constant altitude above ground level (AGL) produce imagery with a uniform scale all along the flightline. For example, a camera with a 12-in. focal-length lens flown at 20,000 ft AGL will yield 1:20,000-scale imagery. If the aircraft or spacecraft gradually changes its altitude along a flightline, then the scale of the imagery will change: increasing the altitude results in smaller-scale imagery (e.g., 1:25,000), while decreasing the altitude results in larger-scale imagery (e.g., 1:15,000). The same relationship holds true for digital remote sensing systems collecting imagery on a pixel-by-pixel basis.
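The scale arithmetic above follows directly from scale = focal length ÷ altitude AGL (in the same units), so the denominator of a 1:N scale is N = H/f. A minimal sketch:

```python
# Photo scale 1:N from focal length and altitude AGL in the same units:
# N = H / f. A 12-in. (1-ft) lens at 20,000 ft AGL gives 1:20,000.
def scale_denominator(focal_len_ft: float, altitude_ft: float) -> float:
    return altitude_ft / focal_len_ft

print(scale_denominator(1.0, 20_000.0))  # -> 20000.0 (1:20,000)
print(scale_denominator(1.0, 25_000.0))  # -> 25000.0 (higher altitude, smaller scale)
```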

The diameter of the spot size on the ground (D; the nominal spatial resolution) is a function of the instantaneous field of view (b) and the altitude above ground level (H) of the sensor system, i.e.,

D = b × H
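As a quick numeric check of this relationship, with b expressed in radians (the 2.5-mrad IFOV and 1,000 m altitude below are illustrative values, not from the slides):

```python
# Nominal spatial resolution (ground spot diameter) D = b * H,
# where b is the IFOV in radians and H is altitude AGL. Illustrative values.
def spot_size(ifov_rad: float, altitude_m: float) -> float:
    return ifov_rad * altitude_m

print(f"{spot_size(2.5e-3, 1_000.0):.2f} m")  # -> 2.50 m
```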

Page 19: Geometric correction

a) Geometric modification in imagery may be introduced by changes in the aircraft or satellite platform altitude above ground level (AGL) at the time of data collection. Increasing altitude results in smaller-scale imagery, while decreasing altitude results in larger-scale imagery. b) Geometric modification may also be introduced by aircraft or spacecraft changes in attitude, including roll, pitch, and yaw. An aircraft flies in the x-direction. Roll occurs when the aircraft or spacecraft fuselage maintains directional stability but the wings move up or down, i.e., they rotate about the x-axis (omega: ω). Pitch occurs when the wings are stable but the fuselage nose or tail moves up or down, i.e., it rotates about the y-axis (phi: φ). Yaw occurs when the wings remain parallel but the fuselage is forced by wind to be oriented at some angle to the left or right of the intended line of flight, i.e., it rotates about the z-axis (kappa: κ). Thus, the plane flies straight but all remote sensor data are displaced by κ. Remote sensing data are often distorted due to a combination of changes in altitude and attitude (roll, pitch, and yaw).

Page 20: Geometric correction

Attitude Changes

Satellite platforms are usually stable because they are not buffeted by atmospheric turbulence or wind. Conversely, suborbital aircraft must constantly contend with atmospheric updrafts, downdrafts, headwinds, tailwinds, and crosswinds when collecting remote sensor data. Even when the remote sensing platform maintains a constant altitude AGL, it may rotate randomly about three separate axes that are commonly referred to as roll, pitch, and yaw. Quality remote sensing systems often have gyro-stabilization equipment that isolates the sensor system from the roll and pitch movements of the aircraft. Systems without stabilization equipment introduce some geometric error into the remote sensing dataset through variations in roll, pitch, and yaw that can only be corrected using ground control points.

Page 21: Geometric correction

a) Geometric modification in imagery may be introduced by changes in the aircraft or satellite platform altitude above ground level (AGL) at the time of data collection. Increasing altitude results in smaller-scale imagery, while decreasing altitude results in larger-scale imagery. b) Geometric modification may also be introduced by aircraft or spacecraft changes in attitude, including roll, pitch, and yaw. An aircraft flies in the x-direction. Roll occurs when the aircraft or spacecraft fuselage maintains directional stability but the wings move up or down, i.e., they rotate about the x-axis (omega: ω). Pitch occurs when the wings are stable but the fuselage nose or tail moves up or down, i.e., it rotates about the y-axis (phi: φ). Yaw occurs when the wings remain parallel but the fuselage is forced by wind to be oriented at some angle to the left or right of the intended line of flight, i.e., it rotates about the z-axis (kappa: κ). Thus, the plane flies straight but all remote sensor data are displaced by κ. Remote sensing data are often distorted due to a combination of changes in altitude and attitude (roll, pitch, and yaw).

Page 22: Geometric correction

Ground Control Points

Geometric distortions introduced by sensor system attitude (roll, pitch, and yaw) and/or altitude changes can be corrected using ground control points and appropriate mathematical models. A ground control point (GCP) is a location on the surface of the Earth (e.g., a road intersection) that can be identified on the imagery and located accurately on a map. The image analyst must be able to obtain two distinct sets of coordinates associated with each GCP:

• image coordinates specified in i rows and j columns, and
• map coordinates (e.g., x, y measured in degrees of latitude and longitude, feet in a state plane coordinate system, or meters in a Universal Transverse Mercator projection).

The paired coordinates (i, j and x, y) from many GCPs (e.g., 20) can be modeled to derive geometric transformation coefficients. These coefficients may be used to geometrically rectify the remote sensor data to a standard datum and map projection.
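The modeling step described here can be sketched as a first-order (affine) least-squares fit. The four GCP pairs below are fabricated, internally consistent values for illustration only, and the helper name image_to_map is hypothetical; real work would use many more well-distributed GCPs (e.g., 20) and check the quality of the fit.

```python
import numpy as np

# First-order (affine) transformation fitted to paired GCP coordinates:
#   x' = a0 + a1*col + a2*row,  y' = b0 + b1*col + b2*row
# Image (col, row) and UTM (easting, northing) pairs are illustrative.
image_ij = np.array([[10.0, 20.0], [200.0, 30.0], [50.0, 180.0], [220.0, 210.0]])
map_xy = np.array([[500020.0, 3751960.0],
                   [500400.0, 3751940.0],
                   [500100.0, 3751640.0],
                   [500440.0, 3751580.0]])

# Design matrix [1, col, row]; least squares gives the coefficients.
A = np.column_stack([np.ones(len(image_ij)), image_ij])
coef_x, *_ = np.linalg.lstsq(A, map_xy[:, 0], rcond=None)
coef_y, *_ = np.linalg.lstsq(A, map_xy[:, 1], rcond=None)

def image_to_map(col: float, row: float) -> tuple:
    """Apply the fitted coefficients to one image coordinate (hypothetical helper)."""
    v = np.array([1.0, col, row])
    return float(coef_x @ v), float(coef_y @ v)

easting, northing = image_to_map(100.0, 100.0)
```

A higher-order polynomial fit would simply extend the design matrix with col², row², and col·row columns when the distortion warrants it.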

Page 23: Geometric correction

Ground Control Points

Several alternatives for obtaining accurate ground control point (GCP) map coordinate information for image-to-map rectification include:

• hard-copy planimetric maps (e.g., SOI topographic maps), where GCP coordinates are extracted using simple ruler measurements or a coordinate digitizer;

• digital planimetric maps (e.g., SOI topographic maps), where GCP coordinates are extracted directly from the digital map on the screen;

• digital orthophotoquads that are already geometrically rectified (e.g., SOI digital orthophoto quarter quadrangles, DOQQ); and/or

• global positioning system (GPS) instruments that may be taken into the field to obtain the coordinates of objects to within ±20 cm if the GPS data are differentially corrected.

Page 24: Geometric correction

Types of Geometric Correction

Commercial remote sensor data (e.g., IRS, SPOT, Space Imaging) already have much of the systematic error removed. Unless otherwise processed, however, unsystematic random error remains in the image, making it non-planimetric (i.e., the pixels are not in their correct x, y planimetric map positions). Two common geometric correction procedures are often used by scientists to make digital remote sensor data of value:

• image-to-map rectification, and
• image-to-image registration.

The general rule of thumb is to rectify remotely sensed data to a standard map projection so that they may be used in conjunction with other spatial information in a GIS to solve problems. Therefore, most of the discussion will focus on image-to-map rectification.

Page 25: Geometric correction

Image-to-Map Rectification

Image-to-map rectification is the process by which the geometry of an image is made planimetric. Whenever accurate area, direction, and distance measurements are required, image-to-map geometric rectification should be performed. It may not, however, remove all the distortion caused by topographic relief displacement in images. The image-to-map rectification process normally involves pairing selected ground control point (GCP) image pixel coordinates (row and column) with their map coordinate counterparts (e.g., meters northing and easting in a Universal Transverse Mercator map projection).


Page 26: Geometric correction

a) U.S. Geological Survey 7.5-minute 1:24,000-scale topographic map of Charleston, SC, with three ground control points identified (13, 14, and 16). The GCP map coordinates are measured in meters easting (x) and northing (y) in a Universal Transverse Mercator projection. b) Unrectified 11/09/82 Landsat TM band 4 image with the three ground control points identified. The image GCP coordinates are measured in rows and columns.


Page 27: Geometric correction

Image-to-Image Registration

Image-to-image registration is the translation and rotation alignment process by which two images of like geometry and of the same geographic area are positioned coincident with respect to one another, so that corresponding elements of the same ground area appear in the same place on the registered images. This type of geometric correction is used when it is not necessary to have each pixel assigned a unique x, y coordinate in a map projection. For example, we might want to make a cursory examination of two images obtained on different dates to see whether any change has taken place.


Page 28: Geometric correction

Hybrid Approach to Image Rectification/Registration

The same general image processing principles are used in both image rectification and image registration. The difference is that in image-to-map rectification the reference is a map in a standard map projection, while in image-to-image registration the reference is another image. If a rectified image is used as the reference base (rather than a traditional map), any image registered to it will inherit the geometric errors present in the reference image. Because of this, most serious Earth science remote sensing research is based on analysis of data that have been rectified to a map base. However, when conducting rigorous change detection between two or more dates of remotely sensed data, it may be useful to adopt a hybrid approach involving both image-to-map rectification and image-to-image registration.


Page 29: Geometric correction

a) Previously rectified Landsat TM band 4 data obtained on November 9, 1982, resampled to 30 × 30 m pixels using nearest-neighbor resampling logic and a UTM map projection. b) Unrectified October 14, 1987, Landsat TM band 4 data to be registered to the rectified 1982 Landsat scene.


Image-to-Image Hybrid Rectification

Page 30: Geometric correction

Image-to-Map Geometric Rectification Logic

Two basic operations must be performed to geometrically rectify a remotely sensed image to a map coordinate system:

• spatial interpolation, and
• intensity interpolation.


Page 31: Geometric correction

Spatial Interpolation

The geometric relationship between the input pixel coordinates (column and row, referred to as x′, y′) and the associated map coordinates of the same point (x, y) must be identified. A number of GCP pairs are used to establish the nature of the geometric coordinate transformation that must be applied to rectify, or fill, every pixel in the output image (x, y) with a value from a pixel in the unrectified input image (x′, y′). This process is called spatial interpolation.

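The transformation described above can be sketched with a least-squares fit in NumPy. The GCP coordinates below are synthetic, invented purely to illustrate the mechanics; a real workflow would use measured map/image pairs:

```python
import numpy as np

def fit_affine(map_xy, img_xy):
    """Fit x' = a0 + a1*x + a2*y and y' = b0 + b1*x + b2*y by least squares."""
    x, y = map_xy[:, 0], map_xy[:, 1]
    A = np.column_stack([np.ones_like(x), x, y])          # design matrix
    a, *_ = np.linalg.lstsq(A, img_xy[:, 0], rcond=None)  # a0, a1, a2
    b, *_ = np.linalg.lstsq(A, img_xy[:, 1], rcond=None)  # b0, b1, b2
    return a, b

# Synthetic check: generate image coordinates from a known affine, recover it.
rng = np.random.default_rng(0)
map_xy = rng.uniform(0, 1000, size=(6, 2))                # six invented GCPs
true_a = np.array([10.0, 0.05, 0.002])
true_b = np.array([500.0, -0.001, -0.05])
A = np.column_stack([np.ones(6), map_xy[:, 0], map_xy[:, 1]])
img_xy = np.column_stack([A @ true_a, A @ true_b])

a, b = fit_affine(map_xy, img_xy)
print(np.allclose(a, true_a), np.allclose(b, true_b))  # True True
```

With noiseless synthetic GCPs the fit recovers the coefficients exactly; with real GCPs the residuals of this fit are what an analyst inspects as RMS error.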

Page 32: Geometric correction

Intensity Interpolation

Pixel brightness values must also be determined. Unfortunately, there is no direct one-to-one relationship between the movement of input pixel values to output pixel locations. It will be shown that a pixel in the rectified output image often requires a value from the input pixel grid that does not fall neatly on a row-and-column coordinate. When this occurs, there must be some mechanism for determining the brightness value (BV) to be assigned to the output rectified pixel. This process is called intensity interpolation.

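One standard option for assigning a BV at a fractional input location is bilinear interpolation, which weights the four surrounding brightness values by their distance from the point. A minimal sketch (the tiny test image is invented):

```python
import numpy as np

def bilinear_bv(image, xp, yp):
    """Brightness value at a fractional input location (xp = column, yp = row),
    as a distance-weighted average of the four surrounding pixels."""
    c0, r0 = int(np.floor(xp)), int(np.floor(yp))
    dx, dy = xp - c0, yp - r0
    return (image[r0,     c0    ] * (1 - dx) * (1 - dy) +
            image[r0,     c0 + 1] * dx       * (1 - dy) +
            image[r0 + 1, c0    ] * (1 - dx) * dy +
            image[r0 + 1, c0 + 1] * dx       * dy)

img = np.array([[10.0, 20.0],
                [30.0, 40.0]])
print(bilinear_bv(img, 0.5, 0.5))  # 25.0, the mean of the four neighbors
```

Nearest-neighbor resampling would instead copy the single closest BV unchanged, which preserves the original radiometry and is therefore often preferred before classification.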

Page 33: Geometric correction

Spatial Interpolation Using Coordinate Transformations

Image-to-map rectification requires that polynomial equations be fit to the GCP data using least-squares criteria to model the corrections directly in the image domain, without explicitly identifying the source of the distortion. Depending on the distortion in the imagery, the number of GCPs used, and the degree of topographic relief displacement in the area, higher-order polynomial equations may be required to geometrically correct the data. The order of the rectification is simply the highest exponent used in the polynomial.

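The order determines how many coefficients must be solved for per coordinate, and hence the minimum number of GCPs: 3 for first order, 6 for second, 10 for third. A small sketch of how the design-matrix terms could be enumerated (the term ordering here is one convention among several):

```python
import numpy as np

def poly_terms(x, y, order):
    """Design-matrix columns for a 2-D polynomial of the given order.
    Order 1 yields 3 terms (1, y, x); order 2 adds x*y, x**2, y**2, etc."""
    cols = [x**i * y**j
            for i in range(order + 1)
            for j in range(order + 1 - i)]
    return np.column_stack(cols)

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
for order in (1, 2, 3):
    print(order, poly_terms(x, y, order).shape[1])  # 3, 6, 10 terms
```

Each additional order buys flexibility to bend the fitted surface, but also demands more well-distributed GCPs to keep the fit stable.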

Page 34: Geometric correction

Concept of how different-order transformations fit a hypothetical surface, illustrated in cross-section. a) Original observations. b) A first-order linear transformation fits a plane to the data. c) Second-order quadratic fit. d) Third-order cubic fit.


Page 35: Geometric correction

NASA ATLAS near-infrared image of Lake Murray, SC, obtained on October 7, 1997, at a spatial resolution of 2 × 2 m. The image was rectified using a second-order polynomial to adjust for the significant geometric distortion in the original dataset caused by the aircraft drifting off course during data collection.


Page 36: Geometric correction

Spatial Interpolation Using Coordinate Transformations

Generally, for moderate distortions in a relatively small area of an image (e.g., a quarter of a Landsat TM scene), a first-order, six-parameter, affine (linear) transformation is sufficient to rectify the imagery to a geographic frame of reference.

This type of transformation can model six kinds of distortion in the remote sensor data, including:

• translation in x and y,
• scale changes in x and y,
• skew, and
• rotation.

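One way to see that six parameters cover those four distortions is to compose them into the 2 × 2 part plus offset of an affine transform. All parameter values below are arbitrary illustrations, not values from any real sensor:

```python
import numpy as np

# Illustrative (assumed) distortion parameters.
theta = np.deg2rad(5.0)            # rotation
sx, sy = 1.1, 0.9                  # scale changes in x and y
kx = 0.02                          # skew
tx, ty = 100.0, -50.0              # translation in x and y

R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation
K = np.array([[1.0, kx],
              [0.0, 1.0]])                        # skew (shear)
S = np.diag([sx, sy])                             # scale
M = R @ K @ S                                     # combined 2x2 part

# The six affine coefficients: x = a0 + a1*x' + a2*y', y = b0 + b1*x' + b2*y'
a0, a1, a2 = tx, M[0, 0], M[0, 1]
b0, b1, b2 = ty, M[1, 0], M[1, 1]

# Rotation and this skew preserve area, so det(M) equals the area scale sx*sy.
print(np.isclose(np.linalg.det(M), sx * sy))  # True
```

Two translation terms plus the four entries of M account for exactly the six coefficients of the first-order transformation.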

Page 37: Geometric correction

Spatial Interpolation Using Coordinate Transformations: Input-to-Output (Forward) Mapping

When all six operations are combined into a single expression it becomes:

x = a0 + a1x′ + a2y′
y = b0 + b1x′ + b2y′

where x and y are positions in the output-rectified image or map, and x′ and y′ represent corresponding positions in the original input image. These two equations can be used to perform what is commonly referred to as input-to-output, or forward, mapping. The equations function according to the logic shown in the next figure. In this example, each pixel in the input grid (e.g., value 15 at x′, y′ = 2, 3) is sent to an x, y location in the output image according to the six coefficients shown.

Page 38: Geometric correction

a) The logic of filling a rectified output matrix with values from an unrectified input image matrix using input-to-output (forward) mapping logic. b) The logic of filling a rectified output matrix with values from an unrectified input image matrix using output-to-input (inverse) mapping logic and nearest-neighbor resampling.

Output-to-input (inverse) mapping logic is the preferred methodology because it results in a rectified output matrix with values at every pixel location.

Page 39: Geometric correction

Spatial Interpolation Using Coordinate Transformations: Input-to-Output (Forward) Mapping

Forward mapping logic works well if we are rectifying the location of discrete coordinates found along a linear feature, such as a road, in a vector map. In fact, cartographic mapping and geographic information systems typically rectify vector data using forward mapping logic. However, when we are trying to fill a rectified output grid (matrix) with values from an unrectified input image, forward mapping logic does not work well. The basic problem is that the six coefficients may require that value 15 from the x′, y′ location (2, 3) in the input image be located at a floating-point location in the output image, e.g., at x, y = (5, 3.5). The output location does not fall exactly on integer x and y output map coordinates. In fact, using forward mapping logic can result in output matrix pixels with no output value, a serious condition that reduces the utility of the remote sensor data for useful applications. For this reason, most remotely sensed data are geometrically rectified using output-to-input, or inverse, mapping logic.
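The hole problem is easy to reproduce: forward-map a small grid through an affine that scales by more than 1 and count output cells that never receive a value. The coefficients are invented for the demonstration:

```python
import numpy as np

# Assumed forward-mapping coefficients: x = a0 + a1*x' + a2*y',
# y = b0 + b1*x' + b2*y'. A pure 1.4x scale in each axis.
a0, a1, a2 = 0.0, 1.4, 0.0
b0, b1, b2 = 0.0, 0.0, 1.4

n = 10
out = np.full((n, n), np.nan)          # NaN marks "never filled"
for yp in range(n):                    # y' rows of the input grid
    for xp in range(n):                # x' columns of the input grid
        x = a0 + a1 * xp + a2 * yp     # forward-mapped output column
        y = b0 + b1 * xp + b2 * yp     # forward-mapped output row
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < n and 0 <= yi < n:
            out[yi, xi] = 1.0          # this output cell receives a BV

holes = int(np.isnan(out).sum())
print(holes > 0)  # True: many output pixels are never assigned a value
```

Scaling spreads the input pixels apart in the output grid, so entire rows and columns of the output matrix fall between mapped locations and stay empty.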

Page 40: Geometric correction

Spatial Interpolation Using Coordinate Transformations: Output-to-Input (Inverse) Mapping

Output-to-input, or inverse, mapping logic is based on the following two equations:

x′ = a0 + a1x + a2y
y′ = b0 + b1x + b2y

where x and y are positions in the output-rectified image or map, and x′ and y′ represent corresponding positions in the original input image. The rectified output matrix, consisting of x (column) and y (row) coordinates, is filled in a systematic manner.
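Inverse mapping with nearest-neighbor resampling can be sketched as below; because every output cell is visited, every output pixel is guaranteed a value. The affine coefficients and the 5 × 5 input image are invented for illustration:

```python
import numpy as np

# Assumed inverse-mapping coefficients: x' = a0 + a1*x + a2*y,
# y' = b0 + b1*x + b2*y. A pure 0.5x scale back into the input.
a0, a1, a2 = 0.0, 0.5, 0.0
b0, b1, b2 = 0.0, 0.0, 0.5

inp = np.arange(25, dtype=float).reshape(5, 5)   # unrectified input image
rows = cols = 10
out = np.empty((rows, cols))
for y in range(rows):                  # visit every output row...
    for x in range(cols):              # ...and every output column
        xp = a0 + a1 * x + a2 * y      # fractional input column x'
        yp = b0 + b1 * x + b2 * y      # fractional input row y'
        r = min(max(int(round(yp)), 0), inp.shape[0] - 1)
        c = min(max(int(round(xp)), 0), inp.shape[1] - 1)
        out[y, x] = inp[r, c]          # nearest-neighbor BV assignment

print(np.isnan(out).any())  # False: every output pixel received a value
```

Swapping the nearest-neighbor lookup for a bilinear or cubic convolution kernel changes only the last line of the loop; the visiting order stays the same.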

Page 41: Geometric correction

a) The logic of filling a rectified output matrix with values from an unrectified input image matrix using input-to-output (forward) mapping logic. b) The logic of filling a rectified output matrix with values from an unrectified input image matrix using output-to-input (inverse) mapping logic and nearest-neighbor resampling.

Output-to-input (inverse) mapping logic is the preferred methodology because it results in a rectified output matrix with values at every pixel location.


Page 42: Geometric correction

Spatial Interpolation Logic

x′ = 382.2366 + 0.034187x + (−0.005481)y
y′ = 130.162 + (−0.005576)x + (−0.0349150)y

x′ = a0 + a1x + a2y
y′ = b0 + b1x + b2y

The goal is to fill a matrix that is in a standard map projection with the appropriate values from a non-planimetric image.

Page 43: Geometric correction

Compute the Root-Mean-Squared Error of the Inverse Mapping Function

Using the six coordinate-transform coefficients that model distortions in the original scene, it is possible to use the output-to-input (inverse) mapping logic to transfer (relocate) pixel values from the original distorted image (x′, y′) to the grid of the rectified output image (x, y). However, before applying the coefficients to create the rectified output image, it is important to determine how well the six coefficients derived from the least-squares regression of the initial GCPs account for the geometric distortion in the input image. The method used most often involves the computation of the root-mean-square error (RMS_error) for each of the ground control points.


Page 44: Geometric correction

Spatial Interpolation Using Coordinate Transformation

RMS_error = √[(x′ − x_orig)² + (y′ − y_orig)²]

where x_orig and y_orig are the original row and column coordinates of the GCP in the image, and x′ and y′ are the computed or estimated coordinates in the original image when we utilize the six coefficients. Basically, the closer these paired values are to one another, the more accurate the algorithm (and its coefficients). The square root of the squared deviations represents a measure of the accuracy of each GCP. By computing RMS_error for all GCPs, it is possible to (1) see which GCPs contribute the greatest error, and (2) sum all the RMS_error.

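The per-GCP computation can be sketched as follows (the helper name and the sample coordinates are invented for illustration):

```python
import math

def gcp_rms_error(x_orig, y_orig, x_est, y_est):
    """RMS error for a single ground control point: the distance
    between its original image coordinates (x_orig, y_orig) and the
    (x', y') coordinates estimated by the six coefficients."""
    return math.sqrt((x_est - x_orig) ** 2 + (y_est - y_orig) ** 2)

# A GCP whose estimated position is off by 3 columns and 4 rows:
print(gcp_rms_error(150, 185, 153, 189))  # -> 5.0
```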

A way to measure the accuracy of a geometric rectification algorithm (actually, its coefficients) is to compute the root-mean-square error (RMS_error) for each ground control point using the equation above.


Page 45: Geometric correction

Spatial Interpolation Using Coordinate Transformation

Not all of the originally selected GCPs are used to compute the final six coefficients and constants that rectify the input image; an iterative process takes place. First, all of the original GCPs (e.g., 20 GCPs) are used to compute an initial set of six coefficients and constants. The root-mean-square error (RMSE) associated with each of these initial 20 GCPs is computed and summed. Then, the individual GCPs that contributed the greatest amount of error are determined and deleted. After the first iteration, this might leave only 16 of the 20 GCPs. A new set of coefficients is then computed using the 16 GCPs. The process continues until the RMSE reaches a user-specified threshold (e.g., <1 pixel error in the x-direction and <1 pixel error in the y-direction). The goal is to remove the GCPs that introduce the most error into the multiple-regression coefficient computation. When the acceptable threshold is reached, the final coefficients and constants are used to rectify the input image to an output image in a standard map projection, as previously discussed.

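The iterative deletion procedure can be sketched as follows. All names here are hypothetical, and NumPy's least-squares solver stands in for the multiple-regression step; GCPs are modeled as (x_map, y_map, x_image, y_image) tuples:

```python
import math
import numpy as np

def fit_affine(gcps):
    """Least-squares fit of x' = a0 + a1*x + a2*y and
    y' = b0 + b1*x + b2*y from (x_map, y_map, x_image, y_image) GCPs."""
    A = np.array([[1.0, x, y] for x, y, _, _ in gcps])
    a = np.linalg.lstsq(A, np.array([g[2] for g in gcps]), rcond=None)[0]
    b = np.linalg.lstsq(A, np.array([g[3] for g in gcps]), rcond=None)[0]
    return a, b

def gcp_error(g, a, b):
    """Distance between a GCP's true and estimated image coordinates."""
    x, y, xs, ys = g
    return math.hypot(a[0] + a[1] * x + a[2] * y - xs,
                      b[0] + b[1] * x + b[2] * y - ys)

def prune_gcps(gcps, threshold, min_points=4):
    """Refit, then delete the worst-fitting GCP, until the summed
    RMS error drops below the user-specified threshold."""
    while len(gcps) > min_points:
        a, b = fit_affine(gcps)
        if sum(gcp_error(g, a, b) for g in gcps) < threshold:
            break
        worst = max(gcps, key=lambda g: gcp_error(g, a, b))
        gcps = [g for g in gcps if g is not worst]
    return gcps, fit_affine(gcps)
```

A badly placed GCP dominates the initial residuals, gets deleted first, and the refit over the remaining points then usually falls below the threshold quickly.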

Page 46: Geometric correction

Characteristics of Ground Control Points

Point    Order of          Easting on    Northing on    X′       Y′       Total RMS error
Number   Points Deleted    Map (X1)      Map (Y1)       pixel    pixel    after this point deleted
  1          12             597,120      3,627,050       150      185      0.501
  2           9             597,680      3,627,800       166      165      0.663
 ...
 20           1             601,700      3,632,580       283       12      8.542

Total RMS error with all 20 GCPs used: 11.016

If we delete GCP #20, the total RMS error will be 8.542.

Page 47: Geometric correction

Intensity Interpolation

Intensity interpolation involves the extraction of a brightness value from an x′, y′ location in the original (distorted) input image and its relocation to the appropriate x, y coordinate location in the rectified output image. This pixel-filling logic is used to produce the output image line by line, column by column. Most of the time the x′ and y′ coordinates to be sampled in the input image are floating-point numbers (i.e., they are not integers). For example, in the Figure we see that pixel 5, 4 (x, y) in the output image is to be filled with the value from coordinates 2.4, 2.7 (x′, y′) in the original input image. When this occurs, there are several methods of brightness value (BV) intensity interpolation that can be applied, including:

• nearest neighbor,
• bilinear interpolation, and
• cubic convolution.

The practice is commonly referred to as resampling.


Page 48: Geometric correction

Nearest-Neighbor Resampling

The brightness value closest to the predicted (x′, y′) coordinate is assigned to the output (x, y) coordinate.
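A minimal sketch (assuming 0-based row/column indexing; the grid values are invented):

```python
def nearest_neighbor(image, x_prime, y_prime):
    """Assign the brightness value of the input pixel whose centre is
    closest to the predicted floating-point (x', y') location.
    `image` is row-major: image[row][col], i.e. image[y][x]."""
    return image[int(round(y_prime))][int(round(x_prime))]

grid = [[10,  20,  30,  40],
        [50,  60,  70,  80],
        [90, 100, 110, 120],
        [130, 140, 150, 160]]

# The text's example location (x', y') = (2.4, 2.7) rounds to column 2, row 3:
print(nearest_neighbor(grid, 2.4, 2.7))  # -> 150
```

Because each output pixel simply copies one input value, nearest-neighbor resampling never invents new brightness values, which is why it is often preferred before classification.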

Page 49: Geometric correction

Bilinear Interpolation

Assigns output pixel values by interpolating brightness values in two orthogonal directions in the input image. It basically fits a plane to the four pixel values nearest the desired position (x′, y′) and then computes a new brightness value based on the weighted distances to these points. For example, the distances from the requested (x′, y′) position at 2.4, 2.7 in the input image to the closest four input pixel coordinates (2,2; 3,2; 2,3; 3,3) are computed. Also, the closer a pixel is to the desired x′, y′ location, the more weight it will have in the final computation of the average.


BV_wt = [ Σ_{k=1..4} Z_k / D_k² ] / [ Σ_{k=1..4} 1 / D_k² ]

where Z_k are the surrounding four data point values, and D_k² are the distances squared from the point in question (x′, y′) to these data points.

Page 50: Geometric correction

Bilinear Interpolation

Page 51: Geometric correction

Cubic Convolution

Assigns values to output pixels in much the same manner as bilinear interpolation, except that the weighted values of the 16 pixels surrounding the location of the desired x′, y′ pixel are used to determine the value of the output pixel.


BV_wt = [ Σ_{k=1..16} Z_k / D_k² ] / [ Σ_{k=1..16} 1 / D_k² ]

where Z_k are the surrounding sixteen data point values, and D_k² are the distances squared from the point in question (x′, y′) to these data points.

Page 52: Geometric correction

Cubic Convolution

Page 53: Geometric correction

Universal Transverse Mercator (UTM) grid zone with associated parameters. This projection is often used when rectifying remote sensor data to a base map. It is found on U.S. Geological Survey 7.5- and 15-minute quadrangles.


Page 54: Geometric correction

Image Mosaicking

Mosaicking n rectified images requires several steps:

1. Individual images should be rectified to the same map projection and datum. Ideally, rectification of the n images is performed using the same intensity interpolation resampling logic (e.g., nearest-neighbor) and pixel size (e.g., multiple Landsat TM scenes to be mosaicked are often resampled to 30 × 30 m).


Page 55: Geometric correction

Image Mosaicking

2. One of the images to be mosaicked is designated as the base image. The base image and image 2 will normally overlap a certain amount (e.g., 20% to 30%).

3. A representative geographic area in the overlap region is identified. This area in the base image is contrast stretched according to user specifications. The histogram of this geographic area in the base image is extracted. The histogram from the base image is then applied to image 2 using a histogram-matching algorithm. This causes the two images to have approximately the same grayscale characteristics.

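The histogram matching in step 3 can be sketched with a CDF-mapping approach (a common implementation, not necessarily the one used by any particular image processing system; the function name is invented):

```python
import numpy as np

def match_histogram(image2, base_region):
    """Remap image2's grey levels so their cumulative distribution
    approximates that of the base-image overlap region."""
    src_vals, src_counts = np.unique(image2, return_counts=True)
    ref_vals, ref_counts = np.unique(base_region, return_counts=True)
    src_cdf = np.cumsum(src_counts) / image2.size
    ref_cdf = np.cumsum(ref_counts) / base_region.size
    # For each grey level in image2, pick the base-image grey level
    # with the nearest cumulative frequency.
    lut = dict(zip(src_vals, np.interp(src_cdf, ref_cdf, ref_vals)))
    return np.vectorize(lut.get)(image2)
```

After remapping, corresponding grey levels in the two scenes carry roughly the same cumulative frequency, which is what makes the seam less visible before feathering.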

Page 56: Geometric correction

Image Mosaicking

4. It is possible to have the pixel brightness values in one scene simply dominate the pixel values in the overlapping scene. Unfortunately, this can result in noticeable seams in the final mosaic. Therefore, it is common to blend the seams between mosaicked images using feathering. Some digital image processing systems allow the user to specify a feathering buffer distance (e.g., 200 pixels) wherein 0% of the base image is used in the blending at the edge and 100% of image 2 is used to make the output image. At the specified distance (e.g., 200 pixels) in from the edge, 100% of the base image is used to make the output image and 0% of image 2 is used. At 100 pixels in from the edge, 50% of each image is used to make the output file.

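The linear blend described in step 4 can be sketched as follows (the 200-pixel buffer is the example value from the text; the function name is invented):

```python
def feather_weights(dist_from_edge, buffer_px=200):
    """Return (base_weight, image2_weight) for a pixel at a given
    distance in from the seam edge: 0% base image at the edge, 100%
    base image at buffer_px and beyond, varying linearly in between."""
    w_base = min(max(dist_from_edge / buffer_px, 0.0), 1.0)
    return w_base, 1.0 - w_base

print(feather_weights(0))    # -> (0.0, 1.0)  edge: all image 2
print(feather_weights(100))  # -> (0.5, 0.5)  mid-buffer: 50/50 blend
print(feather_weights(200))  # -> (1.0, 0.0)  full buffer: all base image
```

The output brightness at each blended pixel would then be w_base × BV_base + w_image2 × BV_image2.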

Page 57: Geometric correction

Mosaicking

The seam between adjacent images being mosaicked may be minimized using a) cut-line feathering logic, or b) edge feathering.

Page 58: Geometric correction

Image Mosaicking

Sometimes analysts prefer to use a linear feature such as a river or road to subdue the edge between adjacent mosaicked images. In this case, the analyst identifies a polyline in the image (using an annotation tool) and then specifies a buffer distance away from the line, as before, within which the feathering will take place. It is not absolutely necessary to use natural or man-made features when performing cut-line feathering; any user-specified polyline will do.


Page 59: Geometric correction

Mosaicking