
PRE-PROCESSING OF REMOTE SENSING DATA


    INTRODUCTION

    What is pre-processing?

Pre-processing can be defined as the act of processing data beforehand: the data are analysed and appropriate steps are taken before they are processed further. Pre-processing of remote sensing data therefore involves processing the data (i.e. imagery) before further analysis and information extraction are carried out. In pre-processing, corrections are applied to the remote sensing data.

    TYPES OF CORRECTION

Pre-processing is grouped into two types of correction:

- Radiometric correction
- Geometric correction

    RADIOMETRIC CORRECTION

Radiometric correction involves correcting data for sensor irregularities and for unwanted sensor and atmospheric noise. The data are then converted so that they accurately represent the reflected or emitted radiation measured by the sensor. It also involves adjusting the DN (Digital Number) values in an image; the digital number is a number expressing the degree of brightness of a pixel. This is done so that all areas of an image have the same linear relationship between DN and either radiance or back-scatter.

Radiometric distortions are introduced by the atmosphere between the surface and the sensor. Scattering in the atmosphere causes fine detail in image data to be obscured, and the effect is larger at the edges of the swath (the path cut by a single sweep of the satellite). Scattering depends on wavelength and is also a function of relative humidity, atmospheric pressure, temperature and visibility (a measure of the concentration of larger particles, or aerosols, in the atmosphere).

Radiometric correction is concerned with improving the accuracy of surface spectral reflectance, emittance or back-scattered measurements obtained using a remote sensing system. Brightness value inconsistencies caused by the sensors and by environmental noise factors are balanced or normalized across and between image coverages and spectral bands.

There are five primary objectives for applying radiometric corrections to digital remotely sensed data. Four of them pertain to achieving consistency in relative image brightness and one involves absolute quantification of brightness values. Relative correspondence of image brightness magnitudes may be desired for pixels: (1) within a single image (e.g., an orbit segment or image frame), (2) between images (e.g., adjacent, overlapping frames), (3) between spectral band images, and (4) between image dates.

Before listing the types of radiometric errors, it is essential to mention the error sources. Error sources include:

- Internal errors: these errors are introduced by the remote sensing system itself. They are generally systematic (predictable) and may be identified and then corrected based on pre-launch or in-flight calibration measurements. For example, n-line striping in imagery may be caused by a single uncalibrated detector. In many instances, radiometric correction adjusts for detector miscalibration.
- External errors: these are introduced by phenomena that vary in nature through space and time, such as the atmosphere, terrain elevation and slope. Some external errors may be corrected by relating empirical ground observations (i.e. radiometric or geometric ground control points) to the sensor measurements.

    Types of radiometric errors

    Types of radiometric errors include:

- Sensor error (internal error)
- Atmospheric error (external error)
- Topographic error (external error)

    Correcting sensor error

Ideally, the radiance recorded by a remote sensing system in its various bands is an accurate representation of the radiance actually leaving the feature of interest (e.g., soil, vegetation, atmosphere, water, or urban land cover) on the Earth's surface or in the atmosphere. Unfortunately, noise (error) can enter the data acquisition system at several points. For example, radiometric error in remotely sensed data may be introduced by the sensor system itself when the individual detectors do not function properly or are improperly calibrated. Several of the more common remote sensing system-induced radiometric errors include:

- Random bad pixels (shot noise)
- Line start/stop problems
- Line or column drop-outs
- Partial line or column drop-outs
- Line or column striping

    Random bad pixels (Shot noise):

Sometimes an individual detector does not record spectral data for an individual pixel. When this occurs randomly, it is called a bad pixel. When there are numerous random bad pixels within the scene, it is called shot noise because it appears as if the image was shot by a shotgun. Normally these bad pixels contain values of 0 or 255 (in 8-bit data) in one or more of the bands. Shot noise is identified and repaired using the following methodology. It is first necessary to locate each bad pixel in the band k dataset. A simple thresholding algorithm makes a pass through the dataset and flags any pixel (BV(i,j,k)) having a brightness value of zero (assuming that values of 0 represent shot noise and not a real land cover such as water). Once identified, it is then possible to evaluate the eight pixels surrounding the flagged pixel, as shown below:

BV(i,j,k) = int [ (1/8) * Σ(n=1..8) BV_n ]

where BV_n (n = 1, ..., 8) are the brightness values of the eight pixels surrounding the flagged bad pixel. The equation assigns the bad pixel the integer mean of its eight neighbours; it is applied only after the bad pixel has been flagged.
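A minimal sketch of this flag-and-average repair, assuming the band is held as a NumPy array and that a value of 0 marks shot noise (the function name and the choice to skip flagged neighbours are illustrative, not from the source):

```python
import numpy as np

def repair_shot_noise(band, bad_value=0):
    """Replace flagged bad pixels with the integer mean of their neighbours."""
    band = band.astype(float)
    repaired = band.copy()
    # Flag every pixel whose value equals the assumed shot-noise value.
    bad_rows, bad_cols = np.where(band == bad_value)
    for i, j in zip(bad_rows, bad_cols):
        # 3 x 3 neighbourhood around the bad pixel (clipped at the edges),
        # excluding any neighbours that are themselves flagged.
        window = band[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
        neighbours = window[window != bad_value]
        if neighbours.size > 0:
            repaired[i, j] = int(neighbours.mean())
    return repaired.astype(np.uint8)
```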

Figure: a) Landsat Thematic Mapper band 7 (2.08-2.35 µm) image of the Santee Delta in South Carolina. One of the 16 detectors exhibits serious striping and an absence of brightness values at pixel locations along a scan line. b) An enlarged view of the bad pixels with the brightness values of the eight surrounding pixels annotated. c) The brightness values of the bad pixels after shot noise removal. This image was not destriped.

    Line start/stop problems:

Occasionally, scanning systems fail to collect data at the beginning or end of a scan line, or they place the pixel data at inappropriate locations along the scan line. For example, all of the pixels in a scan line might be systematically shifted just one pixel to the right. This is called a line-start problem. Also, a detector may abruptly stop collecting data somewhere along a scan and produce results similar to the line or column drop-out discussed below. Ideally, when data are not collected, the sensor system would be programmed to remember what was not collected and place any good data in their proper geometric locations along the scan. Unfortunately, this is not always the case. For example, the first pixel (column 1) in band k on line i (i.e., BV(i,1,k)) might be improperly located at column 50 (i.e., BV(i,50,k)). If the line-start problem is always associated with a horizontal bias of 50 columns, it can be corrected using a simple horizontal adjustment, as sketched below. However, if the amount of the line-start displacement is random, it is difficult to restore the data without extensive human interaction on a line-by-line basis. A considerable amount of MSS data collected by Landsats 2 and 3 exhibits line-start problems.
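A sketch of the constant-bias case, assuming the band is a NumPy array and the 50-column offset is known (the fill value for the vacated right-hand columns is an assumption):

```python
import numpy as np

def fix_line_start(band, offset=50, fill_value=0):
    """Shift every scan line left by a constant line-start offset.

    Pixel values that were recorded past the right-hand edge are lost; the
    vacated columns on the right are filled with fill_value.
    """
    corrected = np.full_like(band, fill_value)
    corrected[:, :band.shape[1] - offset] = band[:, offset:]
    return corrected
```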

Figure: Infrared imagery of the Four Mile Creek thermal effluent plume entering the Savannah River.

    Line or column drop-outs:

An entire line containing no spectral information may be produced if an individual detector in a scanning system (e.g., Landsat MSS or Landsat 7 ETM+) fails to function properly. If a detector in a linear array (e.g., SPOT XS, IRS-1C, QuickBird) fails to function, this can result in an entire column of data with no spectral information. The bad line or column is commonly called a line or column drop-out and contains brightness values equal to zero. For example, if one of the 16 detectors in the Landsat Thematic Mapper sensor system fails to function during scanning, the result is a brightness value of zero for every pixel, j, in a particular line, i. This line drop-out would appear as a completely black line in band k of the imagery. This is a serious condition because there is no way to restore data that were never acquired. However, it is possible to improve the visual interpretability of the data by introducing estimated brightness values for each bad scan line.

It is first necessary to locate each bad line in the dataset. A simple thresholding algorithm makes a pass through the dataset and flags any scan line having a mean brightness value at or near zero. Once identified, it is then possible to evaluate the output for the pixel in the preceding line (BV(i-1,j,k)) and the succeeding line (BV(i+1,j,k)) and assign the output pixel (BV(i,j,k)) in the drop-out line the average of these two brightness values:

BV(i,j,k) = int [ (BV(i-1,j,k) + BV(i+1,j,k)) / 2 ]
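A sketch of this two-step repair (flag lines with a near-zero mean, then average the neighbouring lines), assuming a NumPy array; the threshold value is an assumption:

```python
import numpy as np

def repair_line_dropouts(band, mean_threshold=1.0):
    """Replace drop-out lines with the average of the adjacent scan lines."""
    band = band.astype(float)
    repaired = band.copy()
    line_means = band.mean(axis=1)
    for i in np.where(line_means <= mean_threshold)[0]:
        if 0 < i < band.shape[0] - 1:          # both neighbours must exist
            repaired[i, :] = np.rint((band[i - 1, :] + band[i + 1, :]) / 2.0)
    return repaired.astype(np.uint8)
```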

Partial line or column drop-outs:

This is similar to a line or column drop-out, but in this case only a portion of the line or column is affected.

Line or column striping:

Sometimes a detector does not fail completely, but simply goes out of radiometric adjustment. For example, a detector might record spectral measurements over a dark, deep body of water that are almost uniformly 20 brightness values greater than those of the other detectors for the same band. The result would be an image with systematic, noticeable lines that are brighter than adjacent lines. This is referred to as n-line striping. The maladjusted line contains valuable information, but should be corrected to have approximately the same radiometric scale as the data collected by the properly calibrated detectors associated with the same band.

To repair systematic n-line striping, it is first necessary to identify the miscalibrated scan lines in the scene. This is usually accomplished by computing a histogram of the values for each of the n detectors that collected data over the entire scene (ideally, this would take place over a homogeneous area, such as a body of water). If one detector's mean or median is significantly different from the others, it is probable that this detector is out of adjustment. Consequently, every line and pixel in the scene recorded by the maladjusted detector may require a bias (additive or subtractive) correction or a more severe gain (multiplicative) correction. This type of n-line striping correction a) adjusts all the bad scan lines so that they have approximately the same radiometric scale as the correctly collected data and b) improves the visual interpretability of the data; it simply looks better. There is no easy way to repair non-systematic striping.
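For the systematic case just described, a simple gain/bias destriping sketch, assuming a NumPy band in which detector d recorded every n-th line starting at line d (matching each detector's statistics to the scene-wide statistics is one common choice, not necessarily the author's):

```python
import numpy as np

def destripe(band, n_detectors=16):
    """Rescale each detector's lines to the scene-wide mean and spread."""
    band = band.astype(float)
    out = band.copy()
    scene_mean, scene_std = band.mean(), band.std()
    for d in range(n_detectors):
        lines = band[d::n_detectors, :]            # lines from detector d
        det_mean, det_std = lines.mean(), lines.std()
        gain = scene_std / det_std if det_std > 0 else 1.0
        bias = scene_mean - gain * det_mean        # additive correction
        out[d::n_detectors, :] = gain * lines + bias
    return out
```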

    Correcting atmospheric error

There are several ways to atmospherically correct remotely sensed data. Some are relatively straightforward, while others are complex, being founded on physical principles and requiring a significant amount of information to function properly. This discussion focuses on two major types of atmospheric correction:

- Absolute atmospheric correction
- Relative atmospheric correction

    Absolute atmospheric correction:

Solar radiation is largely unaffected as it travels through the vacuum of space. When it interacts with the Earth's atmosphere, however, it is selectively scattered and absorbed. The sum of these two forms of energy loss is called atmospheric attenuation. Atmospheric attenuation may (1) make it difficult to relate hand-held in situ spectroradiometer measurements to remote measurements, (2) make it difficult to extend spectral signatures through space and time, and (3) have an impact on classification accuracy within a scene if atmospheric attenuation varies significantly throughout the image.

The general goal of absolute radiometric correction is to turn the digital brightness values (or DN) recorded by a remote sensing system into scaled surface reflectance values. These values can then be compared with, or used in conjunction with, scaled surface reflectance values obtained anywhere else on the planet.


    Radiative transfer-based atmospheric correction algorithms

Much research has been carried out to address the problem of correcting images for atmospheric effects. These efforts have resulted in a number of atmospheric radiative transfer codes (models) that can provide realistic estimates of the effects of atmospheric scattering and absorption on satellite imagery. Once these effects have been identified for a specific date of imagery, each band and/or pixel in the scene can be adjusted to remove the effects of scattering and/or absorption. The image is then considered to be atmospherically corrected.

Unfortunately, the application of these codes to a specific scene and date requires knowledge of both the sensor spectral profile and the atmospheric properties at the time of acquisition. Atmospheric properties are difficult to acquire even when planned for, and for most historic satellite data they are not available. Even today, accurate scaled surface reflectance retrieval is not operational for the majority of satellite image sources used for land-cover change detection. An exception is NASA's Moderate Resolution Imaging Spectroradiometer (MODIS), for which surface reflectance products are available.

Most current radiative transfer-based atmospheric correction algorithms can compute much of the required information if a) the user provides fundamental atmospheric characteristic information to the program or b) certain atmospheric absorption bands are present in the remote sensing dataset. For example, most radiative transfer-based atmospheric correction algorithms require that the user provide the following (a hypothetical parameter set illustrating these inputs is sketched after the list):

- latitude and longitude of the remotely sensed image scene
- date and exact time of remote sensing data collection
- image acquisition altitude (e.g., 20 km AGL)
- mean elevation of the scene (e.g., 200 m ASL)
- an atmospheric model (e.g., mid-latitude summer, mid-latitude winter, tropical)
- radiometrically calibrated image radiance data (i.e., the data must be in units of W m-2 µm-1 sr-1)
- data about each specific band (i.e., its mean wavelength and full-width at half-maximum, FWHM)
- local atmospheric visibility at the time of remote sensing data collection (e.g., 10 km, obtained from a nearby airport if possible).
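Purely as an illustration of these inputs, a hypothetical parameter record is sketched below; the field names and values are invented and are not tied to the interface of any particular correction program:

```python
# Hypothetical parameter set for a radiative transfer-based correction run.
# Keys and values are illustrative only.
scene_parameters = {
    "latitude_deg": 6.45,
    "longitude_deg": 3.39,
    "acquisition_time_utc": "2018-01-15T09:45:00Z",
    "sensor_altitude_km_agl": 20.0,
    "mean_elevation_m_asl": 200.0,
    "atmospheric_model": "tropical",          # or mid-latitude summer/winter
    "radiance_units": "W m-2 um-1 sr-1",      # calibrated radiance required
    "band_centres_um": [0.485, 0.560, 0.660, 0.830],
    "band_fwhm_um": [0.07, 0.08, 0.06, 0.14],
    "visibility_km": 10.0,                    # e.g. from a nearby airport
}
```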


These parameters are then input to the selected atmospheric model (e.g., mid-latitude summer) and used to compute the absorption and scattering characteristics of the atmosphere at the instant of remote sensing data collection. These atmospheric characteristics are then used to invert the remote sensing radiance to scaled surface reflectance. Many of these atmospheric correction programs derive the scattering and absorption information they require from robust atmospheric radiative transfer codes such as MODTRAN 4+ or the Second Simulation of the Satellite Signal in the Solar Spectrum (6S).

Examples of these atmospheric correction programs include ACORN, ATCOR, ATREM and FLAASH.

Figure: a) Image containing substantial haze prior to atmospheric correction. b) Image after atmospheric correction using ATCOR (courtesy Leica Geosystems and DLR, the German Aerospace Centre).

    Empirical line calibration

Absolute atmospheric correction may also be performed using empirical line calibration (ELC), which forces the remote sensing image data to match in situ spectral reflectance measurements, ideally obtained at approximately the same time and on the same date as the remote sensing overflight. Empirical line calibration is based on the equation:

    Reflectance (field spectrum) = gain x radiance (image) + offset
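A sketch of how the ELC gain and offset for one band might be estimated by regressing the field-measured reflectances of calibration targets against the image radiance at those targets (array names and target values are illustrative):

```python
import numpy as np

def empirical_line(band_radiance, target_radiance, target_reflectance):
    """Fit reflectance = gain * radiance + offset and apply it to a band."""
    gain, offset = np.polyfit(target_radiance, target_reflectance, deg=1)
    return gain * band_radiance + offset

# Example with two in situ targets (one dark, one bright):
target_radiance = np.array([12.0, 85.0])       # image radiance at the targets
target_reflectance = np.array([0.04, 0.62])    # field-measured reflectance
# corrected = empirical_line(band_radiance, target_radiance, target_reflectance)
```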

    Relative atmospheric correction:

Relative atmospheric correction is used when the data required for absolute atmospheric correction are not available. Relative atmospheric correction may be carried out as:

- Single-image normalization using histogram adjustment
- Multiple-date image normalization using regression

Single-image normalization using histogram adjustment:

The method is based on the fact that infrared data (> 0.7 µm) are largely free of atmospheric scattering effects, whereas the visible region (0.4-0.7 µm) is strongly influenced by them. Use Dark Subtract to apply atmospheric scattering corrections to the image data. The digital number to subtract from each band can be either the band minimum, an average based upon a user-defined region of interest, or a specific value.
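A minimal dark-subtraction sketch, assuming a NumPy band; by default it subtracts the band minimum, but a user-supplied value (or an ROI average computed beforehand) can be passed instead, mirroring the options listed above:

```python
import numpy as np

def dark_subtract(band, value=None):
    """Subtract an assumed path-radiance (haze) value from a band."""
    if value is None:
        value = band.min()                 # classic dark-object subtraction
    return np.clip(band.astype(float) - value, 0, None)
```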

Multiple-date image normalization using regression:

This involves selecting a base image and then transforming the spectral characteristics of all other images obtained on different dates to have approximately the same radiometric scale as the base image.

Selecting pseudo-invariant features (PIFs), or regions (points) of interest, is important. Important things to note include:

- The spectral characteristics of PIFs change very little through time (e.g., a deep water body, bare soil, a rooftop).
- PIFs should be at the same elevation as the other features in the scene.
- PIFs should have no or only sparse vegetation.
- PIFs must be relatively flat.

After this, the PIFs are used to normalize the multiple-date imagery, as sketched below.
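A sketch of the regression step, assuming co-registered NumPy bands and a boolean mask marking the PIF pixels (names are illustrative):

```python
import numpy as np

def normalize_to_base(subject_band, base_band, pif_mask):
    """Map a later-date band onto the radiometric scale of the base image."""
    x = subject_band[pif_mask].astype(float)   # PIF values, subject date
    y = base_band[pif_mask].astype(float)      # PIF values, base date
    gain, offset = np.polyfit(x, y, deg=1)     # y ~ gain * x + offset
    return gain * subject_band.astype(float) + offset
```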

    Other relative atmospheric correction methods include:

    Flat field calibration:

This is used to normalize images to an area of known "flat" reflectance. It is particularly effective for reducing hyperspectral data to relative reflectance. The method requires that a Region of Interest (ROI) be selected prior to execution. The average spectrum from the ROI is used as the reference spectrum, which is then divided into the spectrum at each pixel of the image.

IAR (Internal Average Relative) Reflectance calibration:


This is used to normalize images to the scene's average spectrum. It is particularly effective for reducing hyperspectral data to relative reflectance in an area where no ground measurements exist and little is known about the scene, and it works best for arid areas with no vegetation. An average spectrum is calculated from the entire scene and used as the reference spectrum, which is then divided into the spectrum at each pixel of the image.
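Both methods reduce to dividing each pixel spectrum by a reference spectrum; a sketch, assuming a (rows, columns, bands) NumPy hyperspectral cube and, for the flat-field case, a boolean ROI mask:

```python
import numpy as np

def flat_field(cube, roi_mask):
    """Divide each pixel spectrum by the average spectrum of the 'flat' ROI."""
    reference = cube[roi_mask].mean(axis=0)             # (bands,) spectrum
    return cube / reference

def iarr(cube):
    """Divide each pixel spectrum by the scene-average spectrum."""
    reference = cube.reshape(-1, cube.shape[-1]).mean(axis=0)
    return cube / reference
```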

    Correcting topographic error

Topographic slope and aspect also introduce radiometric distortion (for example, areas in shadow). The goal of a slope-aspect correction is to remove topographically induced illumination variation so that two objects having the same reflectance properties show the same brightness value (or DN) in the image despite their different orientation to the Sun's position.

Image acquisition geometry

- Sun zenith angle is the angle of the Sun away from the vertical.
- Sun elevation angle is the angle of the Sun away from the horizontal.
- Sensor elevation angle is the angle of the sensor away from the horizontal.
- Sensor azimuth angle and Sun azimuth angle are measured clockwise from north.


Cosine correction:

L_H = L_T * (cos θ_0 / cos i)

Minnaert correction:

L_H = L_T * (cos θ_0 / cos i)^k

where L_H is the radiance for a horizontal surface, L_T the radiance observed over the sloped terrain, θ_0 the Sun's zenith angle, i the incidence angle between the solar beam and the surface normal, and k the Minnaert constant.


Statistical-empirical correction:

L_H = L_T - (m cos i + b) + mean(L_T)

where m and b are the slope and intercept of a regression of the observed radiance L_T on cos i, and mean(L_T) is the average radiance of the terrain.

C correction:

L_H = L_T * (cos θ_0 + c) / (cos i + c)

Computing the cosine of the solar incidence angle

cos i = sin δ sin φ cos s - sin δ cos φ sin s cos γ + cos δ cos φ cos s cos ω + cos δ sin φ sin s cos γ cos ω + cos δ sin s sin γ sin ω

where

δ = declination of the Earth (positive in summer in the northern hemisphere)
φ = latitude of the pixel (positive for the northern hemisphere)
s = slope in radians, where s = 0 is horizontal and s = π/2 is vertical downward (s is always positive and represents a downward slope in any direction)
γ = surface azimuth angle, the deviation of the normal to the surface from the local meridian, where γ = 0 for an aspect that is due south, γ is negative for eastern and positive for western aspects; γ = -π/2 represents an east-facing slope, γ = +π/2 a west-facing slope, and γ = ±π a north-facing slope
ω = hour angle, with ω = 0 at solar noon, negative in the morning and positive in the afternoon
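A sketch of the slope-aspect corrections above, assuming cos i has already been computed per pixel (for instance from the formula just given, with slope and aspect derived from a DEM); clipping cos i away from zero is a practical guard that is not part of the original formulas:

```python
import numpy as np

def cosine_correction(radiance, cos_i, sun_zenith_deg):
    """L_H = L_T * cos(theta_0) / cos(i)."""
    cos_theta0 = np.cos(np.radians(sun_zenith_deg))
    return radiance * cos_theta0 / np.clip(cos_i, 1e-3, None)

def minnaert_correction(radiance, cos_i, sun_zenith_deg, k=0.5):
    """L_H = L_T * (cos(theta_0) / cos(i))**k, k being the Minnaert constant."""
    cos_theta0 = np.cos(np.radians(sun_zenith_deg))
    return radiance * (cos_theta0 / np.clip(cos_i, 1e-3, None)) ** k

def c_correction(radiance, cos_i, sun_zenith_deg, c):
    """L_H = L_T * (cos(theta_0) + c) / (cos(i) + c)."""
    cos_theta0 = np.cos(np.radians(sun_zenith_deg))
    return radiance * (cos_theta0 + c) / (np.clip(cos_i, 1e-3, None) + c)
```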


Figure: (a) Original image, (b) result of cosine correction, (c) result of Minnaert correction, (d) result of two-stage normalization.

GEOMETRIC CORRECTION

Geometric error in the image arises from several sources. These include the curvature and rotation of the Earth, the wide field of view and platform instability (both of which are bigger problems for airborne sensors than for satellite instruments), and panoramic effects of scanning instruments. Radar data are affected by the relationship between terrain slope and look angle. While the theory behind the correction of geometric distortions is usually straightforward, its implementation may not be. One problem is registering the image to a rectified grid. (The same problem arises when two or more images or maps from different sources are overlain, another common preliminary to data analysis.) Polynomial interpolation methods are satisfactory for single-band images and have been used for multispectral broadband data as well. For hyperspectral data, however, simpler nearest-neighbour resampling schemes may be preferred because these do not distort the spectral characteristics that will be used for spectral matching and detailed identification of objects in the scene. Detailed geometric correction models are developed for instruments with high spatial resolution.

Some of these errors can be corrected by using the ephemeris of the platform and known internal sensor distortion characteristics. Other errors can only be corrected by matching image coordinates of physical features recorded by the image to the geographic coordinates of the same features collected from a map or global positioning system (GPS).

Geometric errors that can be corrected using sensor characteristics and ephemeris data include scan skew, mirror-scan velocity variance, panoramic distortion, platform velocity, and perspective geometry.

    Geometric errors

Remote sensing data are affected by geometric distortions due to sensor geometry, scanner and platform instabilities, earth rotation, earth curvature, scan skew, mirror-scan velocity variance, panoramic distortion, platform velocity, perspective, etc. A few of these errors are discussed below.

    Scan skew:

This is caused by the forward motion of the platform during the time required for each mirror sweep. The ground swath is not normal to the ground track but is slightly skewed, producing cross-scan geometric distortion.

Mirror scan velocity variance:

In this case, the mirror scanning rate is usually not constant across a given scan, producing along-scan geometric distortion.


    Panoramic distortion:

Here, the ground area imaged is proportional to the tangent of the scan angle rather than to the angle itself. Because data are sampled at regular intervals, this produces along-scan distortion.

Platform velocity:

If the speed of the platform changes, the ground track covered by successive mirror scans changes, producing along-track scale distortion.

Earth rotation:

The Earth rotates as the sensor scans the terrain. This results in a shift of the ground swath being scanned, causing along-scan distortion.

Perspective:

For some applications it is desirable to have images represent the projection of points on the Earth onto a plane tangent to the Earth, with all projection lines normal to the plane. This introduces along-scan distortion.

Some of these distortions are corrected by the image supplier, and others can be corrected by referencing the images to existing maps.

Remotely sensed images in raw format contain no reference to their location. In order to integrate these data with other data in a GIS, it is necessary to correct and adapt them geometrically so that they have a resolution and projection comparable to the other data sets. The geometry of a satellite image can be distorted with respect to a north-south oriented map by:

- the heading of the satellite orbit at a given position on earth (rotation)
- a change in resolution of the input image (scaling)
- a difference in position of the image and map (shift)
- skew caused by earth rotation (shear).

The different distortions of the image geometry do not occur in a particular sequence but happen together, and therefore they cannot be corrected stepwise. The correction of all distortions at once is carried out by a transformation which combines all the separate corrections. The transformation most frequently used to correct satellite imagery is a first-order transformation, also called an affine transformation. This transformation can be given by the following polynomials:

X = a0 + a1*rn + a2*cn
Y = b0 + b1*rn + b2*cn

where
rn is the row number,
cn is the column number, and
X and Y are the map coordinates.

To define the transformation, it is necessary to compute the coefficients of the polynomials, e.g. a0, a1 and a2. For the computation, a number of points have to be selected that can be located accurately on the map (X, Y) and that are also identifiable in the image (row, column). The minimum number of points required to compute the coefficients of an affine transform is three, but in practice more are needed. By selecting more points than required, the additional data are used to obtain the transformation with the smallest overall positional error in the selected points. Such errors arise from imprecise positioning of the mouse pointer in the image and from inaccurate measurement of coordinates on the map. The overall accuracy of the transformation is indicated by the average of the errors in the reference points, the so-called Root Mean Square Error (RMSE) or Sigma. If the accuracy of the transformation is acceptable, the transformation is linked with the image and a reference can be made for each pixel to the given coordinate system, so the image is geo-referenced. After geo-referencing, the image still has its original geometry and the pixels keep their initial position in the image with respect to row and column indices.
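A sketch of estimating the affine coefficients and the RMSE from a set of ground control points by least squares (a NumPy formulation; the function name is illustrative):

```python
import numpy as np

def fit_affine(rows, cols, X, Y):
    """Least-squares estimate of (a0, a1, a2) and (b0, b1, b2), plus RMSE."""
    rows, cols = np.asarray(rows, float), np.asarray(cols, float)
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    A = np.column_stack([np.ones_like(rows), rows, cols])
    a, *_ = np.linalg.lstsq(A, X, rcond=None)   # X = a0 + a1*rn + a2*cn
    b, *_ = np.linalg.lstsq(A, Y, rcond=None)   # Y = b0 + b1*rn + b2*cn
    residuals = np.hypot(A @ a - X, A @ b - Y)  # per-point positional error
    rmse = float(np.sqrt(np.mean(residuals ** 2)))
    return a, b, rmse
```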

If the image is to be combined with data in another coordinate system or geo-reference, a transformation has to be applied. This results in a new image in which the pixels are stored in a new line/column geometry related to the other geo-reference (containing information on the coordinates and pixel size). This new image is created by resampling, i.e. by applying an interpolation method. The interpolation method is used to compute the radiometric values of the pixels in the new image based on the DN values in the original image. After this, the new image is called geo-coded and it can be overlaid with data having the same coordinate system.
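A conceptual sketch of geo-coding with nearest-neighbour resampling: each cell of the new grid is mapped back into the original image through the inverse of the affine transformation and takes the DN of the nearest source pixel (explicit loops are used for clarity; the coefficient arrays are those returned by the fit sketched earlier):

```python
import numpy as np

def geocode_nearest(image, a, b, out_shape, pixel_size, origin):
    """Resample an image onto a map-aligned grid using nearest neighbour."""
    x0, y0 = origin                          # map coordinates of grid origin
    out = np.zeros(out_shape, dtype=image.dtype)
    # Forward model: X = a0 + a1*rn + a2*cn, Y = b0 + b1*rn + b2*cn,
    # so the inverse mapping solves a 2 x 2 linear system for (rn, cn).
    M_inv = np.linalg.inv(np.array([[a[1], a[2]], [b[1], b[2]]]))
    for r_out in range(out_shape[0]):
        for c_out in range(out_shape[1]):
            X = x0 + c_out * pixel_size
            Y = y0 - r_out * pixel_size
            rn, cn = M_inv @ np.array([X - a[0], Y - b[0]])
            r_src, c_src = int(round(rn)), int(round(cn))
            if 0 <= r_src < image.shape[0] and 0 <= c_src < image.shape[1]:
                out[r_out, c_out] = image[r_src, c_src]
    return out
```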

For satellite imagery from optical systems, it is advisable to use a linear transformation. Higher-order transformations need much more computation time and in many cases they will enlarge errors. The reference points should be well distributed over the image to minimize the overall error. A good choice is a pattern in which the points lie along the borders of the image with a few in the centre.

    Affine transformation

This is a six-parameter transformation with the unknown parameters a, b, c, d, e and f. Each transformation requires a minimum number of reference points (3 for an affine, 6 for a second-order and 10 for a third-order polynomial). If more points are selected, the residuals and the derived Root Mean Square Error (RMSE) or Sigma may be used to obtain the best estimates.

| x |   | a  b | | u |   | e |
| y | = | c  d | | v | + | f |

i.e. x = a*u + b*v + e and y = c*u + d*v + f.

The affine transformation modifies the orthogonal case by using different scale factors in the x and y directions. It corrects for shrinkage by means of the scale factors, applies a translation for the shift of the origin, and performs a rotation through an angle (plus a small angular correction for non-orthogonality) to orient the axes in the u, v photo system.
