ERDAS Field Guide™

December 2010

Copyright © 2010 ERDAS, Inc.

All rights reserved.

Printed in the United States of America.

The information contained in this document is the exclusive property of ERDAS, Inc. This work is protected under United States copyright law and other international copyright treaties and conventions. No part of this work may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or by any information storage or retrieval system, except as expressly permitted in writing by ERDAS, Inc. All requests should be sent to the attention of:

Manager, Technical Documentation
ERDAS, Inc.
5051 Peachtree Corners Circle
Suite 100
Norcross, GA 30092-2500 USA

The information contained in this document is subject to change without notice.

Government Reserved Rights. MrSID technology incorporated in the Software was developed in part through a project at the Los Alamos National Laboratory, funded by the U.S. Government, managed under contract by the University of California (University), and is under exclusive commercial license to LizardTech, Inc. It is used under license from LizardTech. MrSID is protected by U.S. Patent No. 5,710,835. Foreign patents pending. The U.S. Government and the University have reserved rights in MrSID technology, including without limitation: (a) The U.S. Government has a non-exclusive, nontransferable, irrevocable, paid-up license to practice or have practiced throughout the world, for or on behalf of the United States, inventions covered by U.S. Patent No. 5,710,835 and has other rights under 35 U.S.C. § 200-212 and applicable implementing regulations; (b) If LizardTech's rights in the MrSID Technology terminate during the term of this Agreement, you may continue to use the Software. Any provisions of this license which could reasonably be deemed to do so would then protect the University and/or the U.S. Government; and (c) The University has no obligation to furnish any know-how, technical assistance, or technical data to users of MrSID software and makes no warranty or representation as to the validity of U.S. Patent 5,710,835 nor that the MrSID Software will not infringe any patent or other proprietary right. For further information about these provisions, contact LizardTech, 1008 Western Ave., Suite 200, Seattle, WA 98104.

ERDAS, ERDAS IMAGINE, Stereo Analyst, IMAGINE Essentials, IMAGINE Advantage, IMAGINE Professional, IMAGINE VirtualGIS, Mapcomposer, Viewfinder, and Imagizer are registered trademarks of ERDAS, Inc.

Other companies and products mentioned herein are trademarks or registered trademarks of their respective owners.

Table of Contents

Table of Contents . . . . . . . . . . iii
List of Figures . . . . . . . . . . xix
List of Tables . . . . . . . . . . xxvii
Preface . . . . . . . . . . xxix

    Introduction . . . . . . . . . . xxix
    Conventions Used in this Book . . . . . . . . . . xxix

Raster Data . . . . . . . . . . 1
    Introduction . . . . . . . . . . 1
    Image Data . . . . . . . . . . 1
        Bands . . . . . . . . . . 2
        Coordinate Systems . . . . . . . . . . 3
    Remote Sensing . . . . . . . . . . 4
        Absorption / Reflection Spectra . . . . . . . . . . 6
    Resolution . . . . . . . . . . 14
        Spectral Resolution . . . . . . . . . . 15
        Spatial Resolution . . . . . . . . . . 15
        Radiometric Resolution . . . . . . . . . . 16
        Temporal Resolution . . . . . . . . . . 17
    Data Correction . . . . . . . . . . 18
        Line Dropout . . . . . . . . . . 18
        Striping . . . . . . . . . . 18
    Data Storage . . . . . . . . . . 18
        Storage Formats . . . . . . . . . . 19
        Storage Media . . . . . . . . . . 22
        Calculating Disk Space . . . . . . . . . . 25
        ERDAS IMAGINE Format (.img) . . . . . . . . . . 25
    Image File Organization . . . . . . . . . . 29
        Consistent Naming Convention . . . . . . . . . . 29
        Keeping Track of Image Files . . . . . . . . . . 30
    Geocoded Data . . . . . . . . . . 31
    Using Image Data in GIS . . . . . . . . . . 31
        Subsetting and Mosaicking . . . . . . . . . . 31
        Enhancement . . . . . . . . . . 32

        Multispectral Classification . . . . . . . . . . 33
    Editing Raster Data . . . . . . . . . . 33
        Editing Continuous (Athematic) Data . . . . . . . . . . 34
        Interpolation Techniques . . . . . . . . . . 34
    Image Compression . . . . . . . . . . 36
        Dynamic Range Run-Length Encoding (DR RLE) . . . . . . . . . . 36
        ECW Compression . . . . . . . . . . 38

Vector Data . . . . . . . . . . 41
    Introduction . . . . . . . . . . 41
        Points . . . . . . . . . . 42
        Lines . . . . . . . . . . 42
        Polygons . . . . . . . . . . 42
        Vertex . . . . . . . . . . 42
        Coordinates . . . . . . . . . . 42
        Vector Layers . . . . . . . . . . 43
        Topology . . . . . . . . . . 43
        Vector Files . . . . . . . . . . 43
    Attribute Information . . . . . . . . . . 45
    Displaying Vector Data . . . . . . . . . . 47
        Color Schemes . . . . . . . . . . 47
        Symbolization . . . . . . . . . . 47
    Vector Data Sources . . . . . . . . . . 48
    Digitizing . . . . . . . . . . 49
        Tablet Digitizing . . . . . . . . . . 49
        Screen Digitizing . . . . . . . . . . 51
    Imported Vector Data . . . . . . . . . . 51
    Raster to Vector Conversion . . . . . . . . . . 52
    Other Vector Data Types . . . . . . . . . . 52
        Shapefile Vector Format . . . . . . . . . . 52
        SDE . . . . . . . . . . 53
        SDTS . . . . . . . . . . 53
        ArcGIS Integration . . . . . . . . . . 54

Raster and Vector Data Sources . . . . . . . . . . 55
    Importing and Exporting . . . . . . . . . . 55
        Raster Data . . . . . . . . . . 55

        Raster Data Sources . . . . . . . . . . 62
        Annotation Data . . . . . . . . . . 63
        Generic Binary Data . . . . . . . . . . 63
        Vector Data . . . . . . . . . . 64
    Optical Satellite Data . . . . . . . . . . 66
        Satellite System . . . . . . . . . . 66
        Satellite Characteristics . . . . . . . . . . 67
        ALOS . . . . . . . . . . 68
        ASTER . . . . . . . . . . 70
        EROS A and EROS B . . . . . . . . . . 71
        FORMOSAT-2 . . . . . . . . . . 71
        GeoEye-1 . . . . . . . . . . 72
        IKONOS . . . . . . . . . . 73
        IRS . . . . . . . . . . 73
        KOMPSAT 1-2 . . . . . . . . . . 75
        Landsat 1-5 . . . . . . . . . . 76
        Landsat 7 . . . . . . . . . . 80
        LPGS and NLAPS Processing Systems . . . . . . . . . . 81
        NOAA Polar Orbiter Data . . . . . . . . . . 82
        OrbView-3 . . . . . . . . . . 84
        QuickBird . . . . . . . . . . 85
        RapidEye . . . . . . . . . . 86
        SeaWiFS . . . . . . . . . . 87
        SPOT 1-3 . . . . . . . . . . 87
        SPOT 4 . . . . . . . . . . 89
        SPOT 5 . . . . . . . . . . 90
        WorldView-1 . . . . . . . . . . 90
        WorldView-2 . . . . . . . . . . 91
    Radar Satellite Data . . . . . . . . . . 92
        Advantages of Using Radar Data . . . . . . . . . . 93
        Radar Sensor Types . . . . . . . . . . 93
        Speckle Noise . . . . . . . . . . 96
        Applications for Radar Data . . . . . . . . . . 97
        Radar Sensors . . . . . . . . . . 97
    Image Data from Aircraft . . . . . . . . . . 107
        Aircraft Radar Imagery . . . . . . . . . . 107
        Aircraft Optical Imagery . . . . . . . . . . 108
    Image Data from Scanning . . . . . . . . . . 109

        Photogrammetric Scanners . . . . . . . . . . 109
        Desktop Scanners . . . . . . . . . . 109
        Aerial Photography . . . . . . . . . . 110
        DOQs . . . . . . . . . . 110
    ADRG Data . . . . . . . . . . 111
        ARC System . . . . . . . . . . 111
        ADRG File Format . . . . . . . . . . 112
        .OVR (overview) . . . . . . . . . . 112
        .IMG (scanned image data) . . . . . . . . . . 113
        .Lxx (legend data) . . . . . . . . . . 113
        ADRG File Naming Convention . . . . . . . . . . 115
    ADRI Data . . . . . . . . . . 116
        .OVR (overview) . . . . . . . . . . 118
        .IMG (scanned image data) . . . . . . . . . . 118
        ADRI File Naming Convention . . . . . . . . . . 118
    Raster Product Format . . . . . . . . . . 119
        CIB . . . . . . . . . . 121
        CADRG . . . . . . . . . . 121
    Topographic Data . . . . . . . . . . 121
        DEM . . . . . . . . . . 122
        DTED . . . . . . . . . . 123
        Using Topographic Data . . . . . . . . . . 123
    GPS Data . . . . . . . . . . 124
        Introduction . . . . . . . . . . 124
        Satellite Position . . . . . . . . . . 124
        Differential Correction . . . . . . . . . . 125
        Applications of GPS Data . . . . . . . . . . 125
    Ordering Raster Data . . . . . . . . . . 127
        Addresses to Contact . . . . . . . . . . 128
    Raster Data from Other Software Vendors . . . . . . . . . . 131
        ERDAS Ver. 7.X . . . . . . . . . . 131
        GRID and GRID Stacks . . . . . . . . . . 132
        JFIF (JPEG) . . . . . . . . . . 133
        JPEG2000 . . . . . . . . . . 133
        MrSID . . . . . . . . . . 134
        SDTS . . . . . . . . . . 135
        SUN Raster . . . . . . . . . . 135

        TIFF . . . . . . . . . . 136
        GeoTIFF . . . . . . . . . . 137
    Vector Data from Other Software Vendors . . . . . . . . . . 138
        ARCGEN . . . . . . . . . . 139
        AutoCAD (DXF) . . . . . . . . . . 139
        DLG . . . . . . . . . . 141
        ETAK . . . . . . . . . . 141
        IGES . . . . . . . . . . 142
        TIGER . . . . . . . . . . 142

Image Display . . . . . . . . . . 145
        Display Memory Size . . . . . . . . . . 145
        Pixel . . . . . . . . . . 146
        Colors . . . . . . . . . . 147
        Colormap and Colorcells . . . . . . . . . . 147
        Display Types . . . . . . . . . . 149
        8-bit PseudoColor . . . . . . . . . . 150
        24-bit DirectColor . . . . . . . . . . 150
        24-bit TrueColor . . . . . . . . . . 151
        PC Displays . . . . . . . . . . 152
    Displaying Raster Layers . . . . . . . . . . 153
        Continuous Raster Layers . . . . . . . . . . 153
        Thematic Raster Layers . . . . . . . . . . 158
    Using the Viewer . . . . . . . . . . 160
        Pyramid Layers . . . . . . . . . . 162
        Dithering . . . . . . . . . . 166
        Viewing Layers . . . . . . . . . . 167
        Viewing Multiple Layers . . . . . . . . . . 168
        Zoom and Roam . . . . . . . . . . 169
        Geographic Information . . . . . . . . . . 170
        Enhancing Continuous Raster Layers . . . . . . . . . . 170
        Creating New Image Files . . . . . . . . . . 171

Geographic Information Systems . . . . . . . . . . 173
        Information vs. Data . . . . . . . . . . 174
    Data Input . . . . . . . . . . 175
    Continuous Layers . . . . . . . . . . 177
    Thematic Layers . . . . . . . . . . 178

        Statistics . . . . . . . . . . 180
    Vector Layers . . . . . . . . . . 180
    Attributes . . . . . . . . . . 181
        Raster Attributes . . . . . . . . . . 181
        Vector Attributes . . . . . . . . . . 183
    Analysis . . . . . . . . . . 183
        ERDAS IMAGINE Analysis Tools . . . . . . . . . . 183
        Analysis Procedures . . . . . . . . . . 185
    Proximity Analysis . . . . . . . . . . 186
    Contiguity Analysis . . . . . . . . . . 186
    Neighborhood Analysis . . . . . . . . . . 187
    Recoding . . . . . . . . . . 190
    Overlaying . . . . . . . . . . 191
    Indexing . . . . . . . . . . 192
    Matrix Analysis . . . . . . . . . . 194
    Modeling . . . . . . . . . . 194
    Graphical Modeling . . . . . . . . . . 195
        Model Maker Functions . . . . . . . . . . 198
        Objects . . . . . . . . . . 199
        Data Types . . . . . . . . . . 200
        Output Parameters . . . . . . . . . . 201
        Using Attributes in Models . . . . . . . . . . 201
    Script Modeling . . . . . . . . . . 203
        Statements . . . . . . . . . . 205
        Data Types . . . . . . . . . . 206
        Variables . . . . . . . . . . 206
    Vector Analysis . . . . . . . . . . 206
        Editing Vector Layers . . . . . . . . . . 206
    Constructing Topology . . . . . . . . . . 207
        Building and Cleaning Coverages . . . . . . . . . . 208

Cartography . . . . . . . . . . 211
    Types of Maps . . . . . . . . . . 211
        Thematic Maps . . . . . . . . . . 213
    Annotation . . . . . . . . . . 215
    Scale . . . . . . . . . . 216
    Legends . . . . . . . . . . 220

    Neatlines, Tick Marks, and Grid Lines . . . . . . . . . . 221
    Symbols . . . . . . . . . . 222
    Labels and Descriptive Text . . . . . . . . . . 223
        Typography and Lettering . . . . . . . . . . 223
    Projections . . . . . . . . . . 226
        Properties of Map Projections . . . . . . . . . . 227
        Projection Types . . . . . . . . . . 229
    Geographical and Planar Coordinates . . . . . . . . . . 232
    Available Map Projections . . . . . . . . . . 232
    Choosing a Map Projection . . . . . . . . . . 240
        Map Projection Uses in a GIS . . . . . . . . . . 240
        Deciding Factors . . . . . . . . . . 240
        Guidelines . . . . . . . . . . 240
    Spheroids . . . . . . . . . . 241
        Non-Earth Spheroids . . . . . . . . . . 246
    Map Composition . . . . . . . . . . 246
        Learning Map Composition . . . . . . . . . . 246
        Plan the Map . . . . . . . . . . 247
    Map Accuracy . . . . . . . . . . 248
        US National Map Accuracy Standard . . . . . . . . . . 248
        USGS Land Use and Land Cover Map Guidelines . . . . . . . . . . 249
        USDA SCS Soils Maps Guidelines . . . . . . . . . . 249
        Digitized Hardcopy Maps . . . . . . . . . . 249

Rectification . . . . . . . . . . 251
        Registration . . . . . . . . . . 251
        Georeferencing . . . . . . . . . . 252
        Latitude/Longitude . . . . . . . . . . 252
        Orthorectification . . . . . . . . . . 252
    When to Rectify . . . . . . . . . . 253
        When to Georeference Only . . . . . . . . . . 254
        Disadvantages of Rectification . . . . . . . . . . 254
        Rectification Steps . . . . . . . . . . 255
    Ground Control Points . . . . . . . . . . 255
        GCPs in ERDAS IMAGINE . . . . . . . . . . 255
        Entering GCPs . . . . . . . . . . 256
        GCP Prediction and Matching . . . . . . . . . . 257

    Polynomial Transformation . . . . . . . . . . 258
        Linear Transformations . . . . . . . . . . 259
        Nonlinear Transformations . . . . . . . . . . 262
        Effects of Order . . . . . . . . . . 264
        Minimum Number of GCPs . . . . . . . . . . 268
    Rubber Sheeting . . . . . . . . . . 269
        Triangle-Based Finite Element Analysis . . . . . . . . . . 269
        Triangulation . . . . . . . . . . 269
        Triangle-based rectification . . . . . . . . . . 270
        Linear transformation . . . . . . . . . . 270
        Nonlinear transformation . . . . . . . . . . 270
        Check Point Analysis . . . . . . . . . . 271
    RMS Error . . . . . . . . . . 271
        Residuals and RMS Error Per GCP . . . . . . . . . . 271
        Total RMS Error . . . . . . . . . . 272
        Error Contribution by Point . . . . . . . . . . 273
        Tolerance of RMS Error . . . . . . . . . . 273
        Evaluating RMS Error . . . . . . . . . . 274
    Resampling Methods . . . . . . . . . . 274
        Rectifying to Lat/Lon . . . . . . . . . . 276
        Nearest Neighbor . . . . . . . . . . 276
        Bilinear Interpolation . . . . . . . . . . 277
        Cubic Convolution . . . . . . . . . . 281
        Bicubic Spline Interpolation . . . . . . . . . . 284
    Map-to-Map Coordinate Conversions . . . . . . . . . . 286
        Conversion Process . . . . . . . . . . 287
        Vector Data . . . . . . . . . . 287

Hardcopy Output . . . . . . . . . . 289
    Printing Maps . . . . . . . . . . 289
        Scaled Maps . . . . . . . . . . 289
        Printing Large Maps . . . . . . . . . . 289
        Scale and Resolution . . . . . . . . . . 290
        Map Scaling Examples . . . . . . . . . . 291
    Mechanics of Printing . . . . . . . . . . 294
        Halftone Printing . . . . . . . . . . 294
        Continuous Tone Printing . . . . . . . . . . 294

        Contrast and Color Tables . . . . . . . . . . 295
        RGB to CMY Conversion . . . . . . . . . . 295

Map Projections . . . . . . . . . . 297
    USGS Projections . . . . . . . . . . 298
    Alaska Conformal . . . . . . . . . . 301
    Albers Conical Equal Area . . . . . . . . . . 303
    Azimuthal Equidistant . . . . . . . . . . 306
    Behrmann . . . . . . . . . . 309
    Bonne . . . . . . . . . . 311
    Cassini . . . . . . . . . . 313
    Cylindrical Equal Area . . . . . . . . . . 315
    Double Stereographic . . . . . . . . . . 317
    Eckert I . . . . . . . . . . 319
    Eckert II . . . . . . . . . . 321
    Eckert III . . . . . . . . . . 323
    Eckert IV . . . . . . . . . . 325
    Eckert V . . . . . . . . . . 327
    Eckert VI . . . . . . . . . . 329
    EOSAT SOM . . . . . . . . . . 331
    EPSG Coordinate Systems . . . . . . . . . . 332
    Equidistant Conic . . . . . . . . . . 333
    Equidistant Cylindrical . . . . . . . . . . 335
    Equirectangular (Plate Carrée) . . . . . . . . . . 336
    Gall Stereographic . . . . . . . . . . 338
    Gauss Kruger . . . . . . . . . . 339
    General Vertical Near-side Perspective . . . . . . . . . . 340
    Geographic (Lat/Lon) . . . . . . . . . . 342
    Gnomonic . . . . . . . . . . 344
    Hammer . . . . . . . . . . 346
    Interrupted Goode Homolosine . . . . . . . . . . 348
    Interrupted Mollweide . . . . . . . . . . 350
    Krovak . . . . . . . . . . 351
    Lambert Azimuthal Equal Area . . . . . . . . . . 353
    Lambert Conformal Conic . . . . . . . . . . 356
    Lambert Conic Conformal (1 Standard Parallel) . . . . . . . . . . 359

    Loximuthal . . . . . . . . . . 361
    Mercator . . . . . . . . . . 363
    Miller Cylindrical . . . . . . . . . . 366
    MGRS . . . . . . . . . . 368
    Modified Transverse Mercator . . . . . . . . . . 370
    Mollweide . . . . . . . . . . 372
    New Zealand Map Grid . . . . . . . . . . 374
    Oblated Equal Area . . . . . . . . . . 375
    Oblique Mercator (Hotine) . . . . . . . . . . 376
    Orthographic . . . . . . . . . . 379
    Plate Carrée . . . . . . . . . . 382
    Polar Stereographic . . . . . . . . . . 383
    Polyconic . . . . . . . . . . 386
    Quartic Authalic . . . . . . . . . . 388
    Robinson . . . . . . . . . . 390
    RSO . . . . . . . . . . 392
    Sinusoidal . . . . . . . . . . 393
    Space Oblique Mercator . . . . . . . . . . 395
    Space Oblique Mercator (Formats A & B) . . . . . . . . . . 397
    State Plane . . . . . . . . . . 398
    Stereographic . . . . . . . . . . 408
    Stereographic (Extended) . . . . . . . . . . 411
    Transverse Mercator . . . . . . . . . . 412
    Two Point Equidistant . . . . . . . . . . 414
    UTM . . . . . . . . . . 416
    Van der Grinten I . . . . . . . . . . 419
    Wagner IV . . . . . . . . . . 421
    Wagner VII . . . . . . . . . . 423
    Winkel I . . . . . . . . . . 425
    External Projections . . . . . . . . . . 427
    Bipolar Oblique Conic Conformal . . . . . . . . . . 429
    Cassini-Soldner . . . . . . . . . . 430
    Laborde Oblique Mercator . . . . . . . . . . 432
    Minimum Error Conformal . . . . . . . . . . 433
    Modified Polyconic . . . . . . . . . . 434
    Modified Stereographic . . . . . . . . . . 435

    Mollweide Equal Area . . . . . . . . . . 436
    Rectified Skew Orthomorphic . . . . . . . . . . 438
    Robinson Pseudocylindrical . . . . . . . . . . 439
    Southern Orientated Gauss Conformal . . . . . . . . . . 440
    Swiss Cylindrical . . . . . . . . . . 441
    Winkel’s Tripel . . . . . . . . . . 442

Mosaic . . . . . . . . . . 443
    Input Image Mode . . . . . . . . . . 444
        Exclude Areas . . . . . . . . . . 444
        Image Dodging . . . . . . . . . . 444
        Color Balancing . . . . . . . . . . 446
        Histogram Matching . . . . . . . . . . 448
    Intersection Mode . . . . . . . . . . 449
        Set Overlap Function . . . . . . . . . . 449
        Automatically Generate Cutlines For Intersection . . . . . . . . . . 450
        Geometry-based Cutline Generation . . . . . . . . . . 451
    Output Image Mode . . . . . . . . . . 451
        Output Image Options . . . . . . . . . . 451
        Run Mosaic To Disc . . . . . . . . . . 453

Enhancement . . . . . . . . . . 455
        Display vs. File Enhancement . . . . . . . . . . 456
        Spatial Modeling Enhancements . . . . . . . . . . 456
    Correcting Data Anomalies . . . . . . . . . . 459
        Radiometric Correction: Visible/Infrared Imagery . . . . . . . . . . 460
        Atmospheric Effects . . . . . . . . . . 461
        Geometric Correction . . . . . . . . . . 462
    Radiometric Enhancement . . . . . . . . . . 463
        Contrast Stretching . . . . . . . . . . 464
        Histogram Equalization . . . . . . . . . . 469
        Histogram Matching . . . . . . . . . . 473
        Brightness Inversion . . . . . . . . . . 474
    Spatial Enhancement . . . . . . . . . . 474
        Convolution Filtering . . . . . . . . . . 475
        Crisp . . . . . . . . . . 479
        Resolution Merge . . . . . . . . . . 480

        Adaptive Filter . . . . . . . . . . 482
    Wavelet Resolution Merge . . . . . . . . . . 483
        Wavelet Theory . . . . . . . . . . 484
        Algorithm Theory . . . . . . . . . . 487
        Prerequisites and Limitations . . . . . . . . . . 489
        Spectral Transform . . . . . . . . . . 490
    Spectral Enhancement . . . . . . . . . . 491
        Principal Components Analysis . . . . . . . . . . 492
        Decorrelation Stretch . . . . . . . . . . 496
        Tasseled Cap . . . . . . . . . . 496
        RGB to IHS . . . . . . . . . . 498
        IHS to RGB . . . . . . . . . . 500
        Indices . . . . . . . . . . 501
    Hyperspectral Image Processing . . . . . . . . . . 504
    Independent Component Analysis . . . . . . . . . . 504
        Component Ordering . . . . . . . . . . 505
        Band Generation for Multispectral Imagery . . . . . . . . . . 508
        Remote Sensing Applications for ICs . . . . . . . . . . 508
        Tips and Tricks . . . . . . . . . . 510
    Fourier Analysis . . . . . . . . . . 511
        FFT . . . . . . . . . . 513
        Fourier Magnitude . . . . . . . . . . 513
        IFFT . . . . . . . . . . 516
        Filtering . . . . . . . . . . 517
        Windows . . . . . . . . . . 520
        Fourier Noise Removal . . . . . . . . . . 522
        Homomorphic Filtering . . . . . . . . . . 523
    Radar Imagery Enhancement . . . . . . . . . . 525
        Speckle Noise . . . . . . . . . . 526
        Edge Detection . . . . . . . . . . 532
        Texture . . . . . . . . . . 536
        Radiometric Correction: Radar Imagery . . . . . . . . . . 539
        Merging Radar with VIS/IR Imagery . . . . . . . . . . 541

Classification . . . . . . . . . . 545
    The Classification Process . . . . . . . . . . 545
        Pattern Recognition . . . . . . . . . . 545

        Training . . . . . . . . . . 545
        Signatures . . . . . . . . . . 546
        Decision Rule . . . . . . . . . . 547
        Output File . . . . . . . . . . 547
    Classification Tips . . . . . . . . . . 548
        Classification Scheme . . . . . . . . . . 548
        Iterative Classification . . . . . . . . . . 548
        Supervised vs. Unsupervised Training . . . . . . . . . . 549
        Classifying Enhanced Data . . . . . . . . . . 549
        Dimensionality . . . . . . . . . . 549
    Supervised Training . . . . . . . . . . 550
        Training Samples and Feature Space Objects . . . . . . . . . . 551
    Selecting Training Samples . . . . . . . . . . 551
        Evaluating Training Samples . . . . . . . . . . 554
    Selecting Feature Space Objects . . . . . . . . . . 554
    Unsupervised Training . . . . . . . . . . 557
        ISODATA Clustering . . . . . . . . . . 558
        RGB Clustering . . . . . . . . . . 562
    Signature Files . . . . . . . . . . 564
    Evaluating Signatures . . . . . . . . . . 565
        Alarm . . . . . . . . . . 566
        Ellipse . . . . . . . . . . 567
        Contingency Matrix . . . . . . . . . . 568
        Separability . . . . . . . . . . 569
        Signature Manipulation . . . . . . . . . . 572
    Classification Decision Rules . . . . . . . . . . 573
        Nonparametric Rules . . . . . . . . . . 574
        Parametric Rules . . . . . . . . . . 574
        Parallelepiped . . . . . . . . . . 575
        Feature Space . . . . . . . . . . 578
        Minimum Distance . . . . . . . . . . 580
        Mahalanobis Distance . . . . . . . . . . 581
        Maximum Likelihood/Bayesian . . . . . . . . . . 582
    Fuzzy Methodology . . . . . . . . . . 584
        Fuzzy Classification . . . . . . . . . . 584
        Fuzzy Convolution . . . . . . . . . . 584
    Expert Classification . . . . . . . . . . 585

Knowledge Engineer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .586Knowledge Classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .588

Evaluating Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589Thresholding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .589Accuracy Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .592

Photogrammetric Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 595What is Photogrammetry? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .595Types of Photographs and Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .596Why use Photogrammetry? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .597Photogrammetry/ Conventional Geometric Correction . . . . . . . . . . . . . . . . . . . . .597Single Frame Orthorectification/Block Triangulation . . . . . . . . . . . . . . . . . . . . . . .598

Image and Data Acquisition
Photogrammetric Scanners
Desktop Scanners
Scanning Resolutions
Coordinate Systems
Terrestrial Photography

Interior Orientation
Principal Point and Focal Length
Fiducial Marks
Lens Distortion

Exterior Orientation
The Collinearity Equation

Photogrammetric Solutions
Space Resection
Space Forward Intersection
Bundle Block Adjustment
Least Squares Adjustment
Self-calibrating Bundle Adjustment
Automatic Gross Error Detection

GCPs
GCP Requirements
Processing Multiple Strips of Imagery

Tie Points
Automatic Tie Point Collection

Image Matching Techniques


Area Based Matching
Feature Based Matching
Relation Based Matching
Image Pyramid

Satellite Photogrammetry
SPOT Interior Orientation
SPOT Exterior Orientation
Collinearity Equations & Satellite Block Triangulation

Orthorectification

Terrain Analysis
Terrain Data
Slope Images
Aspect Images
Shaded Relief
Topographic Normalization

Lambertian Reflectance Model
Non-Lambertian Model

Radar Concepts
IMAGINE OrthoRadar Theory

Parameters Required for Orthorectification
Algorithm Description

IMAGINE StereoSAR DEM Theory
Introduction
Input
Subset
Despeckle
Degrade
Coregister
Match
Degrade
Height

IMAGINE InSAR Theory
Introduction
Electromagnetic Wave Background
The Interferometric Model
Image Coregistration


Phase Noise Reduction
Phase Flattening
Phase Unwrapping
Conclusions

Math Topics
Summation
Statistics

Histogram
Bin Functions
Mean
Normal Distribution
Variance
Standard Deviation
Parameters
Covariance
Covariance Matrix

Dimensionality of Data
Measurement Vector
Mean Vector
Feature Space
Feature Space Images
n-Dimensional Histogram
Spectral Distance

Polynomials
Order
Transformation Matrix

Matrix Algebra
Matrix Notation
Matrix Multiplication
Transposition

Glossary
Bibliography

Works Cited
Related Reading

Index


List of Figures

Figure 1: Pixels and Bands in a Raster Image
Figure 2: Typical File Coordinates
Figure 3: Electromagnetic Spectrum
Figure 4: Sun Illumination Spectral Irradiance at the Earth’s Surface
Figure 5: Factors Affecting Radiation
Figure 6: Reflectance Spectra
Figure 7: Laboratory Spectra of Clay Minerals in the Infrared Region
Figure 8: IFOV
Figure 9: Brightness Values
Figure 10: Landsat TM—Band 2 (Four Types of Resolution)
Figure 11: Band Interleaved by Line (BIL)
Figure 12: Band Sequential (BSQ)
Figure 13: Image Files Store Raster Layers
Figure 14: Example of a Thematic Raster Layer
Figure 15: Examples of Continuous Raster Layers
Figure 16: Vector Elements
Figure 17: Vertices
Figure 18: Workspace Structure
Figure 19: Attribute CellArray
Figure 20: Symbolization Example
Figure 21: Digitizing Tablet
Figure 22: Raster Format Converted to Vector Format
Figure 23: Multispectral Imagery Comparison
Figure 24: Landsat MSS vs. Landsat TM
Figure 25: SPOT Panchromatic vs. SPOT XS
Figure 26: SLAR Radar
Figure 27: Received Radar Signal
Figure 28: Radar Reflection from Different Sources and Distances
Figure 29: ADRG Overview File Displayed in a Viewer
Figure 30: Subset Area with Overlapping ZDRs
Figure 31: Seamless Nine Image DR
Figure 32: ADRI Overview File Displayed in a Viewer
Figure 33: Arc/second Format
Figure 34: Common Uses of GPS Data
Figure 35: Example of One Seat with One Display and Two Screens
Figure 36: Transforming Data File Values to a Colorcell Value


Figure 37: Transforming Data File Values to a Colorcell Value
Figure 38: Transforming Data File Values to Screen Values
Figure 39: Contrast Stretch and Colorcell Values
Figure 40: Stretching by Min/Max vs. Standard Deviation
Figure 41: Continuous Raster Layer Display Process
Figure 42: Thematic Raster Layer Display Process
Figure 43: Pyramid Layers
Figure 44: Example of Dithering
Figure 45: Example of Color Patches
Figure 46: Data Input
Figure 47: Raster Attributes for lnlandc.img
Figure 48: Vector Attributes CellArray
Figure 49: Proximity Analysis
Figure 50: Contiguity Analysis
Figure 51: Using a Mask
Figure 52: Sum Option of Neighborhood Analysis (Image Interpreter)
Figure 53: Overlay
Figure 54: Indexing
Figure 55: Graphical Model for Sensitivity Analysis
Figure 56: Graphical Model Structure
Figure 57: Modeling Objects
Figure 58: Graphical and Script Models For Tasseled Cap Transformation
Figure 59: Layer Errors
Figure 60: Sample Scale Bars
Figure 61: Sample Legend
Figure 62: Sample Neatline, Tick Marks, and Grid Lines
Figure 63: Sample Symbols
Figure 64: Sample Sans Serif and Serif Typefaces with Various Styles Applied
Figure 65: Good Lettering vs. Bad Lettering
Figure 66: Projection Types
Figure 67: Tangent and Secant Cones
Figure 68: Tangent and Secant Cylinders
Figure 69: Ellipse
Figure 70: Polynomial Curve vs. GCPs
Figure 71: Linear Transformations
Figure 72: Nonlinear Transformations
Figure 73: Transformation Example—1st-Order
Figure 74: Transformation Example—2nd GCP Changed


Figure 75: Transformation Example—2nd-Order
Figure 76: Transformation Example—4th GCP Added
Figure 77: Transformation Example—3rd-Order
Figure 78: Transformation Example—Effect of a 3rd-Order Transformation
Figure 79: Triangle Network
Figure 80: Residuals and RMS Error Per Point
Figure 81: RMS Error Tolerance
Figure 82: Resampling
Figure 83: Nearest Neighbor
Figure 84: Bilinear Interpolation
Figure 85: Linear Interpolation
Figure 86: Cubic Convolution
Figure 87: Layout for a Book Map and a Paneled Map
Figure 88: Sample Map Composition
Figure 89: Albers Conical Equal Area Projection
Figure 90: Polar Aspect of the Azimuthal Equidistant Projection
Figure 91: Behrmann Cylindrical Equal-Area Projection
Figure 92: Bonne Projection
Figure 93: Cassini Projection
Figure 94: Cylindrical Equal-Area Projection
Figure 95: Eckert I Projection
Figure 96: Eckert II Projection
Figure 97: Eckert III Projection
Figure 98: Eckert IV Projection
Figure 99: Eckert V Projection
Figure 100: Eckert VI Projection
Figure 101: Equidistant Conic Projection
Figure 102: Equirectangular Projection
Figure 103: Geographic Projection
Figure 104: Hammer Projection
Figure 105: Interrupted Goode Homolosine Projection
Figure 106: Interrupted Mollweide Projection
Figure 107: Lambert Azimuthal Equal Area Projection
Figure 108: Lambert Conformal Conic Projection
Figure 109: Loximuthal Projection
Figure 110: Mercator Projection
Figure 111: Miller Cylindrical Projection
Figure 112: MGRS Grid Zones


Figure 113: Mollweide Projection
Figure 114: Oblique Mercator Projection
Figure 115: Orthographic Projection
Figure 116: Plate Carrée Projection
Figure 117: Polar Stereographic Projection and its Geometric Construction
Figure 118: Polyconic Projection of North America
Figure 119: Quartic Authalic Projection
Figure 120: Robinson Projection
Figure 121: Sinusoidal Projection
Figure 122: Space Oblique Mercator Projection
Figure 123: Zones of the State Plane Coordinate System
Figure 124: Stereographic Projection
Figure 125: Two Point Equidistant Projection
Figure 126: Zones of the Universal Transverse Mercator Grid in the United States
Figure 127: Van der Grinten I Projection
Figure 128: Wagner IV Projection
Figure 129: Wagner VII Projection
Figure 130: Winkel I Projection
Figure 131: Winkel’s Tripel Projection
Figure 132: Histograms of Radiometrically Enhanced Data
Figure 133: Graph of a Lookup Table
Figure 134: Enhancement with Lookup Tables
Figure 135: Nonlinear Radiometric Enhancement
Figure 136: Piecewise Linear Contrast Stretch
Figure 137: Contrast Stretch Using Lookup Tables, and Effect on Histogram
Figure 138: Histogram Equalization
Figure 139: Histogram Equalization Example
Figure 140: Equalized Histogram
Figure 141: Histogram Matching
Figure 142: Spatial Frequencies
Figure 143: Applying a Convolution Kernel
Figure 144: Output Values for Convolution Kernel
Figure 145: Local Luminance Intercept
Figure 146: Schematic Diagram of the Discrete Wavelet Transform - DWT
Figure 147: Inverse Discrete Wavelet Transform - DWT-1
Figure 148: Wavelet Resolution Merge
Figure 149: Two Band Scatterplot


Figure 150: First Principal Component
Figure 151: Range of First Principal Component
Figure 152: Second Principal Component
Figure 153: Intensity, Hue, and Saturation Color Coordinate System
Figure 154: One-Dimensional Fourier Analysis
Figure 155: Example of Fourier Magnitude
Figure 156: The Padding Technique
Figure 157: Comparison of Direct and Fourier Domain Processing
Figure 158: An Ideal Cross Section
Figure 159: High-Pass Filtering Using the Ideal Window
Figure 160: Filtering Using the Bartlett Window
Figure 161: Filtering Using the Butterworth Window
Figure 162: Homomorphic Filtering Process
Figure 163: Effects of Mean and Median Filters
Figure 164: Regions of Local Region Filter
Figure 165: One-dimensional, Continuous Edge, and Line Models
Figure 166: A Noisy Edge Superimposed on an Ideal Edge
Figure 167: Edge and Line Derivatives
Figure 168: Adjust Brightness Function
Figure 169: Range Lines vs. Lines of Constant Range
Figure 170: Example of a Feature Space Image
Figure 171: Process for Defining a Feature Space Object
Figure 172: ISODATA Arbitrary Clusters
Figure 173: ISODATA First Pass
Figure 174: ISODATA Second Pass
Figure 175: RGB Clustering
Figure 176: Ellipse Evaluation of Signatures
Figure 177: Classification Flow Diagram
Figure 178: Parallelepiped Classification With Two Standard Deviations as Limits
Figure 179: Parallelepiped Corners Compared to the Signature Ellipse
Figure 180: Feature Space Classification
Figure 181: Minimum Spectral Distance
Figure 182: Knowledge Engineer Editing Window
Figure 183: Example of a Decision Tree Branch
Figure 184: Split Rule Decision Tree Branch
Figure 185: Knowledge Classifier Classes of Interest
Figure 186: Histogram of a Distance Image
Figure 187: Interactive Thresholding Tips


Figure 188: Exposure Stations Along a Flight Path
Figure 189: A Regular Rectangular Block of Aerial Photos
Figure 190: Pixel Coordinates and Image Coordinates
Figure 191: Image Space and Ground Space Coordinate System
Figure 192: Terrestrial Photography
Figure 193: Internal Geometry
Figure 194: Pixel Coordinate System vs. Image Space Coordinate System
Figure 195: Radial vs. Tangential Lens Distortion
Figure 196: Elements of Exterior Orientation
Figure 197: Space Forward Intersection
Figure 198: Photogrammetric Configuration
Figure 199: GCP Configuration
Figure 200: GCPs in a Block of Images
Figure 201: Point Distribution for Triangulation
Figure 202: Tie Points in a Block
Figure 203: Image Pyramid for Matching at Coarse to Full Resolution
Figure 204: Perspective Centers of SPOT Scan Lines
Figure 205: Image Coordinates in a Satellite Scene
Figure 206: Interior Orientation of a SPOT Scene
Figure 207: Inclination of a Satellite Stereo-Scene (View from North to South)
Figure 208: Velocity Vector and Orientation Angle of a Single Scene
Figure 209: Ideal Point Distribution Over a Satellite Scene for Triangulation
Figure 210: Orthorectification
Figure 211: Digital Orthophoto—Finding Gray Values
Figure 212: Regularly Spaced Terrain Data Points
Figure 213: 3 × 3 Window Calculates the Slope at Each Pixel
Figure 214: Slope Calculation Example
Figure 215: 3 × 3 Window Calculates the Aspect at Each Pixel
Figure 216: Aspect Calculation Example
Figure 217: Shaded Relief
Figure 218: Doppler Cone
Figure 219: Sparse Mapping and Output Grids
Figure 220: Magnitude and Phase Data as shown in the complex plane
Figure 221: IMAGINE StereoSAR DEM Process Flow
Figure 222: SAR Image Intersection
Figure 223: UL Corner of the Reference Image
Figure 224: UL Corner of the Match Image
Figure 225: Image Pyramid


Figure 226: Electromagnetic Wave
Figure 227: Variation of Electric Field in Time
Figure 228: Effect of Time and Distance on Energy
Figure 229: Geometric Model for an Interferometric SAR System
Figure 230: Differential Collection Geometry
Figure 231: Interferometric Phase Image without Filtering
Figure 232: Interferometric Phase Image with Filtering
Figure 233: Interferometric Phase Image without Phase Flattening
Figure 234: Electromagnetic Wave Traveling through Space
Figure 235: One-dimensional Continuous vs. Wrapped Phase Function
Figure 236: Sequence of Unwrapped Phase Images
Figure 237: Wrapped vs. Unwrapped Phase Images
Figure 238: Histogram
Figure 239: Normal Distribution
Figure 240: Measurement Vector
Figure 241: Mean Vector
Figure 242: Two Band Plot
Figure 243: Two-band Scatterplot


List of Tables

Description of File Types
Raster Data Formats
Annotation Data Formats
Vector Data Formats
AVNIR-2 Sensor Characteristics
PRISM Sensor Characteristics
ASTER Characteristics
EROS A - EROS B Characteristics
FORMOSAT-2 Characteristics
GeoEye-1 Characteristics
KOMPSAT-1 and KOMPSAT-2 Characteristics
AVHRR Data Characteristics
QuickBird Characteristics
RapidEye Characteristics
WorldView-1 Characteristics
WorldView-2 Characteristics
Commonly Used Bands for Radar Imaging
PALSAR Sensor Characteristics
COSMO-SkyMed Imaging Characteristics
JERS-1 Bands and Wavelengths
RADARSAT Beam Mode Resolution
RADARSAT-2 Characteristics
SIR-C/X-SAR Bands and Frequencies
TerraSAR-X Imaging Characteristics
Daedalus TMS Bands and Wavelengths
ARC System Chart Types
Legend Files for the ARC System Chart Types
Common Raster Data Products
File Types Created by Screendump
Common TIFF Format Elements
Conversion of DXF Entries
Conversion of IGES Entities
Colorcell Example
Commonly Used RGB Colors
Example of a Recoded Land Cover Layer
Model Maker Functions
General Editing Operations and Supporting Feature Types
Comparison of Building and Cleaning Coverages
Common Map Scales
Pixels per Inch
Acres and Hectares per Pixel
Map Projections
Projection Parameters
Earth Spheroids for use with ERDAS IMAGINE


Non-Earth Spheroids for use with ERDAS IMAGINE
NAD27 State Plane Coordinate System for the United States
NAD83 State Plane Coordinate System for the United States
UTM Zones, Central Meridians, and Longitude Ranges
Description of Modeling Functions Available for Enhancement
Theoretical Coefficient of Variation Values
Training Sample Comparison
Scanning Resolutions
SAR Parameters Required for Georeferencing
STD_LP_HD Correlator


Preface

Introduction

The purpose of the ERDAS Field Guide™ is to provide background information on why one might use particular geographic information system (GIS) and image processing functions and how the software is manipulating the data, rather than what buttons to push to actually perform those functions.

This book is also aimed at a diverse audience: from those who are new to geoprocessing to those savvy users who have been in this industry for years. For the novice, the ERDAS Field Guide provides a brief history of the field, an extensive glossary of terms, and notes about applications for the different processes described. For the experienced user, the ERDAS Field Guide includes the formulas and algorithms that are used in the code, so that he or she can see exactly how each operation works.

Although the ERDAS Field Guide is primarily a reference to basic image processing and GIS concepts, it is geared toward ERDAS IMAGINE® users and the functions within ERDAS IMAGINE software, such as GIS analysis, image processing, cartography and map projections, graphics display hardware, statistics, and remote sensing. However, in some cases, processes and functions are described that may not be in the current version of the software, but are planned for a future release. There may also be functions described that are not available on your system, due to the actual package that you are using.

The enthusiasm with which the first four editions of the ERDAS Field Guide were received has been extremely gratifying, both to the authors and to Leica Geosystems GIS & Mapping, LLC as a whole. First conceived as a helpful manual for users, the ERDAS Field Guide is now being used as a textbook, lab manual, and training guide throughout the world. The ERDAS Field Guide will continue to expand and improve to keep pace with the profession. Suggestions and ideas for future editions are always welcome, and should be addressed to the Technical Writing department of Engineering at Leica Geosystems, in Norcross, Georgia.

Conventions Used in this Book

The following paragraphs are used throughout the ERDAS Field Guide and other ERDAS IMAGINE documentation.

These paragraphs contain strong warnings or important tips.

These paragraphs lead you to other chapters in the ERDAS Field Guide or other manuals for additional information.


These paragraphs give you additional information.

These paragraphs provide software version specific information.

NOTE: Notes give additional instruction.


Raster Data

Introduction

The ERDAS IMAGINE system incorporates the functions of both image processing and GIS. These functions include importing, viewing, altering, and analyzing raster and vector data sets. This chapter is an introduction to raster data, including:

• remote sensing

• data storage formats

• different types of resolution

• radiometric correction

• geocoded data

• raster data in GIS

See "Vector Data" on page 41 for more information on vector data.

Image Data

In general terms, an image is a digital picture or representation of an object. Remotely sensed image data are digital representations of the Earth. Image data are stored in data files, also called image files, on magnetic tapes, computer disks, or other media. The data consist only of numbers. These representations form images when they are displayed on a screen or are output to hardcopy.

Each number in an image file is a data file value. Data file values are sometimes referred to as pixels. The term pixel is abbreviated from picture element. A pixel is the smallest part of a picture (the area being scanned) with a single value. The data file value is the measured brightness value of the pixel at a specific wavelength.

Raster image data are laid out in a grid similar to the squares on a checkerboard. Each cell of the grid is represented by a pixel, also known as a grid cell. In remotely sensed image data, each pixel represents an area of the Earth at a specific location. The data file value assigned to that pixel is the record of reflected radiation or emitted heat from the Earth’s surface at that location. Data file values may also represent elevation, as in digital elevation models (DEMs).


The terms pixel and data file value are not interchangeable in ERDAS IMAGINE. Pixel is used as a broad term with many meanings, one of which is data file value. One pixel in a file may consist of many data file values. When an image is displayed or printed, other types of values are represented by a pixel.

See "Image Display" on page 145 for more information on how images are displayed.

Bands

Image data may include several bands of information. Each band is a set of data file values for a specific portion of the electromagnetic spectrum of reflected light or emitted heat (red, green, blue, near-infrared, infrared, thermal, and so forth) or some other user-defined information created by combining or enhancing the original bands, or creating new bands from other sources. ERDAS IMAGINE programs can handle an unlimited number of bands of image data in a single file.

Figure 1: Pixels and Bands in a Raster Image

See "Enhancement" on page 455 for more information on combining or enhancing bands of data.

Bands vs. Layers

In ERDAS IMAGINE, bands of data are occasionally referred to as layers. Once a band is imported into a GIS, it becomes a layer of information which can be processed in various ways. Additional layers can be created and added to the image file (.img extension) in ERDAS IMAGINE, such as layers created by combining existing layers.

Read more about image files in ERDAS IMAGINE Format (.img) on page 25.



Layers vs. Viewer Layers

The Viewer permits several images to be layered, in which case each image (including a multiband image) may be a layer.

Numeral Types

The range and the type of numbers used in a raster layer determine how the layer is displayed and processed. For example, a layer of elevation data with values ranging from -51.257 to 553.401 would be treated differently from a layer using only two values to show land and water. The data file values in raster layers generally fall into these categories:

• Nominal data file values are simply categorized and named. The actual value used for each category has no inherent meaning—it is simply a class value. An example of a nominal raster layer would be a thematic layer showing tree species.

• Ordinal data are similar to nominal data, except that the file values put the classes in a rank or order. For example, a layer with classes numbered and named 1 - Good, 2 - Moderate, and 3 - Poor is an ordinal system.

• Interval data file values have an order, but the intervals between the values are also meaningful. Interval data measure some characteristic, such as elevation or degrees Fahrenheit, which does not necessarily have an absolute zero. (The difference between two values in interval data is meaningful.)

• Ratio data measure a condition that has a natural zero, such as electromagnetic radiation (as in most remotely sensed data), rainfall, or slope.

Nominal and ordinal data lend themselves to applications in which categories, or themes, are used. Therefore, these layers are sometimes called categorical or thematic. Likewise, interval and ratio layers are more likely to measure a condition, causing the file values to represent continuous gradations across the layer. Such layers are called continuous.
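The distinction has practical consequences: the same arithmetic is not meaningful for every numeral type. A small illustration with invented class codes and rainfall figures (hypothetical values, for the example only):

```python
import numpy as np

# Nominal (thematic) layer: values are arbitrary class codes,
# e.g. 1 = oak, 2 = pine, 3 = maple.
species = np.array([[1, 3],
                    [2, 3]])
# Averaging class codes (2.25 here) is meaningless; counting
# pixels per class is the appropriate summary for categorical data.
classes, counts = np.unique(species, return_counts=True)
print(dict(zip(classes.tolist(), counts.tolist())))   # {1: 1, 2: 1, 3: 2}

# Ratio layer: values measure a condition with a natural zero,
# e.g. rainfall, so means and differences are meaningful.
rainfall = np.array([[0.0, 12.5],
                     [3.2, 8.1]])
print(rainfall.mean())                                # 5.95
```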

Coordinate Systems

The location of a pixel in a file or on a displayed or printed image is expressed using a coordinate system. In two-dimensional coordinate systems, locations are organized in a grid of columns and rows. Each location on the grid is expressed as a pair of coordinates known as X and Y. The X coordinate specifies the column of the grid, and the Y coordinate specifies the row. Image data organized into such a grid are known as raster data.

There are two basic coordinate systems used in ERDAS IMAGINE:


• file coordinates—indicate the location of a pixel within the image (data file)

• map coordinates—indicate the location of a pixel in a map

File Coordinates

File coordinates refer to the location of the pixels within the image (data) file. File coordinates for the pixel in the upper left corner of the image always begin at 0, 0.

Figure 2: Typical File Coordinates

Map Coordinates

Map coordinates may be expressed in one of a number of map coordinate or projection systems. The type of map coordinates used by a data file depends on the method used to create the file (remote sensing, scanning an existing map, and so forth). In ERDAS IMAGINE, a data file can be converted from one map coordinate system to another.
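The relationship between file coordinates and map coordinates is commonly expressed as an affine transformation. A minimal sketch under that assumption, with a hypothetical upper-left corner and pixel size (ERDAS IMAGINE keeps its own georeferencing information with the data file; the numbers here are invented):

```python
def file_to_map(col, row, ul_x, ul_y, pixel_width, pixel_height):
    """Convert file coordinates (column, row), with 0, 0 at the
    upper left pixel, to map coordinates for a north-up image."""
    map_x = ul_x + col * pixel_width
    map_y = ul_y - row * pixel_height   # map Y decreases down the rows
    return map_x, map_y

# Hypothetical georeferencing: upper left corner at (440000 E, 3750000 N)
# in meters, 30-meter square pixels.
print(file_to_map(3, 1, 440000.0, 3750000.0, 30.0, 30.0))
# -> (440090.0, 3749970.0)
```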

For more information on map coordinates and projection systems, see "Cartography" on page 211 or "Map Projections" on page 297. See "Rectification" on page 251 for more information on changing the map coordinate system of a data file.

Remote Sensing

Remote sensing is the acquisition of data about an object or scene by a sensor that is far from the object (Colwell, 1983). Aerial photography, satellite imagery, and radar are all forms of remotely sensed data. Usually, remotely sensed data refer to data of the Earth collected from sensors on satellites or aircraft. Most of the images used as input to the ERDAS IMAGINE system are remotely sensed. However, you are not limited to remotely sensed data.



This section is a brief introduction to remote sensing. There are many books available for more detailed information, including Colwell, 1983; Swain and Davis, 1978; and Slater, 1980 (see "Bibliography" on page 777).

Electromagnetic Radiation Spectrum

The sensors on remote sensing platforms usually record electromagnetic radiation. Electromagnetic radiation (EMR) is energy transmitted through space in the form of electric and magnetic waves (Star and Estes, 1990). Remote sensors are made up of detectors that record specific wavelengths of the electromagnetic spectrum. The electromagnetic spectrum is the range of electromagnetic radiation extending from cosmic waves to radio waves (Jensen, 1996).

All types of land cover (rock types, water bodies, and so forth) absorb a portion of the electromagnetic spectrum, giving a distinguishable signature of electromagnetic radiation. Armed with the knowledge of which wavelengths are absorbed by certain features and the intensity of the reflectance, you can analyze a remotely sensed image and make fairly accurate assumptions about the scene. Figure 3 illustrates the electromagnetic spectrum (Suits, 1983; Star and Estes, 1990).

Figure 3: Electromagnetic Spectrum

SWIR and LWIR

The near-infrared and middle-infrared regions of the electromagnetic spectrum are sometimes referred to as the short wave infrared region (SWIR). This is to distinguish this area from the thermal or far infrared region, which is often referred to as the long wave infrared region (LWIR). The SWIR is characterized by reflected radiation, whereas the LWIR is characterized by emitted radiation.

[Figure 3 charts the spectrum in micrometers (μm): ultraviolet; visible (0.4 to 0.7 μm), comprising blue (0.4 to 0.5), green (0.5 to 0.6), and red (0.6 to 0.7); near-infrared (0.7 to 2.0 μm); middle-infrared (2.0 to 5.0 μm); far-infrared (8.0 to 15.0 μm); and radar wavelengths, with the reflected (SWIR) and thermal (LWIR) regions marked.]


Absorption / Reflection Spectra

When radiation interacts with matter, some wavelengths are absorbed and others are reflected. To enhance features in image data, it is necessary to understand how vegetation, soils, water, and other land covers reflect and absorb radiation. The study of the absorption and reflection of EMR waves is called spectroscopy.

Spectroscopy

Most commercial sensors, with the exception of imaging radar sensors, are passive solar imaging sensors. Passive solar imaging sensors can only receive radiation waves; they cannot transmit radiation. (Imaging radar sensors are active sensors that emit a burst of microwave radiation and receive the backscattered radiation.)

The use of passive solar imaging sensors to characterize or identify a material of interest is based on the principles of spectroscopy. Therefore, to fully utilize a visible/infrared (VIS/IR) multispectral data set and properly apply enhancement algorithms, it is necessary to understand these basic principles. Spectroscopy reveals the:

• absorption spectra—the EMR wavelengths that are absorbed by specific materials of interest

• reflection spectra—the EMR wavelengths that are reflected by specific materials of interest

Absorption Spectra

Absorption is based on the molecular bonds in the (surface) material. Which wavelengths are absorbed depends upon the chemical composition and crystalline structure of the material. For pure compounds, these absorption bands are so specific that the SWIR region is often called an infrared fingerprint.

Atmospheric Absorption

In remote sensing, the sun is the radiation source for passive sensors. However, the sun does not emit the same amount of radiation at all wavelengths. Figure 4 shows the solar irradiation curve, which is far from linear.


Figure 4: Sun Illumination Spectral Irradiance at the Earth’s Surface

Source: Modified from Chahine et al, 1983

Solar radiation must travel through the Earth’s atmosphere before it reaches the Earth’s surface. As it travels through the atmosphere, radiation is affected by four phenomena (Elachi, 1987):

• absorption—the amount of radiation absorbed by the atmosphere

• scattering—the amount of radiation scattered away from the field of view by the atmosphere

• scattering source—divergent solar irradiation scattered into the field of view

• emission source—radiation re-emitted after absorption

[Figure 4 plots spectral irradiance against wavelength from 0 to 3.0 μm across the UV, visible, and infrared regions, comparing the solar irradiation curve outside the atmosphere with the curve at sea level; peaks show absorption by H2O, CO2, and O3.]


Figure 5: Factors Affecting Radiation

Source: Elachi, 1987

Absorption is not a linear phenomenon—it is logarithmic with concentration (Flaschka, 1969). In addition, the concentration of atmospheric gases, especially water vapor, is variable. The other major gases of importance are carbon dioxide (CO2) and ozone (O3), which can vary considerably around urban areas. Thus, the extent of atmospheric absorbance varies with humidity, elevation, proximity to (or downwind of) urban smog, and other factors.

Scattering is modeled as Rayleigh scattering with a commonly used algorithm that accounts for the scattering of short wavelength energy by the gas molecules in the atmosphere (Pratt, 1991)—for example, ozone. Scattering is variable with both wavelength and atmospheric aerosols. Aerosols differ regionally (ocean vs. desert) and daily (for example, Los Angeles smog has different concentrations daily).

Scattering source and emission source may account for only 5% of the variance. These factors are minor, but they must be considered for accurate calculation. After interaction with the target material, the reflected radiation must travel back through the atmosphere and be subjected to these phenomena a second time to arrive at the satellite.
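The logarithmic behavior noted above is conventionally written in the Beer-Lambert form. The Field Guide does not state the equation itself, so the following is offered only as a standard reference formulation:

    A = log10(I0 / I) = ε · l · c

where A is the absorbance, I0 and I are the incident and transmitted intensities, ε is the absorption coefficient, l is the path length through the absorbing medium, and c is the concentration. Because absorbance grows linearly with concentration while transmitted intensity falls off by powers of ten, absorption is logarithmic, not linear, in concentration.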



The mathematical models that attempt to quantify the total atmospheric effect on the solar illumination are called radiative transfer equations. Some of the most commonly used are Lowtran (Kneizys et al, 1988) and Modtran (Berk et al, 1989).

See "Enhancement" on page 455 for more information on atmospheric modeling.

Reflectance Spectra

After rigorously defining the incident radiation (solar irradiation at target), it is possible to study the interaction of the radiation with the target material. When an electromagnetic wave (solar illumination in this case) strikes a target surface, three interactions are possible (Elachi, 1987):

• reflection

• transmission

• scattering

It is the reflected radiation, generally modeled as bidirectional reflectance (Clark and Roush, 1984), that is measured by the remote sensor.

Remotely sensed data are made up of reflectance values. The resulting reflectance values translate into discrete digital numbers (or values) recorded by the sensing device. These gray scale values fit within a certain bit range (such as 0 to 255, which is 8-bit data) depending on the characteristics of the sensor.

Each satellite sensor detector is designed to record a specific portion of the electromagnetic spectrum. For example, Landsat Thematic Mapper (TM) band 1 records the 0.45 to 0.52 μm portion of the spectrum and is designed for water body penetration, making it useful for coastal water mapping. It is also useful for soil/vegetation discriminations, forest type mapping, and cultural features identification (Lillesand and Kiefer, 1987).

The characteristics of each sensor provide the first level of constraints on how to approach the task of enhancing specific features, such as vegetation or urban areas. Therefore, when choosing an enhancement technique, one should pay close attention to the characteristics of the land cover types within the constraints imposed by the individual sensors.
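The translation from continuous reflectance to discrete digital numbers described above is a quantization step. A minimal sketch, assuming simple linear scaling of reflectance in [0, 1] onto an 8-bit range (real sensors apply their own calibration gains and offsets, which are not modeled here):

```python
import numpy as np

def reflectance_to_dn(reflectance, bits=8):
    """Linearly quantize reflectance in [0, 1] onto the integer
    range of an unsigned n-bit sensor: 0 to 2**bits - 1."""
    levels = 2 ** bits - 1
    dn = np.round(np.clip(reflectance, 0.0, 1.0) * levels)
    return dn.astype(np.uint16 if bits > 8 else np.uint8)

reflectance = np.array([0.03, 0.21, 0.47, 0.88])   # hypothetical values
print(reflectance_to_dn(reflectance))              # [  8  54 120 224]
```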


The use of VIS/IR imagery for target discrimination, whether the target is mineral, vegetation, man-made, or even the atmosphere itself, is based on the reflectance spectrum of the material of interest (see Figure 6). Every material has a characteristic spectrum based on the chemical composition of the material. When sunlight (the illumination source for VIS/IR imagery) strikes a target, certain wavelengths are absorbed by the chemical bonds; the rest are reflected back to the sensor. It is, in fact, the wavelengths that are not returned to the sensor that provide information about the imaged area.

Specific wavelengths are also absorbed by gases in the atmosphere (H2O vapor, CO2, O2, and so forth). If the atmosphere absorbs a large percentage of the radiation, it becomes difficult or impossible to use that particular wavelength(s) to study the Earth. For the present Landsat and Systeme Pour l’observation de la Terre (SPOT) sensors, only the water vapor bands are considered strong enough to exclude the use of their spectral absorption region. Figure 6 shows how Landsat TM bands 5 and 7 were carefully placed to avoid these regions. Absorption by other atmospheric gases was not extensive enough to eliminate the use of the spectral region for present day broad band sensors.

Figure 6: Reflectance Spectra

(Plot of reflectance, %, 0 to 100, against wavelength, 0.4 to 2.4 μm, for green vegetation, silt loam, and kaolinite; the atmospheric absorption bands, the kaolinite absorption feature, Landsat MSS bands 4 through 7, and Landsat TM bands 1 through 5 and 7 are marked.)


Source: Modified from Fraser, 1986; Crist et al, 1986; Sabins, 1987

NOTE: This chart is for comparison purposes only. It is not meant to show actual values. The spectra are offset to better display the lines.

An inspection of the spectra reveals the theoretical basis of some of the indices in the ERDAS IMAGINE Image Interpreter. Consider the vegetation index TM4/TM3. It is readily apparent that for vegetation this value could be very large. For soils, the value could be much smaller, and for clay minerals, the value could be near zero. Conversely, when the clay ratio TM5/TM7 is considered, the opposite applies.

Hyperspectral Data

As remote sensing moves toward the use of more and narrower bands (for example, AVIRIS with 224 bands each only 10 nm wide), absorption by specific atmospheric gases must be considered. These multiband sensors are called hyperspectral sensors. As more and more of the incident radiation is absorbed by the atmosphere, the digital number (DN) values of that band get lower, eventually becoming useless—unless one is studying the atmosphere. Someone wanting to measure the atmospheric content of a specific gas could utilize the bands of specific absorption.

NOTE: Hyperspectral bands are generally measured in nanometers (nm).

Figure 6 shows the spectral bandwidths of the channels for the Landsat sensors plotted above the absorption spectra of some common natural materials (kaolin clay, silty loam soil, and green vegetation). Note that while the spectra are continuous, the Landsat channels are segmented or discontinuous. We can still use the spectra in interpreting the Landsat data. For example, a Normalized Difference Vegetation Index (NDVI) ratio for the three materials would be very different and, therefore, could be used to discriminate between them. Similarly, the ratio TM5/TM7 is commonly used to measure the concentration of clay minerals. Evaluation of the spectra shows why.

Figure 7 shows detail of the absorption spectra of three clay minerals. Because of the wide bandpass (2080 to 2350 nm) of TM band 7, it is not possible to discern between these three minerals with the Landsat sensor. As mentioned, the AVIRIS hyperspectral sensor has a large number of approximately 10 nm wide bands. With the proper selection of band ratios, it would be possible to discriminate between these three clay minerals. For example, a color composite image prepared from RGB = 2160nm/2190nm, 2220nm/2250nm, 2350nm/2488nm could produce a color-coded clay mineral image-map.
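The band ratios mentioned above are straightforward to compute once the bands are available as arrays. A minimal sketch follows; the band arrays here are random stand-ins for real data, and the variable names are hypothetical.

import numpy as np

def band_ratio(numerator, denominator):
    """Pixelwise band ratio; zeros in the denominator yield 0 rather
    than a division error."""
    num = numerator.astype(np.float64)
    den = denominator.astype(np.float64)
    return np.divide(num, den, out=np.zeros_like(num), where=den != 0)

# Stand-ins for real TM band data (hypothetical 100 x 100 arrays).
rng = np.random.default_rng(0)
tm3, tm4, tm5, tm7 = (rng.integers(1, 255, (100, 100)) for _ in range(4))

vegetation_index = band_ratio(tm4, tm3)        # large over vegetation
clay_ratio = band_ratio(tm5, tm7)              # large over clay minerals
ndvi = band_ratio(tm4 - tm3, tm4 + tm3)        # normalized difference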


The commercial airborne multispectral scanners are used in a similar fashion. The Airborne Imaging Spectrometer from the Geophysical & Environmental Research Corp. (GER) has 79 bands in the UV, visible, SWIR, and thermal-infrared regions. The Airborne Multispectral Scanner Mk2 by Geoscan Pty, Ltd., has up to 52 bands in the visible, SWIR, and thermal-infrared regions. To properly utilize these hyperspectral sensors, you must understand the phenomenon involved and have some idea of the target materials being sought.

Figure 7: Laboratory Spectra of Clay Minerals in the Infrared Region

Source: Modified from Sabins, 1987

NOTE: Spectra are offset vertically for clarity.

(Plot of reflectance, %, against wavelength, 2000 to 2600 nm, for kaolinite, montmorillonite, and illite; the Landsat TM band 7 bandpass, 2080 nm to 2350 nm, is marked.)


The characteristics of Landsat, AVIRIS, and other data types are discussed in "Raster and Vector Data Sources" on page 55. See "Enhancement" on page 455 for more information on the NDVI ratio.

Imaging Radar Data

Radar remote sensors can be broken into two broad categories: passive and active. The passive sensors record the very low intensity, microwave radiation naturally emitted by the Earth. Because of the very low intensity, these images have low spatial resolution (that is, large pixel size).

It is the active sensors, termed imaging radar, that are introducing a new generation of satellite imagery to remote sensing. To produce an image, these satellites emit a directed beam of microwave energy at the target, and then collect the backscattered (reflected) radiation from the target scene. Because they must emit a powerful burst of energy, these satellites require large solar collectors and storage batteries. For this reason, they cannot operate continuously; some satellites are limited to 10 minutes of operation per hour.

The microwave energy emitted by an active radar sensor is coherent and defined by a narrow bandwidth. The following table summarizes the bandwidths used in remote sensing.

Band Designation*       Wavelength (λ), cm      Frequency (ν), GHz (10⁹ cycles · sec⁻¹)
Ka (0.86 cm)            0.8 to 1.1              40.0 to 26.5
K                       1.1 to 1.7              26.5 to 18.0
Ku                      1.7 to 2.4              18.0 to 12.5
X (3.0 cm, 3.2 cm)      2.4 to 3.8              12.5 to 8.0
C                       3.8 to 7.5              8.0 to 4.0
S                       7.5 to 15.0             4.0 to 2.0
L (23.5 cm, 25.0 cm)    15.0 to 30.0            2.0 to 1.0
P                       30.0 to 100.0           1.0 to 0.3

*Wavelengths commonly used in imaging radars are shown in parentheses.


A key element of a radar sensor is the antenna. For a given position in space, the resolution of the resultant image is a function of the antenna size. This is termed a real-aperture radar (RAR). At some point, it becomes impossible to make a large enough antenna to create the desired spatial resolution. To get around this problem, processing techniques have been developed which combine the signals received by the sensor as it travels over the target. Thus, the antenna is perceived to be as long as the sensor path during backscatter reception. This is termed a synthetic aperture and the sensor a synthetic aperture radar (SAR).

The received signal is termed a phase history or echo hologram. It contains a time history of the radar signal over all the targets in the scene, and is itself a low resolution RAR image. In order to produce a high resolution image, this phase history is processed through a hardware/software system called an SAR processor. The SAR processor software requires operator input parameters, such as information about the sensor flight path and the radar sensor's characteristics, to process the raw signal data into an image. These input parameters depend on the desired result or intended application of the output imagery.

One of the most valuable advantages of imaging radar is that it creates images from its own energy source and therefore is not dependent on sunlight. Thus one can record uniform imagery any time of the day or night. In addition, the microwave frequencies at which imaging radars operate are largely unaffected by the atmosphere. This allows image collection through cloud cover or rain storms. However, the backscattered signal can be affected. Radar images collected during heavy rainfall are often seriously attenuated, which decreases the signal-to-noise ratio (SNR). In addition, the atmosphere does cause perturbations in the signal phase, which decreases resolution of output products, such as the SAR image or generated DEMs.

Resolution

Resolution is a broad term commonly used to describe:

• the number of pixels you can display on a display device, or

• the area on the ground that a pixel represents in an image file.

These broad definitions are inadequate when describing remotely sensed data. Four distinct types of resolution must be considered:

• spectral—the specific wavelength intervals that a sensor can record

• spatial—the area on the ground represented by each pixel

• radiometric—the number of possible data file values in each band (indicated by the number of bits into which the recorded energy is divided)


• temporal—how often a sensor obtains imagery of a particular area

These four domains contain separate information that can be extracted from the raw data.

Spectral Resolution

Spectral resolution refers to the specific wavelength intervals in the electromagnetic spectrum that a sensor can record (Simonett et al, 1983). For example, band 1 of the Landsat TM sensor records energy between 0.45 and 0.52 μm in the visible part of the spectrum. Wide intervals in the electromagnetic spectrum are referred to as coarse spectral resolution, and narrow intervals are referred to as fine spectral resolution. For example, the SPOT panchromatic sensor is considered to have coarse spectral resolution because it records EMR between 0.51 and 0.73 μm. On the other hand, band 3 of the Landsat TM sensor has fine spectral resolution because it records EMR between 0.63 and 0.69 μm (Jensen, 1996).

NOTE: The spectral resolution does not indicate how many levels the signal is broken into.

Spatial Resolution

Spatial resolution is a measure of the smallest object that can be resolved by the sensor, or the area on the ground represented by each pixel (Simonett et al, 1983). The finer the resolution, the lower the number. For instance, a spatial resolution of 79 meters is coarser than a spatial resolution of 10 meters.

Scale

The terms large-scale imagery and small-scale imagery often refer to spatial resolution. Scale is the ratio of distance on a map as related to the true distance on the ground (Star and Estes, 1990). Large-scale in remote sensing refers to imagery in which each pixel represents a small area on the ground, such as SPOT data, with a spatial resolution of 10 m or 20 m. Small scale refers to imagery in which each pixel represents a large area on the ground, such as Advanced Very High Resolution Radiometer (AVHRR) data, with a spatial resolution of 1.1 km.

This terminology is derived from the fraction used to represent the scale of the map, such as 1:50,000. Small-scale imagery is represented by a small fraction (one over a very large number). Large-scale imagery is represented by a larger fraction (one over a smaller number). Generally, anything smaller than 1:250,000 is considered small-scale imagery.

NOTE: Scale and spatial resolution are not always the same thing. An image always has the same spatial resolution, but it can be presented at different scales (Simonett et al, 1983).


Instantaneous Field of View

Spatial resolution is also described as the instantaneous field of view (IFOV) of the sensor, although the IFOV is not always the same as the area represented by each pixel. The IFOV is a measure of the area viewed by a single detector in a given instant in time (Star and Estes, 1990). For example, Landsat MSS data have an IFOV of 79 × 79 meters, but there is an overlap of 11.5 meters in each pass of the scanner, so the actual area represented by each pixel is 56.5 × 79 meters (usually rounded to 57 × 79 meters).

Even though the IFOV is not the same as the spatial resolution, it is important to know the number of pixels into which the total field of view for the image is broken. Objects smaller than the stated pixel size may still be detectable in the image if they contrast with the background, such as roads, drainage patterns, and so forth.

On the other hand, objects the same size as the stated pixel size (or larger) may not be detectable if there are brighter or more dominant objects nearby. In Figure 8, a house sits in the middle of four pixels. If the house has a reflectance similar to its surroundings, the data file values for each of these pixels reflect the area around the house, not the house itself, since the house does not dominate any one of the four pixels. However, if the house has a significantly different reflectance than its surroundings, it may still be detectable.

Figure 8: IFOV (a house centered where four 20 m × 20 m pixels meet)

Radiometric Resolution

Radiometric resolution refers to the dynamic range, or number of possible data file values in each band. This is referred to by the number of bits into which the recorded energy is divided.



For instance, in 8-bit data, the data file values range from 0 to 255 for each pixel, but in 7-bit data, the data file values for each pixel range from 0 to 127. In Figure 9, 8-bit and 7-bit data are illustrated. The sensor measures the EMR in its range. The total intensity of the energy from 0 to the maximum amount the sensor measures is broken down into 256 brightness values for 8-bit data, and 128 brightness values for 7-bit data.

Figure 9: Brightness Values (the 0 to maximum intensity range divided into 256 brightness values, 0 through 255, for 8-bit data, and 128 brightness values, 0 through 127, for 7-bit data)

Temporal Resolution

Temporal resolution refers to how often a sensor obtains imagery of a particular area. For example, the Landsat satellite can view the same area of the globe once every 16 days. SPOT, on the other hand, can revisit the same area every three days.

NOTE: Temporal resolution is an important factor to consider in change detection studies.

Figure 10 illustrates all four types of resolution:

Figure 10: Landsat TM—Band 2 (Four Types of Resolution)

(Annotations: spatial resolution: 1 pixel = 79 m × 79 m; temporal resolution: same area viewed every 16 days, for example Day 1, Day 17, Day 31; radiometric resolution: 8-bit, 0 to 255; spectral resolution: 0.52 to 0.60 μm.)


Source: EOSAT

Data Correction

There are several types of errors that can be manifested in remotely sensed data. Among these are line dropout and striping. These errors can be corrected to an extent in GIS by radiometric and geometric correction functions.

NOTE: Radiometric errors are usually already corrected in data from EOSAT or SPOT.

See "Enhancement" on page 455 for more information on radiometric and geometric correction.

Line Dropout

Line dropout occurs when a detector either completely fails to function or becomes temporarily saturated during a scan (like the effect of a camera flash on a human retina). The result is a line or partial line of data with higher data file values, creating a horizontal streak until the detector(s) recovers, if it recovers. Line dropout is usually corrected by replacing the bad line with a line of estimated data file values. The estimated line is based on the lines above and below it.

You can correct line dropout using the 5 × 5 Median Filter from the Radar Speckle Suppression function. The Convolution and Focal Analysis functions in the ERDAS IMAGINE Image Interpreter also correct for line dropout.
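The replace-with-neighbors idea behind line dropout correction can be sketched in a few lines. This is an illustrative simplification, not the IMAGINE implementation; it assumes the bad scan lines have already been identified and are not adjacent to one another.

import numpy as np

def repair_line_dropout(band, bad_rows):
    """Replace each dropped scan line with the mean of the lines
    directly above and below it (edges fall back to the one neighbor)."""
    fixed = band.astype(np.float64).copy()
    for r in bad_rows:
        above = fixed[r - 1] if r > 0 else fixed[r + 1]
        below = fixed[r + 1] if r < fixed.shape[0] - 1 else fixed[r - 1]
        fixed[r] = (above + below) / 2.0
    return fixed.astype(band.dtype)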

Striping

Striping or banding occurs if a detector goes out of adjustment—that is, it provides readings consistently greater than or less than the other detectors for the same band over the same ground cover.

Use ERDAS IMAGINE Image Interpreter or ERDAS IMAGINE Spatial Modeler for implementing algorithms to eliminate striping. The ERDAS IMAGINE Spatial Modeler editing capabilities allow you to adapt the algorithms to best address the data.
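One simple destriping approach is to normalize each detector's lines to the band-wide statistics. The sketch below assumes each of n detectors records every n-th scan line (16 detectors per reflective band is the Landsat TM arrangement); it is one possible algorithm, not necessarily the one used by the IMAGINE tools.

import numpy as np

def destripe(band, n_detectors=16):
    """Adjust each detector's lines to match the band-wide mean and
    standard deviation; detector d is assumed to record rows
    d, d + n, d + 2n, ... for n = n_detectors."""
    out = band.astype(np.float64).copy()
    target_mean, target_std = out.mean(), out.std()
    for d in range(n_detectors):
        rows = out[d::n_detectors]
        mean, std = rows.mean(), rows.std()
        if std > 0:
            out[d::n_detectors] = (rows - mean) / std * target_std + target_mean
    return out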

Data Storage

Image data can be stored on a variety of media—tapes, CD-ROMs, or DVD-ROMs, for example—but how the data are stored (that is, the structure) is more important than on what they are stored.


All computer data are in binary format. The basic unit of binary data is a bit. A bit can have two possible values—0 and 1, or “off” and “on” respectively. A set of bits, however, can have many more values, depending on the number of bits used. The number of values that can be expressed by a set of bits is 2 to the power of the number of bits used. For example, 8 bits can express 256 different values.

A byte is 8 bits of data. Generally, file size and disk space are referred to by number of bytes. For example, a PC may have 256 megabytes of RAM (random access memory), or a file may need 55,698 bytes of disk space. 1,024 bytes = 1 kilobyte. A megabyte (Mb) is about one million bytes. A gigabyte (Gb) is about one billion bytes.

Storage Formats Image data can be arranged in several ways on a tape or other media. The most common storage formats are:

• BIL (band interleaved by line)

• BSQ (band sequential)

• BIP (band interleaved by pixel)

For a single band of data, all formats (BIL, BIP, and BSQ) are identical, as long as the data are not blocked.

Blocked data are discussed under Storage Media on page 22.

BIL

In BIL (band interleaved by line) format, each record in the file contains a scan line (row) of data for one band (Slater, 1980). All bands of data for a given line are stored consecutively within the file as shown in Figure 11.


Figure 11: Band Interleaved by Line (BIL) (layout: header file; then Line 1 Band 1, Line 1 Band 2, ..., Line 1 Band x; Line 2 Band 1, ..., Line 2 Band x; ...; Line n Band 1, ..., Line n Band x; then trailer file)

NOTE: Although a header and trailer file are shown in this diagram, not all BIL data contain header and trailer files.

BSQ

In BSQ (band sequential) format, each entire band is stored consecutively in the same file (Slater, 1980). This format is advantageous, in that:

• one band can be read and viewed easily, and

• multiple bands can be easily loaded in any order.



Figure 12: Band Sequential (BSQ) (layout: header file(s); then a complete image file for Band 1, Line 1 through Line n, followed by an end-of-file marker; then the image file for Band 2, and so on through Band x; then trailer file(s))

Landsat TM data are stored in a type of BSQ format known as fast format. Fast format data have the following characteristics:

• Files are not split between tapes. If a band starts on the first tape, it ends on the first tape.

• An end-of-file (EOF) marker follows each band.

• An end-of-volume marker marks the end of each volume (tape). An end-of-volume marker consists of three end-of-file markers.

• There is one header file per tape.

• There are no header records preceding the image data.

• Regular products (not geocoded) are normally unblocked. Geocoded products are normally blocked (EOSAT).

ERDAS IMAGINE imports all of the header and image file information.



See Geocoded Data on page 31 for more information on geocoded data.

BIP

In BIP (band interleaved by pixel) format, the values for each band are ordered within a given pixel. The pixels are arranged sequentially on the tape (Slater, 1980). The sequence for BIP format is:

Pixel 1, Band 1; Pixel 1, Band 2; Pixel 1, Band 3; ...
Pixel 2, Band 1; Pixel 2, Band 2; Pixel 2, Band 3; ...
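The three interleaving schemes differ only in how a (row, column, band) index maps to a position in the file. A sketch of that mapping for an idealized unblocked file with no header or trailer records:

def pixel_offset(fmt, row, col, band, n_rows, n_cols, n_bands, bytes_per_value=1):
    """Byte offset of one pixel value in an idealized unblocked,
    headerless file stored in BIL, BSQ, or BIP order."""
    if fmt == "BIL":    # line 1 bands 1..x, line 2 bands 1..x, ...
        index = (row * n_bands + band) * n_cols + col
    elif fmt == "BSQ":  # all of band 1, then all of band 2, ...
        index = (band * n_rows + row) * n_cols + col
    elif fmt == "BIP":  # pixel 1 bands 1..x, pixel 2 bands 1..x, ...
        index = (row * n_cols + col) * n_bands + band
    else:
        raise ValueError("fmt must be BIL, BSQ, or BIP")
    return index * bytes_per_value

Real products add headers, trailers, and blocking, so offsets in practice also depend on the information read from the header files described below.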

Storage Media

Today, most raster data are available on a variety of storage media to meet the needs of users, depending on the system hardware and devices available. When ordering data, it is sometimes possible to select the type of media preferred. The most common forms of storage media are discussed in the following section:

• 9-track tape

• 4 mm tape

• 8 mm tape

• 1/4” cartridge tape

• CD-ROM/optical disk

• DVD-ROM

Other types of storage media are:

• floppy disk (3.5” or 5.25”)

• film, photograph, or paper


• videotape

Tape

The data on a tape can be divided into logical records and physical records. A record is the basic storage unit on a tape.

• A logical record is a series of bytes that form a unit. For example, all the data for one line of an image may form a logical record.

• A physical record is a consecutive series of bytes on a magnetic tape, followed by a gap, or blank space, on the tape.

Blocked Data

For reasons of efficiency, data can be blocked to fit more on a tape. Blocked data are sequenced so that there are more logical records in each physical record. The number of logical records in each physical record is the blocking factor. For instance, a physical record may contain 28,000 bytes for an image with only 4,000 columns, because a blocking factor of 7 packs seven 4,000-byte lines into each physical record.

Tape Contents

Tapes are available in a variety of sizes and storage capacities. To obtain information about the data on a particular tape, read the tape label or box, or read the header file. Often, there is limited information on the outside of the tape. Therefore, it may be necessary to read the header files on each tape for specific information, such as:

• number of tapes that hold the data set

• number of columns (in pixels)

• number of rows (in pixels)

• data storage format—BIL, BSQ, BIP

• pixel depth—4-bit, 8-bit, 10-bit, 12-bit, 16-bit

• number of bands

• blocking factor

• number of header files and header records

4 mm Tapes

The 4 mm tape is a relative newcomer in the world of GIS. This tape is a mere 2” × .75” in size, but it can hold up to 2 Gb of data. This petite cassette offers an obvious shipping and storage advantage because of its size.


8 mm Tapes

The 8 mm tape offers the advantage of storing vast amounts of data. Tapes are available in 5 and 10 Gb storage capacities (although some tape drives cannot handle the 10 Gb size). The 8 mm tape is a 2.5” × 4” cassette, which makes it easy to ship and handle.

1/4” Cartridge Tapes

This tape format falls between the 8 mm and 9-track in physical size and storage capacity. The tape is approximately 4” × 6” in size and stores up to 150 Mb of data.

9-Track Tapes

A 9-track tape is an older format that was the standard for two decades. It is a large circular tape approximately 10” in diameter. It requires a 9-track tape drive as a peripheral device for retrieving data. The size and storage capability make 9-track less convenient than 8 mm or 1/4” tapes. However, 9-track tapes are still widely used.

A single 9-track tape may be referred to as a volume. The complete set of tapes that contains one image is referred to as a volume set.

The storage format of a 9-track tape in binary format is described by the number of bits per inch, bpi, on the tape. The tapes most commonly used have either 1600 or 6250 bpi. The number of bits per inch on a tape is also referred to as the tape density. Depending on the length of the tape, 9-tracks can store between 120 and 150 Mb of data.

CD-ROM

Data such as ADRG and Digital Line Graphs (DLG) are most often available on CD-ROM, although many types of data can be requested in CD-ROM format. A CD-ROM is an optical read-only storage device which can be read with a CD player. CD-ROMs offer the advantage of storing large amounts of data in a small, compact device. Up to 644 Mb can be stored on a CD-ROM. Also, since this device is read-only, it protects the data from accidentally being overwritten, erased, or changed from its original integrity. This is the most stable of the current media storage types, and data stored on CD-ROM are expected to last for decades without degradation.

DVD-ROM

DVD-ROM is an optical disk storage device which is read by a DVD drive in a computer. A single-sided, one-layered disk has 4.7 Gb storage capacity. DVDs are available in single-sided or double-sided format, and each side can have one or two layers. Double-sided, single-layer, and single-sided, double-layer DVDs can store about 8.5 Gb. Double-sided, double-layer DVDs can store about 15.9 Gb. Development of next-generation DVDs continues.


Calculating Disk Space

To calculate the amount of disk space a raster file requires on an ERDAS IMAGINE system, use the following formula:

[(x × y × b) × n] × 1.4 = output file size

Where:

y = rows
x = columns
b = number of bytes per pixel
n = number of bands
1.4 adds 30% to the file size for pyramid layers and 10% for miscellaneous adjustments, such as histograms, lookup tables, and so forth.

NOTE: This output file size is approximate. See Pyramid Layers on page 162 for more information.

For example, to load a 3 band, 16-bit file with 500 rows and 500 columns, about 2,100,000 bytes of disk space is needed:

[((500 × 500) × 2) × 3] × 1.4 = 2,100,000 bytes, or about 2.1 Mb

Bytes Per Pixel

The number of bytes per pixel is listed below:

4-bit data: 0.5
8-bit data: 1.0
16-bit data: 2.0

NOTE: On the PC, disk space is shown in bytes. On the workstation, disk space is shown as kilobytes (1,024 bytes).
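The formula translates directly into code. A small sketch reproducing the worked example above (the function name is illustrative):

def output_file_size(rows, cols, bytes_per_pixel, n_bands):
    """Approximate .img disk usage: [(x * y * b) * n] * 1.4, where the
    1.4 covers pyramid layers, histograms, lookup tables, and so forth."""
    return round(rows * cols * bytes_per_pixel * n_bands * 1.4)

# The 3 band, 16-bit, 500 x 500 example from the text.
print(output_file_size(500, 500, 2, 3))   # -> 2100000, about 2.1 Mb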

ERDAS IMAGINE Format (.img)

In ERDAS IMAGINE, file name extensions identify the file type. When data are imported into ERDAS IMAGINE, they are converted to the ERDAS IMAGINE file format and stored in image files. ERDAS IMAGINE image files (.img) can contain two types of raster layers:

• thematic

• continuous

An image file can store a combination of thematic and continuous layers, or just one type.



Figure 13: Image Files Store Raster Layers (an image file (.img) contains one or more raster layers, each either thematic or continuous)

ERDAS Version 7.5 Users

When a GIS file from Version 7.5 is imported, it becomes an image file with one thematic raster layer. When a LAN file is imported, each band becomes a continuous raster layer within an image file.

Thematic Raster Layer

Thematic data are raster layers that contain qualitative, categorical information about an area. A thematic layer is contained within an image file. Thematic layers lend themselves to applications in which categories or themes are used. Thematic raster layers are used to represent data measured on a nominal or ordinal scale, such as:

• soils

• land use

• land cover

• roads

• hydrology

NOTE: Thematic raster layers are displayed as pseudo color layers.



Figure 14: Example of a Thematic Raster Layer (soils)

See "Image Display" on page 145 for information on displaying thematic raster layers.

Continuous Raster Layer

Continuous data are raster layers that contain quantitative (measuring a characteristic on an interval or ratio scale) and related, continuous values. Continuous raster layers can be multiband (for example, Landsat TM data) or single band (for example, SPOT panchromatic data). The following types of data are examples of continuous raster layers:

• Landsat

• SPOT

• digitized (scanned) aerial photograph

• DEM

• slope

• temperature

NOTE: Continuous raster layers can be displayed as either a gray scale raster layer or a true color raster layer.



Figure 15: Examples of Continuous Raster Layers (a Landsat TM image and a DEM)

Tiled Data

Data in the .img format are tiled data. Tiled data are stored in tiles that can be set to any size.

The default tile size for image files is 512 × 512 pixels.

Image File Contents

The image files contain the following additional information about the data:

• the data file values

• statistics

• lookup tables

• map coordinates

• map projection

This additional information can be viewed using the Image Information function located on the Viewer’s tool bar.

Statistics

In ERDAS IMAGINE, the file statistics are generated from the data file values in the layer and incorporated into the image file. This statistical information is used to create many program defaults, and helps you make processing decisions.



Pyramid Layers

Sometimes a large image takes longer than normal to display in the Viewer. The pyramid layer option enables you to display large images faster. Pyramid layers are image layers which are successively reduced by powers of 2 and resampled.
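The idea can be sketched as repeated reduction by powers of 2. The block-averaging resampling and the minimum layer size below are assumptions made for the example, not necessarily the resampling ERDAS IMAGINE uses.

import numpy as np

def pyramid_layers(image, min_size=64):
    """Build successively coarser layers, each half the size of the
    previous one, using simple 2 x 2 block averaging."""
    layers = [image.astype(np.float64)]
    while min(layers[-1].shape) // 2 >= min_size:
        a = layers[-1]
        r, c = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2   # trim odd edges
        layers.append(a[:r, :c].reshape(r // 2, 2, c // 2, 2).mean(axis=(1, 3)))
    return layers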

The Pyramid Layer option is available in the Image Information function located on the Viewer’s tool bar, and from the Import function.

See "Image Display" on page 145 for more information on pyramid layers. See the On-Line Help for detailed information on ERDAS IMAGINE file formats.

Image File Organization

Data are easy to locate if the data files are well organized. Well organized files also make data more accessible to anyone who uses the system. Using consistent naming conventions and the ERDAS IMAGINE Image Catalog helps keep image files well organized and accessible.

Consistent Naming Convention

Many processes create an output file, and every time a file is created, it is necessary to assign a file name. The name that is used can either cause confusion about the process that has taken place, or it can clarify and give direction. For example, if the name of the output file is image.img, it is difficult to determine the contents of the file. On the other hand, if a standard nomenclature is developed in which the file name refers to a process or contents of the file, it is possible to determine the progress of a project and contents of a file by examining the directory.

Develop a naming convention that is based on the contents of the file. This helps everyone involved know what the file contains. For example, in a project to create a map composition for Lake Lanier, a directory for the files may look similar to the one below:

lanierTM.img
lanierSPOT.img
lanierSymbols.ovr
lanierlegends.map.ovr
lanierScalebars.map.ovr
lanier.map
lanier.plt
lanier.gcc
lanierUTM.img


From this listing, one can make some educated guesses about the contents of each file based on naming conventions used. For example, lanierTM.img is probably a Landsat TM scene of Lake Lanier. The file lanier.map is probably a map composition that has map frames with lanierTM.img and lanierSPOT.img data in them. The file lanierUTM.img was probably created when lanierTM.img was rectified to a UTM map projection.

Keeping Track of Image Files

Using a database to store information about images enables you to track image files (.img) without having to know the name or location of the file. The database can be queried for specific parameters (for example, size, type, map projection) and the database returns a list of image files that match the search criteria. This file information helps to quickly determine which image(s) to use, where it is located, and its ancillary data. An image database is especially helpful when there are many image files and even many on-going projects. For example, you could use the database to search for all of the image files of Georgia that have a UTM map projection.

Use the ERDAS IMAGINE Image Catalog to track and store information for image files (.img) that are imported and created in ERDAS IMAGINE.

NOTE: All information in the Image Catalog database, except archive information, is extracted from the image file header. Therefore, if this information is modified in the Image Information utility, it is necessary to recatalog the image in order to update the information in the Image Catalog database.

ERDAS IMAGINE Image Catalog

The ERDAS IMAGINE Image Catalog database is designed to serve as a library and information management system for image files (.img) that are imported and created in ERDAS IMAGINE. The information for the image files is displayed in the Image Catalog CellArray™. This CellArray enables you to view all of the ancillary data for the image files in the database. When records are queried based on specific criteria, the image files that match the criteria are highlighted in the CellArray. It is also possible to graphically view the coverage of the selected image files on a map in a canvas window.

When it is necessary to store some data on a tape, the ERDAS IMAGINE Image Catalog database enables you to archive image files to external devices. The Image Catalog CellArray shows which tape the image file is stored on, and the file can be easily retrieved from the tape device to a designated disk directory. The archived image files are copies of the files on disk—nothing is removed from the disk. Once the file is archived, it can be removed from the disk, if you like.


Geocoded Data

Geocoding, also known as georeferencing, is the geographical registration or coding of the pixels in an image. Geocoded data are images that have been rectified to a particular map projection and pixel size.

Raw, remotely-sensed image data are gathered by a sensor on a platform, such as an aircraft or satellite. In this raw form, the image data are not referenced to a map projection. Rectification is the process of projecting the data onto a plane and making them conform to a map projection system. It is possible to geocode raw image data with the ERDAS IMAGINE rectification tools. Geocoded data are also available from Space Imaging EOSAT and SPOT.

See"Map Projections" on page 297 for detailed information on the different projections available. See "Rectification" on page 251 for information on geocoding raw imagery with ERDAS IMAGINE.

Using Image Data in GIS

ERDAS IMAGINE provides many tools designed to extract the necessary information from the images in a database. The following chapters in this book describe many of these processes. This section briefly describes some basic image file techniques that may be useful for any application.

Subsetting and Mosaicking

Within ERDAS IMAGINE, there are options available to make additional image files from those acquired from EOSAT, SPOT, and so forth. These options involve combining files, mosaicking, and subsetting. ERDAS IMAGINE programs allow image data with an unlimited number of bands, but the most common satellite data types—Landsat and SPOT—have seven or fewer bands. Image files can be created with more than seven bands.

It may be useful to combine data from two different dates into one file. This is called multitemporal imagery. For example, a user may want to combine Landsat TM from one date with TM data from a later date, then perform a classification based on the combined data. This is particularly useful for change detection studies.

You can also incorporate elevation data into an existing image file as another band, or create new bands through various enhancement techniques.


To combine two or more image files, each file must be georeferenced to the same coordinate system, or to each other. See "Rectification" on page 251 for information on georeferencing images.

Subset

Subsetting refers to breaking out a portion of a large file into one or more smaller files. Often, image files contain areas much larger than a particular study area. In these cases, it is helpful to reduce the size of the image file to include only the area of interest (AOI). This not only eliminates the extraneous data in the file, but it speeds up processing due to the smaller amount of data to process. This can be important when dealing with multiband data.

The ERDAS IMAGINE Import option often lets you define a subset area of an image to preview or import. You can also use the Subset option from ERDAS IMAGINE Image Interpreter to define a subset area.

Mosaic

On the other hand, the study area in which you are interested may span several image files. In this case, it is necessary to combine the images to create one large file. This is called mosaicking.

To create a mosaicked image, use the Mosaic Images option from the Data Preparation menu.

Enhancement

Image enhancement is the process of making an image more interpretable for a particular application (Faust, 1989). Enhancement can make important features of raw, remotely sensed data and aerial photographs more interpretable to the human eye. Enhancement techniques are often used instead of classification for extracting useful information from images.

There are many enhancement techniques available. They range in complexity from a simple contrast stretch, where the original data file values are stretched to fit the range of the display device, to principal components analysis, where the number of image file bands can be reduced and new bands created to account for the most variance in the data.


See "Enhancement" on page 455 for more information on enhancement techniques.

Multispectral Classification

Image data are often used to create thematic files through multispectral classification. This entails using spectral pattern recognition to identify groups of pixels that represent a common characteristic of the scene, such as soil type or vegetation.

See "Classification" on page 545 for a detailed explanation of classification procedures.

Editing Raster Data

ERDAS IMAGINE provides raster editing tools for editing the data values of thematic and continuous raster data. This is primarily a correction mechanism that enables you to correct bad data values which produce noise, such as spikes and holes in imagery. The raster editing functions can be applied to the entire image or a user-selected area of interest (AOI).

With raster editing, data values in thematic data can also be recoded according to class. Recoding is a function that reassigns data values to a region or to an entire class of pixels.

See "Geographic Information Systems" on page 173 for information about recoding data. See "Enhancement" on page 455 for information about reducing data noise using spatial filtering.

The ERDAS IMAGINE raster editing functions allow the use of focal and global spatial modeling functions for computing the values to replace noisy pixels or areas in continuous or thematic data.

Focal operations are filters that calculate the replacement value based on a window (3 × 3, 5 × 5, and so forth), and replace the pixel of interest with the replacement value. Therefore this function affects one pixel at a time, and the number of surrounding pixels that influence the value is determined by the size of the moving window.

Global operations calculate the replacement value for an entire area rather than affecting one pixel at a time. These functions, specifically the Majority option, are more applicable to thematic data.

See the ERDAS IMAGINE On-Line Help for information about using and selecting AOIs.


The raster editing tools are available in the Viewer.

Editing Continuous (Athematic) Data

Editing DEMs

DEMs occasionally contain spurious pixels or bad data. These spikes, holes, and other noise caused by automatic DEM extraction can be corrected by editing the raster data values and replacing them with meaningful values. This discussion of raster editing focuses on DEM editing.

The ERDAS IMAGINE Raster Editing functionality was originally designed to edit DEMs, but it can also be used with images of other continuous data sources, such as radar, SPOT, Landsat, and digitized photographs.

When editing continuous raster data, you can modify or replace original pixel values with the following:

• a constant value—enter a known constant value for areas such as lakes.

• the average of the buffering pixels—replace the original pixel value with the average of the pixels in a specified buffer area around the AOI. This is used where the constant values of the AOI are not known, but the area is flat or homogeneous with little variation (for example, a lake).

• the original data value plus a constant value—add a negative constant value to the original data values to compensate for the height of trees and other vertical features in the DEM. This technique is commonly used in forested areas.

• spatial filtering—filter data values to eliminate noise such as spikes or holes in the data.

• interpolation techniques (discussed below).

Interpolation Techniques

While the previously listed raster editing techniques are perfectly suitable for some applications, the following interpolation techniques provide the best methods for raster editing:

• 2-D polynomial—surface approximation

• multisurface functions—with least squares prediction


• distance weighting

Each pixel’s data value is interpolated from the reference points in the data file. These interpolation techniques are described below:

2-D Polynomial

This interpolation technique provides faster interpolation calculations than distance weighting and multisurface functions. The following equation is used:

V = a0 + a1x + a2y + a3x² + a4xy + a5y² + . . .

Where:

V = data value (elevation value for DEM)
a = polynomial coefficients
x = x coordinate
y = y coordinate
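In practice the coefficients are found by fitting the surface to the reference points. A minimal least-squares sketch for the second-order case (the function names are illustrative, not IMAGINE functions):

import numpy as np

def fit_polynomial_surface(x, y, v):
    """Fit V = a0 + a1*x + a2*y + a3*x**2 + a4*x*y + a5*y**2 to the
    reference points (x, y, v) by least squares; x, y, v are 1-D arrays."""
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, v, rcond=None)
    return coeffs

def eval_polynomial_surface(coeffs, x, y):
    """Evaluate the fitted surface at (x, y)."""
    a0, a1, a2, a3, a4, a5 = coeffs
    return a0 + a1 * x + a2 * y + a3 * x**2 + a4 * x * y + a5 * y**2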

Multisurface Functions

The multisurface technique provides the most accurate results for editing DEMs that have been created through automatic extraction. The following equation is used:

V = Σ WiQi

Where:

V = output data value (elevation value for DEM)
Wi = coefficients which are derived by the least squares method
Qi = distance-related kernels which are actually interpretable as continuous single value surfaces

Source: Wang, Z., 1990

Distance Weighting

The weighting function determines how the output data values are interpolated from a set of reference data points. For each pixel, the values of all reference points are weighted by a value corresponding with the distance between each point and the pixel. The weighting function used in ERDAS IMAGINE is:

W = (S/D − 1)²


Where:

S = normalization factor
D = distance from output data point to reference point

The value for any given pixel is calculated by taking the sum of weighting factors for all reference points multiplied by the data values of those points, and dividing by the sum of the weighting factors:

V = (Σ WiVi) / (Σ Wi), with the sums taken over i = 1 to n

Where:

V = output data value (elevation value for DEM)
i = ith reference point
Wi = weighting factor of point i
Vi = data value of point i
n = number of reference points

Source: Wang, Z., 1990
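Putting the two formulas together gives a small interpolator. The guide defines S only as a normalization factor; the sketch below treats it as a search radius, so weights fall to zero at distance S. That reading is an assumption made for the example, not a documented part of the IMAGINE implementation.

import numpy as np

def distance_weighted_value(px, py, ref_x, ref_y, ref_v, s):
    """Interpolate the value at (px, py) from n reference points using
    W = (S/D - 1)**2 and V = sum(W * V_ref) / sum(W). ref_x, ref_y,
    ref_v are 1-D NumPy arrays; s is treated as a search radius here."""
    d = np.hypot(ref_x - px, ref_y - py)
    if (d == 0).any():
        return float(ref_v[d.argmin()])   # pixel coincides with a reference point
    w = np.where(d < s, (s / d - 1.0) ** 2, 0.0)
    if w.sum() == 0:
        return float(ref_v[d.argmin()])   # nothing within the radius; use nearest
    return float((w * ref_v).sum() / w.sum())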

Image Compression

Dynamic Range Run-Length Encoding (DR RLE)

The IMAGINE IMG format can use a simple lossless form of compression that can be referred to as Dynamic Range Run-Length Encoding (DR RLE). This combines two different forms of compression to give a quick and effective reduction of dataset size. This compression is most effective on thematic data, but can also be effective on continuous data with low variance or low dynamic range.



Dynamic Range refers to the span from the minimum pixel value to the maximum pixel value found in a dataset. The dynamic range of the data is often several times smaller than the range of the pixel type in which it is stored. This can be computed by taking the difference of the maximum (VMAX) and the minimum (VMIN) value, plus one: RDYNAMIC = VMAX − VMIN + 1. For example, 8 bit data with a minimum value of 1 and a maximum value of 254 would have a dynamic range of 254. 16 bit data with a minimum value of 1023 and a maximum value of 1278 would have a dynamic range of 256.

If the dynamic range is less than the natural range of the data type, then a smaller data type can be used to store the data with a resulting savings in space. In the second case above, the dynamic range of the data was 256, but the natural range of 16 bit data (two bytes) is 65536. In this case a single byte can be used to store the data by computing a compressed value (VCOMPRESSED) by subtracting the minimum value (VMIN) from the pixel value (VPIXEL). The data can then be saved with one value per byte (instead of one value per two bytes). To recover the data, the minimum is added to the compressed value: VPIXEL = VCOMPRESSED + VMIN. This, of course, requires the minimum value (VMIN) to be saved along with the data.

Run-Length Encoding is a compression technique based on the observation that often there are sequential occurrences (runs) of pixel values in an image. In this case, space can often be saved by counting the number of repeats (NRUN) of a value (VRUN) that occur in succession, and then storing the count and only one occurrence of the value. For example, if the pixel value 0 occurred 100 times in a row, then the value (VRUN) would be 0 and the count (NRUN) would be 100. If a single byte is then used to store each of the count and the value, then 2 bytes would be used instead of 100. Run-Length Encoded data are stored as a sequence of pairs that consist of the count and the value (NRUN, VRUN).

By first applying Dynamic Range compression and then Run-Length Encoding, a high degree of lossless compression can be obtained for many types of data. By operating on a block of data at a time, the Dynamic Range compression can have a greater effect because the data within a block is often more similar in range than the data across the whole image. Note that it is possible to produce a greater amount of data when applying Dynamic Range Run-Length Encoding. Under these circumstances, the data will be stored as the original uncompressed block.

The compressed data for each block is stored as follows:

Name               Type             Description

(Repeated only once per block at the front of the data stream)
Min                EMIF_T_LONG      The minimum value observed in the block of data.
Numsegments        EMIF_T_LONG      The number of runs in the block.
Dataoffset         EMIF_T_LONG      The number of bytes after the start of this data at which the segment data starts.
Numbitspervalue    EMIF_T_UCHAR     The number of bits used per data value: 1, 2, 4, 8, 16, or 32.

(Segment contents repeated “numsegments” times for the block)
Countnumbytes      EMIF_T_UCHAR     The number of bytes for the count: 1, 2, 3, or 4.
Count[0]           EMIF_T_UCHAR     Present for countnumbytes = 1, 2, 3, 4
Count[1]           EMIF_T_UCHAR     Present for countnumbytes = 2, 3, 4
Count[2]           EMIF_T_UCHAR     Present for countnumbytes = 3, 4
Count[3]           EMIF_T_UCHAR     Present for countnumbytes = 4
Data[0]            EMIF_T_UCHAR     Present for numbitspervalue = 1, 2, 4, 8, 16, or 32
Data[1]            EMIF_T_UCHAR     Present for numbitspervalue = 16 or 32
Data[2]            EMIF_T_UCHAR     Present for numbitspervalue = 32
Data[3]            EMIF_T_UCHAR     Present for numbitspervalue = 32
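The two stages can be sketched in a few lines. This illustrative Python version returns the minimum and the (count, value) pairs rather than writing the EMIF byte layout shown above.

import numpy as np

def dr_rle_encode(block):
    """Compress one block: dynamic-range shift, then run-length encoding.
    Returns (v_min, runs), where runs is a list of (count, value) pairs
    and each stored value is the pixel value minus v_min."""
    flat = np.asarray(block).ravel()
    v_min = int(flat.min())            # saved with the data for decoding
    shifted = flat - v_min             # dynamic-range compression
    runs = []
    i = 0
    while i < len(shifted):
        j = i
        while j + 1 < len(shifted) and shifted[j + 1] == shifted[i]:
            j += 1
        runs.append((j - i + 1, int(shifted[i])))   # (NRUN, VRUN)
        i = j + 1
    return v_min, runs

def dr_rle_decode(v_min, runs):
    """Expand the runs and add the minimum back to recover the pixels."""
    return np.concatenate([np.full(n, v + v_min) for n, v in runs])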


ECW Compression

Enhanced Compressed Wavelet (ECW) format significantly reduces the size of image files with minimal deterioration in quality. Wavelet compression technology offers very high quality results at high compression rates. You can typically compress a color image to less than 5% of its original size (20:1 compression ratio) and compress a grayscale image to less than 10% of its original size (10:1 compression ratio). This means that, at 20:1 compression, 10 GB of color imagery compresses down to 500 MB, which is small enough to fit on a single CD-ROM. You may achieve higher compression rates where your source image has a structure well suited to compression.

ECW compression is more efficient when it is used to compress large image files. The minimum size of image that can be compressed by the ECW method is 128 × 128 pixels.

Because the compressed imagery is composed of multi-resolution wavelet levels, you can experience fast roaming and zooming on the imagery, even on slower media such as CD-ROM.



In addition to reducing storage requirements, compressed imagery can be read by a wide range of software applications using free imagery plug-ins for GIS and office applications.

Specify Quality Level rather than Output File Size

The concept of ECW and JPEG2000 format compression is that you are compressing to a specified quality level rather than a specified file size. Choose a level of quality that benefits your needs, and use the target compression ratio to compress images within that quality range. The goal is visual similarity in quality levels between multiple files, not similar file size.

ECW Compression Ratios

When exporting to ECW images, you select a target compression ratio. This is a target only, and the actual amount of compression varies depending on the image qualities and amount of spatial variation. Recommended values are 1 to 40 for color images and 1 to 25 for grayscale. Higher values give more file size compression; lower values give better image quality.

What is Target Compression Ratio

When compressing images there is a tradeoff between the degree of compression achieved and the quality of the resulting image when it is subsequently decoded. The highest rates of compression can only be achieved by discarding some less important data from the image, known as lossy compression. The target compression ratio is an abstract figure representing your choice in this tradeoff. It approximates the likely ratio of input file size to output file size given certain parameters for compression.

It is important to note that the target ratio makes no guarantees about the actual output size that will be achieved, because this is dependent on the nature of the input data. Images with certain features (for example, air photos showing large regions of a similar color like oceans or forests) are easier to compress than others (completely random images). However, in typical cases the actual rate of compression obtained will be greater than the target rate.

Except when compressing very small files (less than 2 MB in size), the actual compression ratio achieved is often significantly larger than the target compression ratio set by the user. The reason for this is as follows.


Preserve Image Quality

When you specify a Target Compression Ratio, the compression engine uses this value as a measure of how much information content to preserve in the image. If your image has areas that are conducive to compression (for example, desert areas or bodies of water), a greater rate of compression may be achieved while still keeping the desired information content and quality. The compression engine uses multiple wavelet encoding techniques simultaneously, and adapts the best techniques depending on the area being compressed. It is important to understand that encoding techniques are applied after image quantization and do not affect the quality, even though the compression ratio may be higher than that which was requested.


Vector Data

Introduction

ERDAS IMAGINE is designed to integrate two data types, raster and vector, into one system. While the previous chapter explored the characteristics of raster data, this chapter is focused on vector data. The vector data structure in ERDAS IMAGINE is based on the ArcInfo data model (developed by ESRI, Inc.). This chapter describes vector data, attribute information, and symbolization.

You do not need ArcInfo software or an ArcInfo license to use the vector capabilities in ERDAS IMAGINE. Since the ArcInfo data model is used in ERDAS IMAGINE, you can use ArcInfo coverages directly without importing them.

See "Geographic Information Systems" on page 173 for information on editing vector layers and using vector data in a GIS.

Vector data consist of:

• points

• lines

• polygons

Each is illustrated in Figure 16.

Figure 16: Vector Elements (points; a line with nodes and vertices; polygons with label points)



Points

A point is represented by a single x, y coordinate pair. Points can represent the location of a geographic feature or a point that has no area, such as a mountain peak. Label points are also used to identify polygons (see Figure 17).

Lines

A line (polyline) is a set of line segments and represents a linear geographic feature, such as a river, road, or utility line. Lines can also represent nongeographical boundaries, such as voting districts, school zones, contour lines, etc.

Polygons

A polygon is a closed line or closed set of lines defining a homogeneous area, such as soil type, land use, or water body. Polygons can also be used to represent nongeographical features, such as wildlife habitats, state borders, commercial districts, etc. Polygons also contain label points that identify the polygon. The label point links the polygon to its attributes.

Vertex

The points that define a line are vertices. A vertex is a point that defines an element, such as the endpoint of a line segment or a location in a polygon where the line segment defining the polygon changes direction. The ending points of a line are called nodes. Each line has two nodes: a from-node and a to-node. The from-node is the first vertex in a line. The to-node is the last vertex in a line. Lines join other lines only at nodes. A series of lines in which the from-node of the first line joins the to-node of the last line is a polygon.

Figure 17: Vertices (a line and a polygon, each defined by three vertices; the polygon includes a label point)

In Figure 17, the line and the polygon are each defined by three vertices.
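These relationships map naturally onto simple data structures. The sketch below is purely illustrative; the class and field names are hypothetical and do not reflect the ArcInfo file layout described later in this chapter.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Point:
    """A single x, y coordinate pair."""
    x: float
    y: float

@dataclass
class Line:
    """A set of line segments defined by vertices; the first vertex is
    the from-node and the last vertex is the to-node."""
    vertices: List[Point]

    @property
    def from_node(self) -> Point:
        return self.vertices[0]

    @property
    def to_node(self) -> Point:
        return self.vertices[-1]

@dataclass
class Polygon:
    """A closed set of lines plus a label point that links the polygon
    to its attributes."""
    lines: List[Line]
    label_point: Optional[Point] = None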

Coordinates

Vector data are expressed by the coordinates of vertices. The vertices that define each element are referenced with x, y, or Cartesian, coordinates. In some instances, those coordinates may be inches [as in some computer-aided design (CAD) applications], but often the coordinates are map coordinates, such as State Plane, Universal Transverse Mercator (UTM), or Lambert Conformal Conic. Vector data digitized from an ungeoreferenced image are expressed in file coordinates.



Tics

Vector layers are referenced to coordinates or a map projection system using tic files that contain geographic control points for the layer. Every vector layer must have a tic file. Tics are not topologically linked to other features in the layer and do not have descriptive data associated with them.

Vector Layers

Although it is possible to have points, lines, and polygons in a single layer, a layer typically consists of one type of feature. It is possible to have one vector layer for streams (lines) and another layer for parcels (polygons). A vector layer is defined as a set of features where each feature has a location (defined by coordinates and topological pointers to other features) and, possibly, attributes (defined as a set of named items or variables) (ESRI, 1989). Vector layers contain both the vector features (points, lines, polygons) and the attribute information.

Usually, vector layers are also divided by the type of information they represent. This enables the user to isolate data into themes, similar to the themes used in raster layers. Political districts and soil types would probably be in separate layers, even though both are represented with polygons. If the project requires that the coincidence of features in two or more layers be studied, the user can overlay them or create a new layer.

See "Geographic Information Systems" on page 173 for more information about analyzing vector layers.

Topology

The spatial relationships between features in a vector layer are defined using topology. In topological vector data, a mathematical procedure is used to define connections between features, identify adjacent polygons, and define a feature as a set of other features (e.g., a polygon is made of connecting lines) (Environmental Systems Research Institute, 1990).

Topology is not automatically created when a vector layer is created. It must be added later using specific functions. Topology must also be updated after a layer is edited.

Digitizing on page 49 describes how topology is created for a new or edited vector layer.

Vector Files

As mentioned above, the ERDAS IMAGINE vector structure is based on the ArcInfo data model used for ARC coverages. This georelational data model is actually a set of files using the computer’s operating system for file management and input/output. An ERDAS IMAGINE vector layer is stored in subdirectories on the disk. Vector data are represented by a set of logical tables of information, stored as files within the subdirectory. These files may serve the following purposes:


• define features

• provide feature attributes

• cross-reference feature definition files

• provide descriptive information for the coverage as a whole

A workspace is a location that contains one or more vector layers. Workspaces provide a convenient means for organizing layers into related groups. They also provide a place for the storage of tabular data not directly tied to a particular layer. Each workspace is completely independent. It is possible to have an unlimited number of workspaces and an unlimited number of vector layers in a workspace. Table 1 summarizes the types of files that are used to make up vector layers.

Figure 18 illustrates how a typical vector workspace is set up (Environmental Systems Research Institute, 1992).

Table 1: Description of File Types

File Type    File Description

Feature Definition Files:
  ARC    Line coordinates and topology
  CNT    Polygon centroid coordinates
  LAB    Label point coordinates and topology
  TIC    Tic coordinates

Feature Attribute Files:
  AAT    Line (arc) attribute table
  PAT    Polygon or point attribute table

Feature Cross-Reference File:
  PAL    Polygon/line/node cross-reference file

Layer Description Files:
  BND    Coordinate extremes
  LOG    Layer history file
  PRJ    Coordinate definition file
  TOL    Layer tolerance file


Figure 18: Workspace Structure

Because vector layers are stored in directories rather than in simple files, you MUST use the utilities provided in ERDAS IMAGINE to copy and rename them. A utility is also provided to update path names that are no longer correct due to the use of regular system commands on vector layers.

See the ESRI documentation for more detailed information about the different vector files.

Attribute Information

Along with points, lines, and polygons, a vector layer can have a wealth of descriptive, or attribute, information associated with it. Attribute information is displayed in CellArrays. This is the same information that is stored in the INFO database of ArcInfo. Some attributes are automatically generated when the layer is created. Custom fields can be added to each attribute table. Attribute fields can contain numerical or character data. The attributes for a roads layer may look similar to the example in Figure 19. You can select features in the layer based on the attribute information. Likewise, when a row is selected in the attribute CellArray, that feature is highlighted in the Viewer.
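For instance, a selection like the one below could be expressed against such a table. This is a minimal sketch in Python with made-up field names, not the actual AAT/PAT schema:

```python
# Minimal sketch of attribute-based feature selection, assuming a roads
# attribute table like the one shown in Figure 19; the field names and
# values are illustrative only.
roads = [
    {"ID": 1, "CLASS": "highway", "LANES": 4},
    {"ID": 2, "CLASS": "street",  "LANES": 2},
    {"ID": 3, "CLASS": "highway", "LANES": 6},
]

# Rows matching the criteria would be highlighted in the CellArray,
# and the corresponding features highlighted in the Viewer.
selected = [row for row in roads if row["CLASS"] == "highway"]
```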



Figure 19: Attribute CellArray

Using Imported Attribute Data When external data types are imported into ERDAS IMAGINE, only the required attribute information is imported into the attribute tables (AAT and PAT files) of the new vector layer. The rest of the attribute information is written to one of the following INFO files:

• <layer name>.ACODE—arc attribute information

• <layer name>.PCODE—polygon attribute information

• <layer name>.XCODE—point attribute information

To utilize all of this attribute information, the INFO files can be merged into the PAT and AAT files. Once this attribute information has been merged, it can be viewed in CellArrays and edited as desired. This new information can then be exported back to its original format. The complete path of the file must be specified when establishing an INFO file name in a Viewer application, such as exporting attributes or merging attributes, as shown in the following example:

/georgia/parcels/info!arc!parcels.pcode

Use the Show Attributes option in the IMAGINE Workspace to view and manipulate vector attribute data, including merging and exporting.

See the ERDAS IMAGINE On-Line Help for more information about using CellArrays.


Displaying Vector Data

Vector data are displayed in Viewers, as are other data types in ERDAS IMAGINE. You can display a single vector layer, overlay several layers in one Viewer, or display one or more vector layers over one or more raster layers. In layers that contain more than one feature type (a combination of points, lines, and polygons), you can select which features to display. For example, if you are studying parcels, you may want to display only the polygons in a layer that also contains street centerlines (lines).

Color Schemes Vector data are usually assigned class values in the same manner as the pixels in a thematic raster file. These class values correspond to different colors on the display screen. As with a pseudo color image, you can assign a color scheme for displaying the vector classes.

See "Image Display" on page 145 for a thorough discussion of how images are displayed.

Symbolization Vector layers can be displayed with symbolization, meaning that the attributes can be used to determine how points, lines, and polygons are rendered. Points, lines, polygons, and nodes are symbolized using styles and symbols similar to annotation. For example, if a point layer represents cities and towns, the appropriate symbol could be used at each point based on the population of that area.

Points Point symbolization options include symbol, size, and color. The symbols available are the same symbols available for annotation.

Lines Lines can be symbolized with varying line patterns, composition, width, and color. The line styles available are the same as those available for annotation.

Polygons Polygons can be symbolized as lines or as filled polygons. Polygons symbolized as lines can have varying line styles (see Lines on page 47). For filled polygons, either a solid fill color or a repeated symbol can be selected. When symbols are used, you select the symbol to use, the symbol size, symbol color, background color, and the x- and y-separation between symbols. Figure 20 illustrates a pattern fill.


Figure 20: Symbolization Example

See the On-Line Help for information about selecting features and using CellArrays.

Vector Data Sources

Vector data are created by:

• tablet digitizing—maps, photographs, or other hardcopy data can be digitized using a digitizing tablet

• screen digitizing—create new vector layers by using the mouse to digitize on the screen

• using other software packages—many external vector data types can be converted to ERDAS IMAGINE vector layers

• converting raster layers—raster layers can be converted to vector layers

Each of these options is discussed in a separate section.

The vector layer reflects the symbolization that is defined in the Symbology dialog.


Digitizing In the broadest sense, digitizing refers to any process that converts nondigital data into numbers. However, in ERDAS IMAGINE, the digitizing of vectors refers to the creation of vector data from hardcopy materials or raster images that are traced using a digitizer keypad on a digitizing tablet or a mouse on a displayed image. Any image not already in digital format must be digitized before it can be read by the computer and incorporated into the database. Most Landsat, SPOT, or other satellite data are already in digital format upon receipt, so it is not necessary to digitize them. However, you may also have maps, photographs, or other nondigital data that contain information you want to incorporate into the study. Or, you may want to extract certain features from a digital image to include in a vector layer. Tablet digitizing and screen digitizing enable you to digitize certain features of a map or photograph, such as roads, bodies of water, voting districts, and so forth.

Tablet Digitizing Tablet digitizing involves the use of a digitizing tablet to transfer nondigital data such as maps or photographs to vector format. The digitizing tablet contains an internal electronic grid that transmits data to ERDAS IMAGINE on cue from a digitizer keypad operated by you.

Figure 21: Digitizing Tablet

Digitizer Setup The map or photograph to be digitized is secured on the tablet, and a coordinate system is established with a setup procedure.

Digitizer Operation The handheld digitizer keypad features a small window with a crosshair and keypad buttons. Position the intersection of the crosshair directly over the point to be digitized. Depending on the type of equipment and the program being used, one of the input buttons is pushed to tell the system which function to perform, such as:

• digitize a point (i.e., transmit map coordinate data),

• connect a point to previous points,


• assign a particular value to the point or polygon, or

• measure the distance between points, etc.

Move the puck along the desired polygon boundaries or lines, digitizing points at appropriate intervals (where lines curve or change direction), until all the points are collected.

Newly created vector layers do not contain topological data. You must create topology using the Build or Clean options. This is discussed further in "Geographic Information Systems" on page 173.

Digitizing Modes There are two modes used in digitizing:

• point mode—one point is generated each time a keypad button is pressed

• stream mode—points are generated continuously at specified intervals, while the puck is in proximity to the surface of the digitizing tablet; a simple sketch of this interval-based capture follows
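The following is a minimal sketch in Python of the stream-mode idea. The tolerance value and function name are illustrative, and real tablets may also stream on a time interval rather than a distance:

```python
import math

# Minimal sketch of stream-mode digitizing: record a vertex only when
# the puck has moved a set distance from the last recorded vertex.
# The tolerance is an assumed value in map units.
STREAM_TOLERANCE = 0.5

def stream_filter(raw_positions):
    """Reduce a continuous stream of puck positions to vertices."""
    vertices = []
    for point in raw_positions:
        if not vertices or math.dist(point, vertices[-1]) >= STREAM_TOLERANCE:
            vertices.append(point)
    return vertices
```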

You can create a new vector layer from the Viewer. Select the Tablet Input function from the Viewer to use a digitizing tablet to enter new information into that layer.

Measurement The digitizing tablet can also be used to measure both linear and areal distances on a map or photograph. The digitizer puck is used to outline the areas to measure. You can measure:

• lengths and angles by drawing a line

• perimeters and areas using a polygonal, rectangular, or elliptical shape

• positions by specifying a particular point

Measurements can be saved to a file, printed, and copied. These operations can also be performed with screen digitizing.
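As an illustration of the areal measurements described above, the following sketch computes the perimeter and area of a digitized polygon from its vertices. The shoelace formula used here is a standard technique, not necessarily the exact method the Measure tool uses:

```python
import math

# Minimal sketch: perimeter and area of a digitized polygon given its
# (x, y) vertices in map units, using the standard shoelace formula.
def polygon_measurements(vertices):
    area2 = 0.0
    perimeter = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]   # wrap around to close the polygon
        area2 += x1 * y2 - x2 * y1
        perimeter += math.hypot(x2 - x1, y2 - y1)
    return abs(area2) / 2.0, perimeter

# A 100 x 50 rectangle: area 5000, perimeter 300.
area, perimeter = polygon_measurements([(0, 0), (100, 0), (100, 50), (0, 50)])
```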

Select the Measure function from the Viewer or click on the Ruler tool in the Viewer tool bar to enable tablet or screen measurement.


Screen Digitizing In screen digitizing, vector data are drawn with a mouse in the Viewer using the displayed image as a reference. These data are then written to a vector layer. Screen digitizing is used for the same purposes as tablet digitizing, such as:

• digitizing roads, bodies of water, political boundaries

• selecting training samples for input to the classification programs

• outlining an area of interest for any number of applications

Create a new vector layer from the Viewer.

Imported Vector Data

Many types of vector data from other software packages can be incorporated into the ERDAS IMAGINE system. These data formats include:

• ArcInfo GENERATE format files from ESRI, Inc.

• ArcInfo INTERCHANGE files from ESRI, Inc.

• ArcView Shapefiles from ESRI, Inc.

• Digital Line Graphs (DLG) from U.S.G.S.

• Digital Exchange Files (DXF) from Autodesk, Inc.

• ETAK MapBase files from ETAK, Inc.

• Initial Graphics Exchange Standard (IGES) files

• Intergraph Design (DGN) files from Intergraph

• Spatial Data Transfer Standard (SDTS) vector files

• Topologically Integrated Geographic Encoding and Referencing System (TIGER) files from the U.S. Census Bureau

• Vector Product Format (VPF) files from the Defense Mapping Agency

See "Raster and Vector Data Sources" on page 55 for more information on these data.


Raster to Vector Conversion

A raster layer can be converted to a vector layer and used as another layer in a vector database. The following diagram illustrates a thematic file in raster format that has been converted to vector format.

Figure 22: Raster Format Converted to Vector Format (a raster soils layer and the same layer converted to a vector polygon layer)

Most commonly, thematic raster data rather than continuous data are converted to vector format, since converting continuous layers may create more vector features than are practical or even manageable.

Convert vector data to raster data, and vice versa, using IMAGINE Vector™.
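For readers working outside IMAGINE, the same thematic raster-to-polygon conversion can be sketched with the open-source GDAL/OGR library. This is a minimal sketch with illustrative file names; it stands in for, rather than reproduces, the IMAGINE Vector utility:

```python
from osgeo import gdal, ogr

# Minimal sketch of thematic raster-to-vector conversion using GDAL's
# polygonize; file and layer names are illustrative.
src = gdal.Open("soils_thematic.img")
band = src.GetRasterBand(1)

driver = ogr.GetDriverByName("ESRI Shapefile")
dst = driver.CreateDataSource("soils_polygons.shp")
layer = dst.CreateLayer("soils", srs=None, geom_type=ogr.wkbPolygon)
layer.CreateField(ogr.FieldDefn("CLASS", ogr.OFTInteger))

# Each connected region of identical class values becomes one polygon,
# with the class value written to the CLASS attribute field.
gdal.Polygonize(band, None, layer, 0)
dst = None  # close and flush to disk
```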

Other Vector Data Types

While this chapter has focused mainly on the ArcInfo coverage format, there are other types of vector formats that you can use in ERDAS IMAGINE. The two primary types are:

• shapefile

• Spatial Database Engine (SDE)

Shapefile Vector Format The shapefile vector format was designed by ESRI. You can use the shapefile format (extension .shp) in ERDAS IMAGINE to:

• display shapefiles

• create shapefiles

• edit shapefiles



• attribute shapefiles

• symbolize shapefiles

• print shapefiles

The shapefile contains spatial data, such as boundary information.

SDE Like the shapefile format, the Spatial Database Engine (SDE) is a vector format designed by ESRI. The data layers are stored in a relational database management system (RDBMS) such as Oracle or SQL Server. Some of the features of SDE include:

• storage of large, untiled spatial layers for fast retrieval

• powerful and flexible query capabilities using the SQL where clause

• operation in a client-server environment

• multiuser access to the data

ERDAS IMAGINE has the capability to act as a client to access SDE vector layers stored in a database. To do this, it uses a wizard interface to connect ERDAS IMAGINE to an SDE database and select one of the vector layers. Additionally, it can join business tables with the vector layer, and generate a subset of features by imposing attribute constraints (e.g., an SQL where clause). The definition of the vector layer as extracted from an SDE database is stored in a <layername>.sdv file, and can be loaded as a regular ERDAS IMAGINE data file. ERDAS IMAGINE supports the SDE projection systems. Currently, ERDAS IMAGINE's SDE capability is read-only: features can be queried and AOIs can be created, but not edited.

SDTS SDTS stands for Spatial Data Transfer Standard. SDTS is used to transfer spatial data between computer systems. Such data includes attribute, georeferencing, data quality report, data dictionary, and supporting metadata. According to the USGS, the implementation of SDTS is of significant interest to users and producers of digital spatial data because of the potential for increased access to and sharing of spatial data, the reduction of information loss in data exchange, the elimination of the duplication of data acquisition, and the increase in the quality and integrity of spatial data (United States Geological Survey, 1999c).


The components of SDTS are broken down into six parts. The first three parts are related, but independent, and are concerned with the transfer of spatial data. The last three parts provide definitions for rules and formats for applying SDTS to the exchange of data. The parts of SDTS are as follows:

• Part 1—Logical Specifications

• Part 2—Spatial Features

• Part 3—ISO 8211 Encoding

• Part 4—Topological Vector Profile

• Part 5—Raster Profile

• Part 6—Point Profile

ArcGIS Integration ArcGIS Integration is the method you use to access the data in a geodatabase. The term geodatabase is the short form of geographic database. The geodatabase is hosted inside a relational database management system that provides services for managing geographic data. The services include validation rules, relationships, and topological associations. ERDAS IMAGINE has always supported ESRI data formats such as coverages and shapefiles, and now, using ArcGIS Vector Integration, ERDAS IMAGINE can also access CAD and VPF data on the internet.

There are two types of geodatabases: personal and enterprise. The personal geodatabases are for use by an individual or small group, and the enterprise geodatabases are for use by large groups. Industrial-strength host systems such as Oracle support the organizational structure of enterprise geodatabases. The organization of both personal and enterprise geodatabases starts with a workspace that contains both spatial and non-spatial datasets such as feature classes, raster datasets, and tables. An example of a feature dataset would be U.S. Agriculture. Within the datasets are feature classes. An example of a feature class would be U.S. Hydrology. Within every feature class are particular features like wells and lakes. Each feature class is symbolized by only one type of geometry, such as points symbolizing wells or polygons symbolizing lakes.

It is important to remember that when you delete a personal geodatabase connection, the entire database is deleted from disk. When you delete a database connection on an enterprise geodatabase, only the connection is broken, and nothing in the geodatabase is deleted.


Raster and Vector Data Sources

Introduction This chapter is an introduction to the most common raster and vector data types that can be used with the ERDAS IMAGINE software package. The raster data types covered include:

• visible/infrared satellite data

• radar imagery

• airborne sensor data

• scanned or digitized maps and photographs

• digital terrain models (DTMs)

The vector data types covered include:

• ArcInfo GENERATE format files

• AutoCAD Digital Exchange Files (DXF)

• United States Geological Survey (USGS) Digital Line Graphs (DLG)

• MapBase digital street network files (ETAK)

• U.S. Department of Commerce Initial Graphics Exchange Standard files (IGES)

• U.S. Census Bureau Topologically Integrated Geographic Encoding and Referencing System files (TIGER)

Importing and Exporting

Raster Data There is an abundance of data available for use in GIS today. In addition to satellite and airborne imagery, raster data sources include digital x-rays, sonar, microscopic imagery, video digitized data, and many other sources. Because of the wide variety of data formats, ERDAS IMAGINE provides two options for importing data:

• import for specific formats

• generic import for general formats


Import Table 2 lists some of the raster data formats that can be imported to, exported from, directly read from, and directly written to ERDAS IMAGINE.

There is a distinct difference between import and direct read. Import means that the data is converted from its original format into another format (for example, IMG, TIFF, or GRID Stack), which can be read directly by ERDAS IMAGINE. Direct read formats are those formats which the Viewer and many of its associated tools can read immediately without any conversion process.

NOTE: Annotation and Vector data formats are listed separately.

Table 2: Raster Data Formats

Data Type Import Export Direct Read Direct Write

ADRG • •

ADRI •

Alaska SAR Facility (.L) •

Algorithm (.alg) IMAGINE version 2010

ALOS AVNIR-2 JAXA CEOS •

ALOS PRISM JAXA CEOS •

ALOS PRISM JAXA CEOS IMG

ALOS Palsar ERSDAC CEOS •

ALOS Palsar ERSDAC VEXCEL

ALOS Palsar JAXA CEOS •

ARCGEN • •

Arc Coverage • •

ArcInfo & Space Imaging BIL, BIP, BSQ • • •

ASCII Raster •

ASRP • •

ASTER (EOS HDF Format) •

AVHRR (NOAA) •


AVHRR (Dundee Format) •

AVHRR (Sharp) •

BigGeoTIFF •

BigTIFF • •

BigTIFF Chip from BigTIFF •

BIL, BIP, BSQ (Generic Binary)a • • •b

Bitmap •

CADRG (Compressed ADRG) • • •

CIB (Controlled Image Base) • • •

COSMO-SkyMed •

DAEDALUS •

USGS DEM • •

DigitalGlobe TIL •

DOQ • •

DOQ (JPEG) • •

DTED • • •

ECW •

ENVI (.hdr) •

Envisat (.N1*) •

ER Mapper •

EROS-A, EROS-B •

ERS (I-PAF CEOS) •

ERS (Conae-PAF CEOS) •

ERS (Tel Aviv-PAF CEOS) •

ERS (D-PAF CEOS) •

ERS (UK-PAF CEOS) •

FIT •

FORMOSAT DIMAP (.dim) •

Generic Binary (BIL, BIP, BSQ)a • • •b

GeoEye-1 •

GeoPDF •



GeoTIFF • • • •

GIF (.gif) •

GIS (Erdas 7.x) • • •

GRASS • •

GRID • • •

GRID Stack • • • •

GRID Stack 7.x • • •

GRD (Surfer: ASCII/Binary) • •

HDF

HYDICE (.cub) •

IMAGINE (.img) • •

Image Web Server ECWP (.url) •

Intergraph CCITT Group 4 •

Intergraph COT •

Intergraph ISAT • •

IRS-1C/1D (EOSAT Fast Format C)

IRS-1C/1D(EUROMAP Fast Format C)

IRS-1C/1D (Super Structured Format)

JFIF (JPEG) • • •

JPEG2000 • •

Landsat-7 Fast-L7A ACRES •

Landsat-7 Fast-L7A EROS •

Landsat-7 Fast-L7A Eurimage •

LAN (Erdas 7.x) • • •

Layout file (.ixw) version IMAGINE 2010

Map composition (.map) •

MODIS (EOS HDF Format) •

MrSID • •

MSS Landsat •

MultiGen OpenFlight FLT •



NASDA CEOS •

NITF 1.1 • •

NITF 2.x • • •

NSIF • •

NLAPS Data Format (NDF) •

ORACLE Spatial GeoRaster (.ogr)

PCIDSK (.pix)

PCX • • •

PNG (.png) •

QuickBird •

RADARSAT (Acres CEOS) •

RADARSAT (JAXA CEOS) •

RADARSAT (West Freugh CEOS)

RADARSAT (Vancouver CEOS)

RADARSAT-2 •

Raster Product Format • • •

RAW (.raw) •

SDE • •

SDE Raster (.sdi) • •

SDTS • •

SeaWiFS L1B and L2A (OrbView) and HDF

Session file (.ixs) version IMAGINE 2010

ShoeBox file (.ixp) version IMAGINE 2010

SOCET SET Support (.sup) -LPS required

SPOT (CAP/SPIM) •

SPOT CCRS •

SPOT DIMAP (.dim) •

SPOT Fast Format •



SPOT (GeoSpot) •

SPOT (NASDA CAP) •

SPOT SICORP MetroView •

Sub-Image (.sbi) •

SUN Raster • •

Surfer Grid (.grd) •

TARGA (.tga)

TerraSAR-X (TSX1*.xml) •

THEOS DIMAP (.dim) •

TIFF • • • •

TIL (DigitalGlobe) •

TM Landsat Acres Fast Format •

TM Landsat Acres Standard Format

TM Landsat EOSAT Fast Format

TM Landsat EOSAT Standard Format

TM Landsat ESA Fast Format •

TM Landsat ESA Standard Format

TM Landsat-7 Eurimage CEOS (Multispectral)

TM Landsat-7 Eurimage CEOS (Panchromatic)

TM Landsat-7 HDF Format •

TM Landsat IRS Fast Format •

TM Landsat IRS Standard Format

TM Landsat-7 Fast-L7A ACRES

TM Landsat-7 Fast-L7A EROS •

TM Landsat-7 Fast-L7A Eurimage

TM Landsat Radarsat Fast Format



TM Landsat Radarsat Standard Format

Unrestricted Access Image (.uai)

USRP • •

VEXCEL SLC (PASL*.SLC) •

VITec (.vit) •

View (.vue) •

Virtual Mosaic (.vmc) •

Virtual Stack (.vsk) •

Web Coverage Service proxy (.wcs)

Web Map Service proxy (.wms) •

WorldView-1, WorldView-2 •

a. See Generic Binary Data on page 63.

b. Direct read of generic binary data requires an accompanying header file in the ESRI ArcInfo, Space Imaging, or ERDAS IMAGINE formats.

The import function converts raster data to the ERDAS IMAGINE file format (.img), or other formats directly writable by ERDAS IMAGINE. The import function imports the data file values that make up the raster image, as well as the ephemeris or additional data inherent to the data structure. For example, when the user imports Landsat data, ERDAS IMAGINE also imports the georeferencing data for the image.

Raster data formats cannot be exported as vector data formats unless they are converted with the Vector utilities.

Each direct function is programmed specifically for that type of data and cannot be used to import other data types.


Raster Data Sources

NITFS NITFS stands for the National Imagery Transmission Format Standard. NITFS is designed to pack numerous image compositions with complete annotation, text attachments, and imagery-associated metadata.

Statistics and pyramid layers (.rrd files) are created when data is imported into IMAGINE, but they are not re-created by the export functions when the data is exported out of IMAGINE. If you need statistics and pyramid layers, please use the Image Command tool.

According to Jordan and Beck, NITFS is an unclassified format that is based on ISO/IEC 12087-5, Basic Image Interchange Format (BIIF). The NITFS implementation of BIIF is documented in U.S. Military Standard 2500B, establishing a standard data format for digital imagery and imagery-related products.

NITFS was first introduced in 1990 and was intended for use by the government and intelligence agencies. NITFS is now the standard for military organizations as well as commercial industries. Jordan and Beck list the following attributes of NITF files:

• provide a common basis for storage and digital interchange of images and associated data among existing and future systems

• support interoperability by simultaneously providing a data format for shared access applications while also serving as a standard message format for dissemination of images and associated data (text, symbols, labels) via digital communications

• require minimal preprocessing and post-processing of transmitted data

• support variable image sizes and resolution

• minimize formatting overhead, particularly for those users transmitting only a small amount of data or with limited bandwidth

• provide universal features and functions without requiring commonality of hardware or proprietary software

Moreover, NITF files support the following:


• multiple images

• annotation on images

• ASCII text files to accompany imagery and annotation

• metadata to go with imagery, annotation and text

The process of translating NITFS files is a cross-translation process. One system's internal representation for the files and their associated data is processed and put into the NITF format. The receiving system reformats the NITF file, and converts it for the receiving system's internal representation of the files and associated data. In ERDAS IMAGINE, the IMAGINE NITF™ software accepts such information and assembles it into one file in the standard NITF format. Source: Jordan and Beck, 1999

Annotation Data Annotation data can also be imported directly. Table 3: "Annotation Data Formats" lists the annotation formats. There is a distinct difference between import and direct read. Import means that the data is converted from its original format into another format (for example, IMG, TIFF, or GRID Stack), which can be read directly by ERDAS IMAGINE. Direct read formats are those formats which the Viewer and many of its associated tools can read immediately without any conversion process.

Table 3: Annotation Data Formats

Data Type Import Export Direct Read Direct Write

Annotation (.ovr) • •

ANT (Erdas 7.x) • •

AOI (Area of Interest) (.aoi) • •

ASCII To Point Annotation •

DXF To Annotation •

Generic Binary Data The Generic Binary import option is a flexible program which enables the user to define the data structure for ERDAS IMAGINE. This program allows the import of BIL, BIP, and BSQ data that are stored in left to right, top to bottom row order. Data formats from unsigned 1-bit up to 64-bit floating point can be imported. This program imports only the data file values—it does not import ephemeris data, such as georeferencing information. However, this ephemeris data can be viewed using the Data View option (from the Utility menu or the Import dialog).


Complex data cannot be imported using this program; however, they can be imported as two real images and then combined into one complex image using the Spatial Modeler.

You cannot import tiled or compressed data using the Generic Binary import option.
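As an illustration of these interleaving schemes, the sketch below reads a headerless BIL file with NumPy. The dimensions, pixel type, and file name are assumptions the user must supply, much as the Generic Binary import option requires:

```python
import numpy as np

# Minimal sketch of reading generic binary (BIL) data; a generic binary
# file has no header, so rows, columns, bands, and pixel type must be
# known in advance. All values here are illustrative.
rows, cols, bands = 1024, 1024, 4
raw = np.fromfile("scene.bil", dtype=np.uint8)

# BIL: within each row, the lines for all bands are stored together.
bil = raw.reshape(rows, bands, cols)
band1 = bil[:, 0, :]                 # first band as a rows x cols array

# BSQ data would reshape to (bands, rows, cols) instead,
# and BIP data to (rows, cols, bands).
```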

Vector Data Vector layers can be created within ERDAS IMAGINE by digitizing points, lines, and polygons using a digitizing tablet or the computer screen. Several vector data types, which are available from a variety of government agencies and private companies, can also be imported. Table 4: “Vector Data Formats” lists some of the vector data formats that can be imported to, and exported from, ERDAS IMAGINE:

There is a distinct difference between import and direct read. Import means that the data is converted from its original format into another format (for example, IMG, TIFF, or GRID Stack), which can be read directly by ERDAS IMAGINE. Direct read formats are those formats which the Viewer and many of its associated tools can read immediately without any conversion process.

Table 4: Vector Data Formats

Data Type Import Export Direct Read Direct Write

ARCGEN • •

ArcGIS Geodatabase (.gbd) •

Arc Interchange • •

Arc_Interchange to Coverage

Arc_Interchange to Grid •

ASCII To Point Coverage •

Coverage to DXF •

Coverage to Arc_Interchange

DFAD • •

DGN (Intergraph IGDS) •

DIG Files (Erdas 7.x) •


DLG • •

DXF to Annotation •

DXF to Coverage •

ETAK •

IGDS (Intergraph .dgn File) •

IGES • •

MIF/MID (MapInfo) to Coverage

ORACLE Spatial Feature (.ogv)

SDE • •

SDTS • •

Shapefile • •

Terramodel •

TIGER • •

VirtualGIS TIN Mesh •

VirtualGIS TIN World •

VPF • •

Once imported, the vector data are automatically converted to ERDAS IMAGINE vector layers.

These vector formats are discussed in more detail in Vector Data from Other Software Vendors on page 138. See "Vector Data" on page 41 for more information on ERDAS IMAGINE vector layers.

Import and export vector data with the Import/Export function. You can also convert vector layers to raster format, and vice versa, with the IMAGINE Vector utilities.


Optical Satellite Data

There are several data acquisition options available including photography, aerial sensors, and sophisticated satellite scanners. However, a satellite system offers these advantages:

• Digital data gathered by a satellite sensor can be transmitted over radio or microwave communications links and stored on DVDs, CDs, or magnetic tapes, so they are easily processed and analyzed by a computer.

• Many satellites orbit the Earth, so the same area can be covered on a regular basis for change detection.

• Once the satellite is launched, the cost for data acquisition is less than that for aircraft data.

• Satellites have very stable geometry, meaning that there is less chance for distortion or skew in the final image.

There are two types of satellite data access: direct access to many raster data formats for the use of files in their native format, and the Import and Export functions for data exchange.

Satellite System A satellite system is composed of a scanner with sensors and a satellite platform. The sensors are made up of detectors.

• The scanner is the entire data acquisition system, such as the Landsat TM scanner or the SPOT panchromatic scanner (Lillesand and Kiefer, 1987). It includes the sensor and the detectors.

• A sensor is a device that gathers energy, converts it to a signal, and presents it in a form suitable for obtaining information about the environment (Colwell, 1983).

• A detector is the device in a sensor system that records electromagnetic radiation. For example, in the sensor system on the Landsat TM scanner there are 16 detectors for each wavelength band (except band 6, which has 4 detectors).

In a satellite system, the total width of the area on the ground covered by the scanner is called the swath width, or width of the total field of view (FOV). FOV differs from IFOV in that the IFOV is a measure of the field of view of each detector. The FOV is a measure of the field of view of all the detectors combined.
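A short worked example makes the relationship concrete. This is a sketch with illustrative numbers; an IFOV of about 1.3 milliradians from an 833 km orbit is roughly in line with the AVHRR figures given later in this chapter:

```python
# Minimal sketch relating IFOV to the ground footprint of one detector
# at nadir. The numbers are illustrative, roughly matching AVHRR.
altitude_km = 833.0       # assumed orbital altitude
ifov_rad = 1.3e-3         # assumed IFOV in radians (1.3 mrad)

footprint_km = altitude_km * ifov_rad
print(f"Ground footprint at nadir: {footprint_km:.2f} km")  # about 1.1 km
```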


Satellite Characteristics The U.S. Landsat and the French SPOT satellites are two important data acquisition satellites. These systems provide the majority of remotely-sensed digital images in use today. The Landsat and SPOT satellites have several characteristics in common:

• Both scanners can produce nadir views. Nadir is the area on the ground directly beneath the scanner’s detectors.

• They have sun-synchronous orbits, meaning that the orbital plane keeps a constant orientation relative to the Sun, so data are always collected at the same local solar time of day over the same region.

• They both record electromagnetic radiation in one or more bands. Multiband data are referred to as multispectral imagery. Single band, or monochrome, imagery is called panchromatic.

NOTE: The current SPOT system has the ability to collect off-nadir stereo imagery.

Image Data Comparison Figure 23 shows a comparison of the electromagnetic spectrum recorded by Landsat TM, Landsat MSS, SPOT, and National Oceanic and Atmospheric Administration (NOAA) AVHRR data. These data are described in detail in the following sections.


Figure 23: Multispectral Imagery Comparison

ALOS Advanced Land Observing Satellite mission (ALOS) is a project operated by the Japan Aerospace Exploration Agency (JAXA). ALOS was launched from the Tanegashima Space Center in Japan in 2006. Used for cartography, regional observation, disaster monitoring, and resource surveying, ALOS enhances the land observing technology of its predecessors JERS-1 and ADEOS. ALOS orbits at an altitude of 691 kilometers at an inclination of 98 degrees. The orbit is sun-synchronous sub-recurrent, and the repeat cycle is 46 days with a sub-cycle of 2 days.


ALOS has three remote-sensing instruments: the Panchromatic Remote-sensing Instrument for Stereo Mapping (PRISM) for digital elevation mapping, the Advanced Visible and Near Infrared Radiometer type 2 (AVNIR-2) for land coverage observation, and the Phased Array type L-band Synthetic Aperture Radar (PALSAR) for all-weather, day/night land observation.

Source: Japan Aerospace Exploration Agency, 2003.

Each of the three remote-sensing instruments is discussed in the ALOS AVNIR-2, ALOS PALSAR, and ALOS PRISM sections.

ALOS AVNIR-2 AVNIR-2, Advanced Visible and Near Infrared Radiometer type 2, is a visible and near infrared radiometer on board the ALOS satellite mission, launched in 2006. The AVNIR-2 provides better spatial land-coverage maps and land-use classification maps for monitoring regional environments.

Source: Japan Aerospace Exploration Agency, 2007.

ALOS PRISM PRISM, Panchromatic Remote-sensing Instrument for Stereo Mapping, is a panchromatic radiometer on board the ALOS satellite mission, launched in 2006. The radiometer has 2.5 m spatial resolution at nadir and its extracted data provides digital surface models. PRISM has three independent optical systems for viewing nadir, forward, and backward, producing a stereoscopic image along the satellite's track. The nadir-viewing telescope covers a width of 70 km, and the forward and backward viewing telescopes each cover 35 km.

Table 5: AVNIR-2 Sensor Characteristics

Number of Bands: 4

Wavelength: Band 1: 0.42 to 0.50 μm; Band 2: 0.52 to 0.60 μm; Band 3: 0.61 to 0.69 μm; Band 4: 0.76 to 0.89 μm

Spatial Resolution: 10 m (at Nadir)

Swath Width: 70 km (at Nadir)

Number of Detectors: 7000 per band

Pointing Angle: -44 to +44 degrees

Bit Length: 8 bits


PRISM’s wide field of view (FOV) provides three fully overlapped stereo images of a 35 km width without mechanical scanning or yaw steering of the satellite.

Source: Japan Aerospace Exploration Agency, 2003c.

ASTER ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) is an instrument flying on Terra, a satellite launched in December 1999 as part of NASA's Earth Observing System (EOS). ASTER is a cooperative effort between NASA, Japan's Ministry of Economy, Trade and Industry (METI), and Japan's Earth Remote Sensing Data Analysis Center (ERSDAC). Compared with the Landsat Thematic Mapper and Japan's JERS-1 OPS scanner, the ASTER instrument is the next generation in remote sensing imaging. ASTER captures high resolution data in the visible to thermal infrared wavelength spectrum and provides stereo viewing capability for DEM creation. The ASTER instrument consists of three subsystems: Visible and Near Infrared (VNIR), Shortwave Infrared (SWIR), and Thermal Infrared (TIR).

Table 6: PRISM Sensor Characteristics

Number of Bands: 1 (panchromatic)

Wavelength: 0.52 to 0.77 μm

Number of Optics: 3 (Nadir, Forward, Backward)

Base-to-Height Ratio: 1.0 (between Forward and Backward view)

Spatial Resolution: 2.5 m (at Nadir)

Swath Width: 70 km (Nadir only); 35 km (Triplet mode)

Pointing Angle: -1.5 to +1.5 degrees (Triplet mode, cross-track direction)

Bit Length: 8 bits

Table 7: ASTER Characteristics

Spectral Range (wavelengths in microns):

VNIR: Band 1: 0.52 - 0.60; Band 2: 0.63 - 0.69; Band 3: 0.76 - 0.86

SWIR: Band 4: 1.68 - 1.70; Band 5: 2.145 - 2.185; Band 6: 2.185 - 2.225; Band 7: 2.235 - 2.285; Band 8: 2.295 - 2.365; Band 9: 2.360 - 2.430

TIR: Band 10: 8.125 - 8.475; Band 11: 8.475 - 8.825; Band 12: 8.925 - 9.275; Band 13: 10.25 - 10.95; Band 14: 10.95 - 11.65

Ground Resolution: 15 m (VNIR); 30 m (SWIR); 90 m (TIR)

Swath Width: 60 km (all three subsystems)

Source: National Aeronautics and Space Administration, 2004.

EROS A and EROS B The first Earth Remote Observation Satellite (EROS A), launched in December 2000, was developed by ImageSat International N.V. They subsequently launched their second satellite, EROS B, in April 2006. ImageSat International N.V. is a Netherlands Antilles company with offices in Cyprus and Israel. EROS A imaging techniques offer panchromatic images in basic type and as stereo pairs. EROS B imaging techniques offer panchromatic images in basic, stereo pair, triplet, and mosaic types.

Table 8: EROS A - EROS B Characteristics

Geometry of orbit: sun-synchronous (EROS A); sun-synchronous (EROS B)

Orbit Altitude: ~500 km (both)

Swath Width: 14 km at nadir (EROS A); 7 km at nadir (EROS B)

Ground Sampling Distance: 1.9 m at nadir from 510 km (EROS A); 0.7 m at nadir from 510 km for TDI stages 1, 4, 8 and 0.8 m at nadir from 510 km for all other TDI stages (EROS B)

Spectral Bandwidth: 0.5 to 0.9 μm (both)

Source: ImageSat International N.V., 2008

FORMOSAT-2 The FORMOSAT-2 satellite, launched in May 2004, was the first remote sensing satellite developed by the National Space Organization (NSPO). The main mission of FORMOSAT-2 is to capture satellite images of Taiwan island and surrounding oceanic regions, and terrestrial and oceanic regions of the entire Earth.


FORMOSAT-2 onboard sensors include a Remote Sensing Instrument and ISUAL (Imager of Sprites and Upper Atmospheric Lightning).

Source: National Space Organization, 2008 and European Space Agency, 2010b.

Table 9: FORMOSAT-2 Characteristics

Geometry of orbit: sun-synchronous

Orbit Altitude: 891 km

Swath Width: 24 km

Sensor Resolution: panchromatic - 2 m; multispectral - 8 m

GeoEye-1 The GeoEye-1 satellite, launched in 2008, was developed by GeoEye, a company formed through the combination of ORBIMAGE and Space Imaging. GeoEye-1 orbits at an altitude of 681 km, or 423 miles, in a sun-synchronous orbit. GeoEye-1 data collection capacity is up to 700,000 square km per day of pan area and up to 350,000 square km per day of pan-sharpened multispectral area.

Table 10: GeoEye-1 Characteristics

Geometry of orbit: sun-synchronous

Orbit Altitude: 681 km

Orbit Inclination: 98 degrees

Swath Width: 15.2 km at nadir

Area Size - single point: 225 sq km (15 km x 15 km)

Area Size - large area: 15,000 sq km (300 km x 50 km)

Area Size - cell size: 10,000 sq km (100 km x 100 km)

Area Size - stereo area: 6,270 sq km (224 km x 28 km)

Sensor Resolution (nominal at Nadir): panchromatic - 0.41 m (1.34 feet); multispectral - 1.65 m (5.41 feet)

Spectral Bandwidth (Panchromatic): 459 to 800 nm

Spectral Bandwidth (Multispectral): 450 - 510 nm (blue); 510 - 580 nm (green); 655 - 690 nm (red); 780 - 920 nm (near infrared)

Source: GeoEye, 2008.

IKONOS The IKONOS satellite was launched in September 1999. The resolution of the panchromatic sensor is 1 m. The resolution of the multispectral scanner is 4 m. The swath width is 13 km at nadir. The accuracy without ground control is 12 m horizontally and 10 m vertically; with ground control it is 2 m horizontally and 3 m vertically. IKONOS orbits at an altitude of 423 miles, or 681 kilometers. The revisit time is 2.9 days at 1 m resolution, and 1.5 days at 1.5 m resolution.

Band: Wavelength (microns)

1, Blue: 0.45 to 0.52 μm

2, Green: 0.52 to 0.60 μm

3, Red: 0.63 to 0.69 μm

4, NIR: 0.76 to 0.90 μm

Panchromatic: 0.45 to 0.90 μm

Source: Space Imaging, 1999a; Center for Health Applications of Aerospace Related Technologies, 2000a

IRS

IRS-1C The IRS-1C satellite was launched in December of 1995. The repeat coverage of IRS-1C is every 24 days. The sensor has a 744 km swath width. The IRS-1C satellite has three sensors on board with which to capture images of the Earth. Those sensors are as follows:

LISS-III

LISS-III has a spatial resolution of 23 m, with the exception of the SW Infrared band, which is 70 m. Bands 2, 3, and 4 have a swath width of 142 kilometers; band 5 has a swath width of 148 km. Repeat coverage occurs every 24 days at the Equator.


Band: Wavelength (microns)

1, Blue: ---

2, Green: 0.52 to 0.59 μm

3, Red: 0.62 to 0.68 μm

4, NIR: 0.77 to 0.86 μm

5, SW IR: 1.55 to 1.70 μm

Source: National Remote Sensing Agency, 1998

Panchromatic Sensor

The panchromatic sensor has 5.8 m spatial resolution, as well as stereo capability. Its swath width is 70 km. Repeat coverage is every 24 days at the Equator. The revisit time is every five days, with ± 26° off-nadir viewing.

Band: Wavelength (microns)

Pan: 0.5 to 0.75 μm

Wide Field Sensor (WiFS)

WiFS has a 188 m spatial resolution, and repeat coverage every five days at the Equator. The swath width is 774 km.

Band: Wavelength (microns)

1, Red: 0.62 to 0.68 μm

2, NIR: 0.77 to 0.86 μm

3, MIR: 1.55 to 1.75 μm

Source: Space Imaging, 1999b; Center for Health Applications of Aerospace Related Technologies, 1998

IRS-1D IRS-1D was launched in September of 1997. It collects imagery at a spatial resolution of 5.8 m. IRS-1D's sensors were copied from those of IRS-1C, which was launched in December 1995.


Imagery collected by IRS-1D is distributed in black and white format. The panchromatic imagery "reveals objects on the Earth's surface (such) as transportation networks, large ships, parks and open space, and built-up urban areas" (Space Imaging, 1999b). This information can be used to classify land cover in applications such as urban planning and agriculture. The Space Imaging facility located in Norman, Oklahoma has been obtaining IRS-1D data since 1997.

For band and wavelength data on IRS-1D, see IRS on page 73.

Source: Space Imaging, 1998

KOMPSAT 1-2 Korea Aerospace Research Institute (KARI) has developed the KOMPSAT-1 (KOrea Multi-Purpose SATellite) and KOMPSAT-2 satellite systems for surveillance of large scale disasters, acquisition of high resolution images for GIS, and composition of printed and digitized maps. KOMPSAT-1, launched in December 1999, carries an Electro-Optical Camera (EOC) sensor and KOMPSAT-2, launched in July 2006, carries a Multi-Spectral Camera (MSC) sensor. Through a third party mission agreement, the European Space Agency makes a sample dataset of European cities available from these missions.

Source: European Space Agency, 2010c

Table 11: KOMPSAT-1 and KOMPSAT-2 Characteristics

Geometry of orbit: sun-synchronous circular polar (KOMPSAT-1); sun-synchronous circular (KOMPSAT-2)

Orbit Altitude: 685 km (both)

Swath Width: 24 km EOC (KOMPSAT-1); ~15 km (KOMPSAT-2)

Resolution: 6 m EOC (KOMPSAT-1); 1 m panchromatic and 4 m multispectral (KOMPSAT-2)

Spectral Bandwidth: 500 - 900 nm panchromatic; 450 - 900 nm multispectral (4 bands)


Landsat 1-5 In 1972, the National Aeronautics and Space Administration (NASA) initiated the first civilian program specializing in the acquisition of remotely sensed digital satellite data. The first system was called ERTS (Earth Resources Technology Satellites), and was later renamed Landsat. There have been several Landsat satellites launched since 1972. Landsats 1, 2, 3, and 4 are no longer operating, but Landsat 5 is still in orbit gathering data. Landsats 1, 2, and 3 gathered Multispectral Scanner (MSS) data and Landsats 4 and 5 collected MSS and TM data. MSS and TM are discussed in more detail in the following sections.

NOTE: Landsat data are available through the EROS Data Center. See Ordering Raster Data on page 127 for more information.

MSS The Multispectral Scanner images an area of approximately 185 × 170 km per scene, from a height of approximately 900 km for Landsats 1, 2, and 3, and 705 km for Landsats 4 and 5. MSS data are widely used for general geologic studies as well as vegetation inventories. The spatial resolution of MSS data is 56 × 79 m, with a 79 × 79 m IFOV. A typical scene contains approximately 2340 rows and 3240 columns. The radiometric resolution is 6-bit, but it is stored as 8-bit (Lillesand and Kiefer, 1987). Detectors record electromagnetic radiation (EMR) in four bands:

• Bands 1 and 2 are in the visible portion of the spectrum and are useful in detecting cultural features, such as roads. These bands also show detail in water.

• Bands 3 and 4 are in the near-infrared portion of the spectrum and can be used in land/water and vegetation discrimination.

Band 1, Green (0.50 to 0.60 μm): This band scans the region between the blue and red chlorophyll absorption bands. It corresponds to the green reflectance of healthy vegetation, and it is also useful for mapping water bodies.

Band 2, Red (0.60 to 0.70 μm): This is the red chlorophyll absorption band of healthy green vegetation and represents one of the most important bands for vegetation discrimination. It is also useful for determining soil boundary and geological boundary delineations and cultural features.

Band 3, Red/NIR (0.70 to 0.80 μm): This band is especially responsive to the amount of vegetation biomass present in a scene. It is useful for crop identification and emphasizes soil/crop and land/water contrasts.

Band 4, NIR (0.80 to 1.10 μm): This band is useful for vegetation surveys and for penetrating haze (Jensen, 1996).

Source: Center for Health Applications of Aerospace Related Technologies, 2000b

TM The TM scanner is a multispectral scanning system much like the MSS, except that the TM sensor records reflected/emitted electromagnetic energy from the visible, reflective-infrared, middle-infrared, and thermal-infrared regions of the spectrum. TM has higher spatial, spectral, and radiometric resolution than MSS. TM has a swath width of approximately 185 km from a height of approximately 705 km. It is useful for vegetation type and health determination, soil moisture, snow and cloud differentiation, rock type discrimination, and so forth.

The spatial resolution of TM is 28.5 × 28.5 m for all bands except the thermal (band 6), which has a spatial resolution of 120 × 120 m. The larger pixel size of this band is necessary for adequate signal strength. However, the thermal band is resampled to 28.5 × 28.5 m to match the other bands. The radiometric resolution is 8-bit, meaning that each pixel has a possible range of data values from 0 to 255. Detectors record EMR in seven bands:

• Bands 1, 2, and 3 are in the visible portion of the spectrum and are useful in detecting cultural features such as roads. These bands also show detail in water.

• Bands 4, 5, and 7 are in the reflective-infrared portion of the spectrum and can be used in land/water discrimination.

• Band 6 is in the thermal portion of the spectrum and is used for thermal mapping (Jensen, 1996; Lillesand and Kiefer, 1987).


Band 1, Blue (0.45 to 0.52 μm): This band is useful for mapping coastal water areas, differentiating between soil and vegetation, forest type mapping, and detecting cultural features.

Band 2, Green (0.52 to 0.60 μm): This band corresponds to the green reflectance of healthy vegetation. It is also useful for cultural feature identification.

Band 3, Red (0.63 to 0.69 μm): This band is useful for discriminating between many plant species. It is also useful for determining soil boundary and geological boundary delineations as well as cultural features.

Band 4, NIR (0.76 to 0.90 μm): This band is especially responsive to the amount of vegetation biomass present in a scene. It is useful for crop identification and emphasizes soil/crop and land/water contrasts.

Band 5, MIR (1.55 to 1.75 μm): This band is sensitive to the amount of water in plants, which is useful in crop drought studies and in plant health analyses. This is also one of the few bands that can be used to discriminate between clouds, snow, and ice.

Band 6, TIR (10.40 to 12.50 μm): This band is useful for vegetation and crop stress detection, heat intensity, insecticide applications, and for locating thermal pollution. It can also be used to locate geothermal activity.

Band 7, MIR (2.08 to 2.35 μm): This band is important for the discrimination of geologic rock type and soil boundaries, as well as soil and vegetation moisture content.

Source: Center for Health Applications of Aerospace Related Technologies, 2000b


Figure 24: Landsat MSS vs. Landsat TM

Band Combinations for Displaying TM Data Different combinations of the TM bands can be displayed to create different composite effects. The following combinations are commonly used to display images:

NOTE: The order of the bands corresponds to the Red, Green, and Blue (RGB) color guns of the monitor.

• Bands 3, 2, 1 create a true color composite. True color means that objects look as they would to the naked eye—similar to a color photograph.

• Bands 4, 3, 2 create a false color composite. False color composites appear similar to an infrared photograph where objects do not have the same colors or contrasts as they would naturally. For instance, in an infrared image, vegetation appears red, water appears navy or black, and so forth.

• Bands 5, 4, 2 create a pseudo color composite. (A thematic image is also a pseudo color image.) In pseudo color, the colors do not reflect the features in natural colors. For instance, roads may be red, water yellow, and vegetation blue.

Different color schemes can be used to bring out or enhance the features under study. These are by no means all of the useful combinations of these seven bands. The bands to be used are determined by the particular application.
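As a minimal sketch of building such a composite outside IMAGINE, the following Python/NumPy example stacks three bands into the RGB guns. The placeholder arrays and the 2%-98% percentile stretch are illustrative display conventions, not the Viewer's exact algorithm:

```python
import numpy as np
import matplotlib.pyplot as plt

# Minimal sketch of a 4, 3, 2 false color composite; band arrays and
# the 2%-98% stretch are illustrative placeholders.
def stretch(band):
    lo, hi = np.percentile(band, (2, 98))
    return np.clip((band.astype(float) - lo) / (hi - lo), 0.0, 1.0)

band4 = np.random.randint(0, 256, (512, 512))  # stand-in for TM band 4
band3 = np.random.randint(0, 256, (512, 512))  # stand-in for TM band 3
band2 = np.random.randint(0, 256, (512, 512))  # stand-in for TM band 2

# The stacking order maps bands to the red, green, and blue guns.
rgb = np.dstack([stretch(band4), stretch(band3), stretch(band2)])
plt.imshow(rgb)
plt.show()
```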


See "Image Display" on page 145 for more information on how images are displayed, "Enhancement" on page 455 for more information on how images can be enhanced, and Ordering Raster Data on page 127 for information on types of Landsat data available.

Landsat 7 The Landsat 7 satellite, launched in 1999, uses Enhanced Thematic Mapper Plus (ETM+) to observe the Earth. The capabilities new to Landsat 7 include the following:

• 15m spatial resolution panchromatic band

• 5% radiometric calibration with full aperture

• 60m spatial resolution thermal IR channel

The primary receiving station for Landsat 7 data is located in Sioux Falls, South Dakota at the USGS EROS Data Center (EDC). ETM+ data is transmitted using X-band direct downlink at a rate of 150 Mbps. Landsat 7 is capable of capturing scenes without cloud obstruction, and the receiving stations can obtain this data in real time using the X-band. Stations located around the globe, however, are only able to receive data for the portion of the ETM+ ground track where the satellite can be seen by the receiving station.

Landsat 7 Data Types One type of data available from Landsat 7 is browse data. Browse data is "a lower resolution image for determining image location, quality and information content." The other type of data is metadata, which is "descriptive information on the image." This information is available via the internet within 24 hours of being received by the primary ground station. Moreover, EDC processes the data to Level 0r. This data has been corrected for scan direction and band alignment errors only. Level 1G data, which is corrected, is also available.

Landsat 7 Specifications Information about the spectral range and ground resolution of the bands of the Landsat 7 satellite is provided in the following table:

Band Number Wavelength (microns) Resolution (m)

1 0.45 to 0.52 μm 30

2 0.52 to 0.60 μm 30

3 0.63 to 0.69 μm 30

4 0.76 to 0.90 μm 30

5 1.55 to 1.75 μm 30

6 10.4 to 12.5 μm 60

7 2.08 to 2.35 μm 30

Panchromatic (8) 0.50 to 0.90 μm 15

Landsat 7 has a swath width of 185 kilometers. The repeat coverage interval is 16 days, or 233 orbits. The satellite orbits the Earth at 705 kilometers.

Source: National Aeronautics and Space Administration, 1998; National Aeronautics and Space Administration, 2001

LPGS and NLAPS Processing Systems

There are two processing systems used to generate Landsat MSS, TM, and ETM+ data products. The products generated by LPGS and NLAPS are mostly similar, but there are considerable differences. The Level 1 Product Generation System (LPGS) is used for Landsat 7 ETM+ and Landsat 5 TM data. The levels of processing are:

• Level 1G (radiometrically and geometrically corrected - MSS, TM and ETM+)

• Level 1P (systematically terrain corrected - TM and MSS only)

• Level 1Gt (systematically terrain corrected - TM and ETM+ only)

• Level 1T (terrain corrected - MSS, TM and ETM+)

There are geometric differences, radiometric differences, and data format differences between the LPGS and NLAPS processing systems. Details of the differences are listed on the United States Geological Survey - Landsat Missions web site.

Source: United States Geological Survey (USGS), 2008.

The National Landsat Archive Production System (NLAPS) is the Landsat processing system used for Landsat 1-5 MSS and Landsat 4 TM data. The NLAPS system is able to "produce systematically-corrected, and terrain corrected products. . ." (United States Geological Survey, n.d.).


Landsat data received from satellites is generated into TM corrected data using the NLAPS by:

• correcting and validating the mirror scan and payload correction data

• providing for image framing by generating a series of scene center parameters

• synchronizing telemetry data with video data

• estimating linear motion deviation of scan mirror/scan line corrections

• generating benchmark correction matrices for specified map projections

• producing along- and across-scan high-frequency line matrices

According to the USGS, the products provided by NLAPS include the following:

• image data and the metadata describing the image

• processing procedure, which contains information describing the process by which the image data were produced

• DEM data and the metadata describing them (available only with terrain corrected products)

Source: United States Geological Survey, n.d.

NOAA Polar Orbiter Data NOAA has sponsored several polar orbiting satellites to collect data of the Earth. These satellites were originally designed for meteorological applications, but the data gathered have been used in many fields—from agronomy to oceanography (Needham, 1986). The first of these satellites to be launched was the TIROS-N in 1978. Since the TIROS-N, many additional NOAA satellites have been launched and some continue to gather data.

AVHRR The Advanced Very High Resolution Radiometer (AVHRR) is an optical multispectral scanner flown aboard National Oceanic and Atmospheric Administration (NOAA) orbiting satellites. The AVHRR sensor provides pole to pole on-board collection of data. The swath width is 2399 km (1491 miles) and the satellites orbit the Earth 14 times each day at an altitude of 833 km (517 miles).

Source: United States Geological Survey, 2006a.


The AVHRR system allows for direct transmission in real-time of data called High Resolution Picture Transmission (HRPT). It also allows for about ten minutes of data to be recorded over any portion of the world on two recorders on board the satellite. These recorded data are called Local Area Coverage (LAC). LAC and HRPT have identical formats; the only difference is that HRPT are transmitted directly and LAC are recorded. The basic formats for AVHRR data which can be imported into ERDAS IMAGINE are:

• LAC—(Local Area Coverage) data recorded on board the sensor with a spatial resolution of approximately 1.1 × 1.1 km

• HRPT—(High Resolution Picture Transmission) direct transmission of AVHRR data in real-time with the same resolution as LAC

• GAC—(Global Area Coverage) data produced from LAC data by using only 1 out of every 3 scan lines. GAC data have a spatial resolution of approximately 4 × 4 km
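As a rough illustration of the LAC-to-GAC relationship, the sketch below keeps one of every three scan lines, as described above; the along-scan step assumes the commonly documented onboard scheme of averaging four of every five samples, so treat it as an approximation rather than the actual NOAA onboard processor:

```python
import numpy as np

def lac_to_gac(lac):
    # lac: 2-D array of scan lines x samples.
    lines = lac[::3].astype(np.float64)     # 1 of every 3 scan lines
    n = (lines.shape[1] // 5) * 5           # trim to a multiple of 5
    groups = lines[:, :n].reshape(lines.shape[0], -1, 5)
    return groups[:, :, :4].mean(axis=2)    # average 4, skip the 5th
```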

AVHRR data are available in 10-bit packed and 16-bit unpacked format. The term packed refers to the way in which the data are written to the tape. Packed data are compressed to fit more data on each tape (Kidwell, 1988). The USGS also provides a series of derived AVHRR Normalized Difference Vegetation Index (NDVI) Composites and Global Land Cover Characterization (GLCC) data.

The AVHRR data collection effort provides cloud mapping, land-water boundaries, snow and ice detection, temperatures of radiating surfaces, and sea surface temperatures. These data are also useful for vegetation studies, land cover mapping, country maps, continental maps, world maps, and snow cover evaluation.
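Reading the packed form means splitting each word back into samples. A minimal sketch, assuming the common NOAA Level 1b layout of three 10-bit samples per big-endian 32-bit word; packing details vary by product, so verify against the dataset documentation:

```python
import numpy as np

def unpack_10bit(words):
    # words: 1-D array of 32-bit words, three 10-bit samples per word
    # (assumed layout: bits 29-20, 19-10, and 9-0 of each word).
    w = np.asarray(words, dtype=np.uint32)
    samples = np.empty(w.size * 3, dtype=np.uint16)
    samples[0::3] = (w >> 20) & 0x3FF
    samples[1::3] = (w >> 10) & 0x3FF
    samples[2::3] = w & 0x3FF
    return samples
```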

Table 12: AVHRR Data Characteristics

Band   Wavelength (microns)   Wavelength (microns)   Wavelength (microns)   Primary Uses
       NOAA 6,8,10            NOAA 7,9,11,12,14      NOAA 15,16,17
1      0.58 - 0.68            0.58 - 0.68            0.58 - 0.68            Daytime cloud/surface and vegetation mapping
2      0.725 - 1.10           0.725 - 1.10           0.725 - 1.10           Surface water, ice, snow melt, and vegetation mapping
3A     n/a                    n/a                    1.58 - 1.64            Snow and ice detection
3B     3.55 - 3.93            3.55 - 3.93            3.55 - 3.93            Sea surface temperature, night-time cloud mapping
4      10.50 - 11.50          10.3 - 11.3            10.3 - 11.3            Sea surface temperature, day and night cloud mapping
5      Band 4 repeated        11.5 - 12.5            11.5 - 12.5            Sea surface temperature, day and night cloud mapping


Source: United States Geological Survey, 2006a.

AVHRR data have a radiometric resolution of 10 bits, meaning that each pixel has a possible data file value between 0 and 1023. AVHRR scenes may contain one band, a combination of bands, or all bands. All bands are referred to as a full set, and selected bands are referred to as an extract.
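The NDVI composites mentioned above are built from the standard ratio of the near-infrared and red channels (AVHRR channels 2 and 1, respectively); a minimal sketch, assuming inputs already calibrated to reflectance:

```python
import numpy as np

def ndvi(ch1_red, ch2_nir):
    # NDVI = (NIR - Red) / (NIR + Red); values range from -1 to 1.
    red = np.asarray(ch1_red, dtype=np.float64)
    nir = np.asarray(ch2_nir, dtype=np.float64)
    return (nir - red) / (nir + red + 1e-12)  # epsilon avoids 0/0
```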

See Ordering Raster Data on page 127 for information on the types of NOAA data available.

Use the Import/Export function to import AVHRR data.

OrbView-3

OrbView-3 was built for Orbital Imaging Corporation (now GeoEye) and was designed to provide high-resolution imagery. The OrbView-3 mission began with the satellite’s launch in 2003 and is now complete. The OrbView-3 satellite provided both 1 meter panchromatic imagery and 4 meter multispectral imagery of the entire Earth. The satellite flew a sun-synchronous orbit at 470 km, inclined at 97 degrees, with a swath width of 8 km.

Source: Orbital Sciences Corporation, 2008.

Orbital Imaging Corporation anticipated that “One-meter imagery will enable the viewing of houses, automobiles and aircraft, and will make it possible to create highly precise digital maps and three-dimensional fly-through scenes. Four-meter multispectral imagery will provide color and infrared information to further characterize cities, rural areas and undeveloped land from space” (ORBIMAGE, 1999). Specific applications include telecommunications and utilities, agriculture and forestry.



The OrbView-3 bands are:

Band           Spectral Range
1              450 to 520 nm
2              520 to 600 nm
3              625 to 695 nm
4              760 to 900 nm
Panchromatic   450 to 900 nm

Source: ORBIMAGE, 1999; ORBIMAGE, 2000.

QuickBird

The QuickBird satellite was launched in 2001 by DigitalGlobe, offering imagery for map publishing, land and asset management, change detection, and insurance risk assessment. QuickBird produces sub-meter resolution panchromatic and multispectral imagery. The data collection nominal swath width is 16.5 km at nadir, and areas of interest sizes are 16.5 km x 16.5 km for a single area and 16.5 km x 115 km for a strip.

Table 13: QuickBird Characteristics

Geometry of orbit: sun-synchronous
Orbit Altitude: 450 km
Orbit Inclination: 98 degrees
Swath Width: normal - 16.5 km at nadir; accessible ground - 544 km centered on the satellite ground track
Sensor Resolution (ground sample distance at nadir): panchromatic - 61 cm (2 feet); multispectral - 2.4 m (8 feet)
Spectral Bandwidth (Panchromatic): 445 to 900 nm
Spectral Bandwidth (Multispectral): 450 - 520 nm (blue); 520 - 600 nm (green); 630 - 690 nm (red); 760 - 900 nm (near infrared)


Source: DigitalGlobe, 2008a.

RapidEye

The German company RapidEye AG launched a constellation of five satellite sensors in 2008. All five satellites contain equivalent sensors, are calibrated equally to one another, and are located in the same orbital plane. This allows RapidEye to deliver multi-temporal data sets in high resolution in near real-time.

The RapidEye satellite system collects imagery in five spectral bands, and is the first commercial system to offer the Red-Edge band, which measures variances in vegetation, allowing for species separation and monitoring of vegetation health.

RapidEye standard image products are offered at three processing levels:

• RapidEye Basic (Level 1B) -- geometrically uncorrected, radiometric and sensor corrected

• RapidEye Geo-corrected (Level 2A) -- geo-corrected with radiometric and geometric corrections and aligned to a map projection

• RapidEye Ortho (Level 3A) -- orthorectified with radiometric, geometric, and terrain corrections and aligned to a map projection

Table 14: RapidEye Characteristics

Number of Satellites: 5
Orbit Altitude: 630 km in sun-synchronous orbit
Equator Crossing Time: 11:00 am (approximately)
Sensor Type: Multi-spectral push broom imager
Spectral Bands: 440 - 510 nm (Blue); 520 - 590 nm (Green); 630 - 685 nm (Red); 690 - 730 nm (Red Edge); 760 - 850 nm (Near IR)
Ground Sampling Distance (nadir): 6.5 m
Pixel Size (orthorectified): 5 m
Swath Width: 77 km
Revisit Time: Daily (off-nadir) / 5.5 days (at nadir)
Image Capture Capacity: 4 million sq km per day
Dynamic Range: 12 bit


Source: RapidEye AG, 2008 and RapidEye AG, 2009.

SeaWiFS

The Sea-viewing Wide Field-of-View Sensor (SeaWiFS) instrument is on-board the SeaStar spacecraft, which was launched in 1997. The SeaStar spacecraft’s orbit is circular, at an altitude of 705 km. The satellite uses an attitude control system (ACS), which maintains orbit and performs solar and lunar calibration maneuvers. The ACS also provides attitude information within one SeaWiFS pixel.

The SeaWiFS instrument is made up of an optical scanner and an electronics module. The swath width is 2,801 km LAC/HRPT (±58.3 degrees) and 1,502 km GAC (±45 degrees). The spatial resolution is 1.1 km LAC and 4.5 km GAC. The revisit time is one day. The SeaWiFS bands are:

Band      Wavelength (nanometers)
1, Blue   402 to 422 nm
2, Blue   433 to 453 nm
3, Cyan   480 to 500 nm
4, Green  500 to 520 nm
5, Green  545 to 565 nm
6, Red    660 to 680 nm
7, NIR    745 to 785 nm
8, NIR    845 to 885 nm

Source: National Aeronautics and Space Administration, 1999; Center for Health Applications of Aerospace Related Technologies, 1998.

SPOT 1-3

The SPOT 1 satellite was developed by the French Centre National d’Etudes Spatiales (CNES) and launched in early 1986. The SPOT 2 satellite, launched in 1990, was the first in the series to carry the DORIS precision positioning instrument. SPOT 3, launched in 1993, also carried the DORIS instrument, plus the American passenger payload POAM II, used to measure atmospheric ozone at the poles. SPOT 3 was decommissioned in 1996 (Spot series, 2006).


The sensors operate in two modes, multispectral and panchromatic. SPOT is commonly referred to as a pushbroom scanner, meaning that all scanning parts are fixed, and scanning is accomplished by the forward motion of the scanner. SPOT pushes an array of 3,000 detectors (multispectral) or 6,000 detectors (panchromatic) along its orbit. This is different from Landsat, which scans with 16 detectors perpendicular to its orbit.

The SPOT satellite can observe the same area on the globe once every 26 days. The SPOT scanner normally produces nadir views, but it does have off-nadir viewing capability. Off-nadir refers to any point that is not directly beneath the detectors, but off to an angle. Using this off-nadir capability, one area on the Earth can be viewed as often as every 3 days.

This off-nadir viewing can be programmed from the ground control station, and is quite useful for collecting data in a region not directly in the path of the scanner or in the event of a natural or man-made disaster, where timeliness of data acquisition is crucial. It is also very useful in collecting stereo data from which elevation data can be extracted.

The width of the swath observed varies between 60 km for nadir viewing and 80 km for off-nadir viewing at a height of 832 km (Jensen, 1996).

Panchromatic

SPOT Panchromatic (meaning sensitive to all visible colors) has 10 × 10 m spatial resolution, contains 1 band—0.51 to 0.73 μm—and is similar to a black and white photograph. It has a radiometric resolution of 8 bits (Jensen, 1996).

XS

SPOT XS, or multispectral, has 20 × 20 m spatial resolution, 8-bit radiometric resolution, and contains 3 bands (Jensen, 1996):

Band              Wavelength (microns)   Comments
1, Green          0.50 to 0.59 μm        Corresponds to the green reflectance of healthy vegetation.
2, Red            0.61 to 0.68 μm        Useful for discriminating between plant species; also useful for soil boundary and geological boundary delineations.
3, Reflective IR  0.79 to 0.89 μm        Especially responsive to the amount of vegetation biomass present in a scene; useful for crop identification, and emphasizes soil/crop and land/water contrasts.


Figure 25: SPOT Panchromatic vs. SPOT XS

See Ordering Raster Data on page 127 for information on the types of SPOT data available.

Stereoscopic Pairs

Two observations can be made by the panchromatic scanner on successive days, so that the two images are acquired at angles on either side of the vertical, resulting in stereoscopic imagery. Stereoscopic imagery can also be achieved by using one vertical scene and one off-nadir scene. This type of imagery can be used to produce a single image, or topographic and planimetric maps (Jensen, 1996). Topographic maps indicate elevation. Planimetric maps correctly represent horizontal distances between objects (Star and Estes, 1990).

See Topographic Data on page 121 and "Terrain Analysis" on page 645 for more information about topographic data and how SPOT stereopairs and aerial photographs can be used to create elevation data and orthographic images.

SPOT 4

The SPOT 4 satellite was launched in 1998. SPOT 4 carries High Resolution Visible Infrared (HR VIR) instruments that obtain information in the visible and near-infrared spectral bands.



The SPOT 4 satellite orbits the Earth at 822 km at the Equator. The SPOT 4 satellite has two sensors on board: a multispectral sensor and a panchromatic sensor. The multispectral scanner has a pixel size of 20 × 20 m and a swath width of 60 km. The panchromatic scanner has a pixel size of 10 × 10 m and a swath width of 60 km. The SPOT 4 bands are:

Band          Wavelength
1, Green      0.50 to 0.59 μm
2, Red        0.61 to 0.68 μm
3, (near-IR)  0.78 to 0.89 μm
4, (mid-IR)   1.58 to 1.75 μm
Panchromatic  0.61 to 0.68 μm

Source: SPOT Image, 1998; SPOT Image, 1999; Center for Health Applications of Aerospace Related Technologies, 2000c.

SPOT 5

The SPOT 5 satellite, launched in 2002, carries two new HRVIR viewing instruments which have a better resolution: 2.5 to 5 meters in panchromatic and infrared mode and 10 meters in multispectral mode.

SPOT 5 carries an HRS (High Resolution Stereoscopic) imaging instrument operating in panchromatic mode with multiple cameras. The forward-pointing camera acquires images of the ground, then the rearward-pointing camera covers the same strip 90 seconds later. Thus HRS is able to acquire stereopair images almost simultaneously to map relief, produce DEMs, and generate orthorectified products.

SPOT 5 also carries the VEGETATION 2 instrument, which offers a spatial resolution of one kilometer and a wide imaging swath. This instrument is nearly identical to the VEGETATION 1 instrument carried on SPOT 4.

Source: Spot series, 2006.

WorldView-1

The WorldView-1 satellite was launched in 2007 by DigitalGlobe, offering imagery for map creation, change detection, and in-depth image analysis. WorldView-1 produces half-meter resolution panchromatic imagery. The satellite has an average revisit time of 1.7 days and is capable of collecting up to 750,000 square kilometers (290,000 square miles) per day of half-meter imagery. The data collection options include:

• Long strip - 17.6 km x up to 330 km

• Large area - 60 km x 110 km

• Multiple point targets - up to 17.6 km



• Stereo area - 30 km x 110 km

Table 15: WorldView-1 Characteristics

Geometry of orbit: sun-synchronous
Orbit Altitude: 496 km
Swath Width: 17.6 km at nadir
Sensor Resolution (GSD = ground sample distance): 0.50 meters GSD at nadir; 0.59 meters GSD at 25° off-nadir
Spectral Bandwidth: panchromatic

Source: DigitalGlobe, 2008b.

WorldView-2

Owned and operated by DigitalGlobe, WorldView-2 was launched in 2009 to provide highly detailed imagery for precise vector and terrain data creation, pan-sharpened imagery, change detection, and in-depth remote sensing image analysis. WorldView-2 is a panchromatic imaging system featuring half-meter resolution imagery, combined with a multispectral capability featuring two meter resolution imagery.

The WorldView-2 multispectral capability provides 8 spectral bands, including 4 new colors: coastal blue, yellow, red edge, and near IR2.

• Coastal blue is useful for bathymetric studies.

• Yellow detects the “yellowness” of vegetation on land and in water.

• Red edge measures plant health and is useful for vegetation classification.

• Near Infrared 2 overlaps the Near IR1 band but is less affected by atmospheric influence and enables broader vegetation analysis.

The WorldView-2 collection scenarios are: long strip, large area collect, multiple point targets, and stereo area collect.

Table 16: WorldView-2 Characteristics

Sensor Bands: Pan: 450 - 800 nm; Multispectral: Coastal: 400 - 450 nm, Blue: 450 - 510 nm, Green: 510 - 580 nm, Yellow: 585 - 625 nm, Red: 630 - 690 nm, Red Edge: 705 - 745 nm, Near IR1: 770 - 895 nm, Near IR2: 860 - 1040 nm
Sensor Resolution (GSD = ground sample distance): Pan: 0.46 meters GSD at nadir, 0.52 meters GSD at 20° off-nadir; Multi: 1.84 meters GSD at nadir, 2.08 meters GSD at 20° off-nadir
Swath Width: 16.4 km at nadir


Source: DigitalGlobe, 2010 and Padwick, et al, 2010.

Radar Satellite Data

Simply put, radar data are produced when:

• a radar transmitter emits a beam of micro or millimeter waves,

• the waves reflect from the surfaces they strike, and

• the backscattered radiation is detected by the radar system’s receiving antenna, which is tuned to the frequency of the transmitted waves.

The resultant radar data can be used to produce radar images.
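The geometry behind this is simple timing: the pulse travels out and back at the speed of light, so the slant range to a target follows directly from the echo delay. A generic illustration (basic physics, not any particular sensor's processing):

```python
C = 299_792_458.0  # speed of light in m/s

def slant_range_m(echo_delay_s):
    # The round trip covers twice the range: R = c * t / 2.
    return C * echo_delay_s / 2.0

print(slant_range_m(50e-6))  # a 50-microsecond echo -> about 7,495 m
```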

There are many sensor-specific importers and direct read capabilities within ERDAS IMAGINE for most types of radar data. The SAR Metadata Editor in the IMAGINE Radar Mapping Suite can be used to attach SAR image metadata to SAR images including creating or editing the radar ephemeris.



A radar system can be airborne, spaceborne, or ground-based. Airborne radar systems have typically been mounted on civilian and military aircraft, but in 1978, the radar satellite Seasat-1 was launched. The radar data from that mission and subsequent spaceborne radar systems have been a valuable addition to the data available for use in GIS processing. Researchers are finding that a combination of the characteristics of radar data and visible/infrared data is providing a more complete picture of the Earth. In the last decade, the importance and applications of radar have grown rapidly.

Advantages of Using Radar Data

Radar data have several advantages over other types of remotely sensed imagery:

• Radar microwaves can penetrate the atmosphere day or night under virtually all weather conditions, providing data even in the presence of haze, light rain, snow, clouds, or smoke.

• Under certain circumstances, radar can partially penetrate arid and hyperarid surfaces, revealing subsurface features of the Earth.

• Although radar does not penetrate standing water, it can reflect the surface action of oceans, lakes, and other bodies of water. Surface eddies, swells, and waves are greatly affected by the bottom features of the water body, and a careful study of surface action can provide accurate details about the bottom features.

Radar Sensor Types Radar images are generated by two different types of sensors:

• SLAR (Side-looking Airborne Radar)—uses an antenna which is fixed below an aircraft and pointed to the side to transmit and receive the radar signal. (See Figure 26.)

• SAR—uses a side-looking, fixed antenna to create a synthetic aperture. SAR sensors are mounted on satellites and the NASA Space Shuttle. The sensor transmits and receives as it is moving. The signals received over a time interval are combined to create the image.

Both SLAR and SAR systems use side-looking geometry. Figure 26 shows a representation of an airborne SLAR system.


Figure 26: SLAR Radar

Source: Lillesand and Kiefer, 1987.

Figure 27 shows a graph of the data received from the radiation transmitted in Figure 26. Notice how the data correspond to the terrain in Figure 26. These data can be used to produce a radar image of the target area. A target is any object or feature that is the subject of the radar scan.

Figure 27: Received Radar Signal

Active and Passive Sensors

An active radar sensor gives off a burst of coherent radiation that reflects from the target, unlike a passive microwave sensor, which simply receives the low-level radiation naturally emitted by targets.

[Figure 26 labels: range direction, azimuth direction, sensor height at nadir, azimuth resolution, beam width, previous image lines; the illustrated terrain profile includes trees, a valley, hills, and radar shadow. Figure 27 plots received signal strength (DN) against time.]


Like the coherent light from a laser, the waves emitted by active sensors travel in phase and interact minimally on their way to the target area. After interaction with the target area, these waves are no longer in phase, because of the different distances they travel to different targets and because of single versus multiple bounce scattering.

Figure 28: Radar Reflection from Different Sources and Distances

Source: Lillesand and Kiefer, 1987.

Currently, these bands are commonly used for radar imaging systems:

More information about these radar systems is given later in this chapter.

[Figure 28 annotations: diffuse reflector, specular reflector, and corner reflector. Radar waves are transmitted in phase; once reflected, they are out of phase, interfering with each other and producing speckle noise.]

Table 17: Commonly Used Bands for Radar Imaging

Band   Frequency Range    Wavelength Range   Radar System
X      5.20-10.90 GHz     5.77-2.75 cm       USGS SLAR
C      3.9-6.2 GHz        3.8-7.6 cm         ERS-1, RADARSAT-1, RADARSAT-2
L      0.39-1.55 GHz      76.9-19.3 cm       SIR-A, SIR-B, ALOS PALSAR
P      0.225-0.391 GHz    40.0-76.9 cm       AIRSAR


Radar bands were named arbitrarily when radar was first developed by the military. The letter designations have no special meaning.

NOTE: The C band overlaps the X band. Wavelength ranges may vary slightly between sensors.
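The frequency and wavelength columns in Table 17 are two views of the same quantity, linked by λ = c / f. A quick conversion sketch (generic physics, nothing sensor-specific):

```python
C = 299_792_458.0  # speed of light in m/s

def wavelength_cm(freq_ghz):
    # lambda = c / f, reported in centimeters.
    return C / (freq_ghz * 1e9) * 100.0

print(round(wavelength_cm(5.405), 2))  # RADARSAT-2's 5.405 GHz -> ~5.55 cm
```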

Speckle Noise

Once out of phase, the radar waves can interfere constructively or destructively to produce light and dark pixels known as speckle noise. Speckle noise in radar data must be reduced before the data can be utilized. However, the radar image processing programs used to reduce speckle noise also produce changes to the image. This consideration, combined with the fact that different applications and sensor outputs necessitate different speckle removal models, has led ERDAS to offer several speckle reduction algorithms.

When processing radar data, the order in which the image processing programs are implemented is crucial. This is especially true when considering the removal of speckle noise. Since any image processing done before removal of the speckle results in the noise being incorporated into and degrading the image, do not rectify or in any way resample the pixel values before removing speckle noise. A rotation using nearest neighbor might be permissible.
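As an illustration of what a speckle reduction step does, here is a minimal sketch of one widely used approach, a basic Lee filter; this is a generic formulation, not the ERDAS algorithm, and the number of looks is an assumed property of the input image:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=7, looks=4):
    # Local statistics over a size x size window.
    img = img.astype(np.float64)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    var = mean_sq - mean * mean
    # Multiplicative speckle model: noise variance scales as mean^2 / looks.
    noise_var = (mean * mean) / looks
    # Keep the original pixel only where variance exceeds expected speckle.
    weight = np.clip((var - noise_var) / np.maximum(var, 1e-12), 0.0, 1.0)
    return mean + weight * (img - mean)
```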

The IMAGINE Radar utilities allow you to:

• import radar data into the GIS as a stand-alone source or as an additional layer with other imagery sources

• remove speckle noise

• enhance edges

• perform texture analysis

• perform radiometric correction

IMAGINE OrthoRadar™ allows you to orthorectify radar imagery. The IMAGINE InSAR™ module allows you to generate DEMs from SAR data using interferometric techniques. The IMAGINE StereoSAR DEM™ module allows you to generate DEMs from SAR data using stereoscopic techniques.

See "Enhancement" on page 455 and "Radar Concepts" on page 657 for more information on radar imagery enhancement.


Applications for Radar Data

Radar data can be used independently in GIS applications or combined with other satellite data, such as Landsat, SPOT, or AVHRR. Possible GIS applications for radar data include:

• Geology—radar’s ability to partially penetrate land cover and sensitivity to micro relief makes radar data useful in geologic mapping, mineral exploration, and archaeology.

• Classification—a radar scene can be merged with visible/infrared data as an additional layer(s) in vegetation classification for timber mapping, crop monitoring, and so forth.

• Glaciology—the ability to provide imagery of ocean and ice phenomena makes radar an important tool for monitoring climatic change through polar ice variation.

• Oceanography—radar is used for wind and wave measurement, sea-state and weather forecasting, and monitoring ocean circulation, tides, and polar oceans.

• Hydrology—radar data are proving useful for measuring soil moisture content and mapping snow distribution and water content.

• Ship monitoring—the ability to provide day/night all-weather imaging, as well as detect ships and associated wakes, makes radar a tool that can be used for ship navigation through frozen ocean areas such as the Arctic or North Atlantic Passage.

• Offshore oil activities—radar data are used to provide ice updates for offshore drilling rigs, determining weather and sea conditions for drilling and installation operations, and detecting oil spills.

• Pollution monitoring—radar can detect oil on the surface of water and can be used to track the spread of an oil spill.

Radar Sensors

Almaz-1

Almaz-1 was launched in 1991 and operated for 18 months before being deorbited in October 1992. Its predecessor, Almaz-T, was launched by the Soviet Union in 1987 and functioned for two years. Almaz-1 carried a single-frequency SAR and provided optically-processed data. The Almaz mission was largely kept secret.

Source: Russian Space Web, 2002.

Almaz-1 provided S-band information. It included a “single polarization SAR as well as a sounding radiometric scanner (RMS) system and several infrared bands” (Atlantis Scientific, Inc., 1997).


The swath width of Almaz-1 was 20-45 km, the range resolution was 15-30 m, and the azimuth resolution was 15 m.

Source: National Aeronautics and Space Administration, 1996; Atlantis Scientific, Inc., 1997.

ALOS PALSAR

PALSAR, Phased Array type L-band Synthetic Aperture Radar, is an active microwave sensor on board the ALOS satellite mission, launched in 2006. It provides a fine resolution mode and a ScanSAR mode, which allows a swath three to five times wider than conventional SAR images.

Table 18: PALSAR Sensor Characteristics

                    Fine Mode                  ScanSAR Mode
Center Frequency    1270 MHz (L-band)          1270 MHz (L-band)
Polarization        HH or VV; HH+HV or VV+VH   HH or VV
Range Resolution    7 to 44 m; 14 to 88 m      100 m (multilook)
Observation Swath   40 to 70 km                250 to 350 km
Bit Length          5 bits                     5 bits

Source: Japan Aerospace Exploration Agency, 2003b.

COSMO-SkyMed

The COSMO-SkyMed mission is a group of four satellites equipped with radar sensors for Earth observation for civil and defense use. The mission was developed by the Italian Space Agency (Agenzia Spaziale Italiana) and Telespazio. The program supplies data for emergency management services, environmental resources management, Earth topographic mapping, maritime management, natural resources monitoring, surveillance, interferometric products, and digital elevation models.

The first two satellites were launched in 2007, the third satellite launched in 2008, and the fourth satellite is under development. The sensors operate in various wide field and narrow field modes, with multi-polarimetric and multi-temporal capabilities.


Table 19: COSMO-SkyMed Imaging Characteristics

Mode                  Swath             Resolution
ScanSAR Huge region   200 km x 200 km   100 m pixel
ScanSAR Wide region   100 km x 100 km   30 m pixel
Stripmap HImage       40 km x 40 km     3 - 15 m pixel
Stripmap Pingpong     30 km x 30 km     15 m pixel
Spotlight 1           Classified        Classified
Spotlight 2           10 km x 10 km     1 m pixel


Sources: Telespazio, 2008 and e-GEOS, 2008.

Envisat

In 2002, the European Space Agency launched Envisat (ENVIronmental SATellite), an advanced polar-orbiting Earth observation satellite which provides measurements of the atmosphere, ocean, land, and ice. The Envisat mission provides for continuity of the observations started with the ERS-1 and ERS-2 satellite missions, notably atmospheric chemistry, ocean studies, and ice studies. Envisat flies in a sun-synchronous polar orbit at about 800 km altitude, with a repeat cycle of 35 days.

Envisat is equipped with these instruments:

• ASAR - Advanced Synthetic Aperture Radar operating at C-band.

• MERIS - Programmable, medium-spectral resolution spectrometer operating in the 390 nm to 1040 nm spectral range.

• AATSR - Advanced Along Track Scanning Radiometer continues the collection of ATSR-1 and ATSR-2 sea surface temperature data sets.

• RA-2 - Radar Altimeter 2 determines the two-way delay of the radar echo from the Earth’s surface to a very high precision. Also measures the power and shape of reflected radar pulses.

• MWR - Microwave radiometer measures the integrated atmospheric water vapor column and cloud liquid water content, as correction terms for the radar altimeter signal.



• GOMOS - Medium resolution spectrometer measures atmospheric constituents in the spectral bands between 250 nm to 675 nm, 756 nm to 773 nm, and 926 nm to 952 nm. It includes two photometers measuring in the spectral bands between 470 nm to 520 nm and 650 nm to 700 nm.

• MIPAS - Michelson Interferometer for Passive Atmospheric Sounding is a Fourier transform spectrometer for measuring gaseous emission spectra in the near to mid infrared range.

• SCIAMACHY - Imaging spectrometer measures trace gases in the troposphere and stratosphere.

• DORIS - Doppler Orbitography and Radio-positioning Integrated by Satellite instrument is a tracking system to determine the precise location of Envisat satellite.

• LRR - Laser Retro-Reflector supports orbit determination and range measurement calibration.

Source: European Space Agency, 2008b.

ERS-1

ERS-1, a radar satellite, was launched by ESA (the European Space Agency) in July of 1991; ESA announced the end of the ERS-1 mission in March 2000. ERS-1 was ESA’s first sun-synchronous polar-orbiting mission, acquiring more than 1.5 million Synthetic Aperture Radar scenes. The measurements of sea surface temperatures made by the ERS-1 Along-Track Scanning Radiometer are the most accurate ever made from space. These and other critical measurements are continued by the ERS-2 mission and Envisat.

Source: European Space Agency, 2008a.

One of its primary instruments was the Along-Track Scanning Radiometer (ATSR), which monitors changes in vegetation of the Earth’s surface. The instruments aboard ERS-1 include: SAR Image Mode, SAR Wave Mode, Wind Scatterometer, Radar Altimeter, and Along Track Scanning Radiometer-1 (European Space Agency, 1997).

Some of the information obtained from the ERS-1 and ERS-2 missions includes:

• maps of the surface of the Earth through clouds

• physical ocean features and atmospheric phenomena

• maps and ice patterns of polar regions


• database information for use in modeling

• surface elevation changes

According to ESA, “. . . ERS-1 provides both global and regional views of the Earth, regardless of cloud coverage and sunlight conditions. An operational near-real-time capability for data acquisition, processing and dissemination, offering global data sets within three hours of observation, has allowed the development of time-critical applications particularly in weather, marine and ice forecasting, which are of great importance for many industrial activities” (European Space Agency, 1995).

Source: European Space Agency, 1995.

ERS-2

ERS-2, a radar satellite, was launched by ESA in April 1995. It has an instrument called GOME, which stands for Global Ozone Monitoring Experiment. This instrument is designed to evaluate atmospheric chemistry. ERS-2, like ERS-1, makes use of the ATSR. The instruments aboard ERS-2 include: SAR Image Mode, SAR Wave Mode, Wind Scatterometer, Radar Altimeter, Along Track Scanning Radiometer-2, and the Global Ozone Monitoring Experiment.

ERS-2 receiving stations are located all over the world, and facilities that process and archive ERS-2 data are also located around the globe. One of the benefits of the ERS-2 satellite is that it provides data from the same type of synthetic aperture radar (SAR) carried by ERS-1.

ERS-2 provides many different types of information. See ERS-1 on page 100 for some of the most common types. Data obtained from ERS-2, used in conjunction with that from ERS-1, enables you to perform interferometric tasks. Using the data from the two sensors, DEMs can be created.

For information about ERDAS IMAGINE’s interferometric software, IMAGINE InSAR, see IMAGINE InSAR Theory on page 679.

Source: European Space Agency, 1995.

JERS-1

JERS stands for Japanese Earth Resources Satellite. The JERS-1 satellite obtained data from 1992 to 1998, and has been superseded by the ALOS mission.


See ALOS on page 68 for information about the Advanced Land Observing Satellite (ALOS).

Source: Japan Aerospace Exploration Agency, 2007.

The JERS-1 satellite was launched in February of 1992, with an SAR instrument and a 4-band optical sensor aboard. The SAR sensor’s ground resolution was 18 m, and the optical sensor’s ground resolution was roughly 18 m across-track and 24 m along-track. The revisit time of the satellite was every 44 days. The satellite travelled at an altitude of 568 km, at an inclination of 97.67°. The JERS-1 bands are shown in Table 20.

Table 20: JERS-1 Bands and Wavelengths

Band   Wavelength
1      0.52 to 0.60 μm
2      0.63 to 0.69 μm
3      0.76 to 0.86 μm
4¹     0.76 to 0.86 μm
5      1.60 to 1.71 μm
6      2.01 to 2.12 μm
7      2.13 to 2.25 μm
8      2.27 to 2.40 μm

¹ Viewing 15.3° forward.

Source: Earth Remote Sensing Data Analysis Center, 2000.

JERS-1 data come in two different formats: European and Worldwide. The European data format consists mainly of coverage for Europe and Antarctica. The Worldwide data format has images that were acquired from stations around the globe. According to NASA, “a reduction in transmitter power has limited the use of JERS-1 data” (National Aeronautics and Space Administration, 1996).

Source: Eurimage, 1998; National Aeronautics and Space Administration, 1996.

RADARSAT

The RADARSAT satellite was developed by the Canadian Space Agency and launched in 1995. With the development of RADARSAT-2, the original RADARSAT is also known as RADARSAT-1.



The RADARSAT satellite carries a SAR, which is capable of transmitting signals that can be received through clouds and during nighttime hours. The RADARSAT satellite has multiple imaging modes for collecting data, which include Fine, Standard, Wide, ScanSAR Narrow, ScanSAR Wide, Extended (H), and Extended (L). The resolution and swath width vary with each of these modes, but in general, Fine offers the best resolution: 8 m.

Table 21: RADARSAT Beam Mode Resolution

Beam Mode                  Resolution
Fine Beam Mode             8 m
Standard Beam Mode         25 m
Wide Beam Mode             30 m
ScanSAR Narrow Beam Mode   50 m
ScanSAR Wide Beam Mode     100 m
Extended High Beam Mode    25 m
Extended Low Beam Mode     35 m

The types of RADARSAT image products include: Single Data, Single Look Complex, Path Image, Path Image Plus, Map Image, Precision Map Image, and Orthorectified. You can obtain this data in forms ranging from CD-ROM to print.

The RADARSAT satellite uses a single frequency, C-band. The altitude of the satellite is 496 miles, or 798 km. The satellite is able to image the entire Earth, and its path is repeated every 24 days. The swath width is 500 km. Daily coverage is available of the Arctic, and any area of Canada can be obtained within three days.

Source: RADARSAT, 1999; Space Imaging, 1999c.

RADARSAT-2

RADARSAT-2, launched in 2007, is a SAR satellite developed by the Canadian Space Agency and MacDonald, Dettwiler, and Associates, Ltd. (MDA). The satellite advancements include 3 meter high-resolution imaging, flexibility in polarization selection, left- and right-looking imaging options, and superior data storage.

In addition to the RADARSAT-1 beam modes, RADARSAT-2 offers Ultra-Fine, Multi-Look Fine, Fine Quad-Pol, and Standard Quad-Pol beam modes. Quadrature-polarization means that four images are acquired simultaneously: two co-polarized images (HH and VV) and two cross-polarized images (HV and VH).

Table 22: RADARSAT-2 Characteristics

Geometry of orbit: near-polar, sun-synchronous
Orbit Altitude: 798 km
Orbit Inclination: 98.6 degrees
Orbit Repeat Cycle: 24 days
Frequency Band: C-band (5.405 GHz)
Channel Bandwidth: 11.6, 17.3, 30, 50, 100 MHz
Channel Polarization: HH, HV, VH, VV
Spatial Resolution: 3 meters to 100 meters



Source: RADARSAT-2, 2008.
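The channel bandwidths in Table 22 tie directly to the achievable resolutions through the textbook slant-range relation ΔR ≈ c / (2B); a quick check (a generic relation, not MDA's processing chain, and real product resolutions also depend on mode and geometry):

```python
C = 299_792_458.0  # speed of light in m/s

def slant_range_resolution_m(bandwidth_hz):
    # delta_R = c / (2 * B) for a pulse of bandwidth B.
    return C / (2.0 * bandwidth_hz)

print(round(slant_range_resolution_m(100e6), 2))  # 100 MHz -> ~1.5 m
```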

SIR-A

SIR stands for Spaceborne Imaging Radar. SIR-A was launched and collected data in 1981. The SIR-A mission built on the Seasat SAR mission that preceded it by increasing the incidence angle with which it captured images. The primary goal of the SIR-A mission was to collect geological information. This information did not have as pronounced a layover effect as previous imagery.

An important achievement of SIR-A is that it was capable of penetrating surfaces to obtain information. For example, NASA reports that the L-band capability of SIR-A enabled the discovery of dry river beds in the Sahara Desert. SIR-A used L-band, had a swath width of 50 km, a range resolution of 40 m, and an azimuth resolution of 40 m (Atlantis Scientific, Inc., 1997).

For information on the ERDAS IMAGINE software that reduces layover effect, IMAGINE OrthoRadar, see IMAGINE OrthoRadar Theory on page 657.

Source: National Aeronautics and Space Administration, 1995a; National Aeronautics and Space Administration, 1996; Atlantis Scientific, Inc., 1997.



SIR-B

SIR-B was launched and collected data in 1984. SIR-B improved over SIR-A by using an articulating antenna, which allowed the incidence angle to range between 15 and 60 degrees. This enabled the mapping of surface features using “multiple-incidence angle backscatter signatures” (National Aeronautics and Space Administration, 1996). SIR-B used L-band, had a swath width of 10-60 km, a range resolution of 60-10 m, and an azimuth resolution of 25 m (Atlantis Scientific, Inc., 1997).

Source: National Aeronautics and Space Administration, 1995a; National Aeronautics and Space Administration, 1996; Atlantis Scientific, Inc., 1997.

SIR-C

The SIR-C sensor was flown onboard two separate NASA Space Shuttle flights in 1994. Flight 1 was notable for carrying a fully polarimetric, multi-frequency spaceborne SAR including X-band, and for demonstrating ScanSAR for wide-swath imaging. Flight 2 was notable as the first SAR to re-fly, for targeted repeat-pass interferometry, and again for demonstrating ScanSAR for wide-swath imaging.

Source: National Aeronautics and Space Administration, 2006.

SIR-C is part of a radar system, SIR-C/X-SAR, which flew in 1994. The system is able to “. . . measure, from space, the radar signature of the surface at three different wavelengths, and to make measurements for different polarizations at two of those wavelengths” (National Aeronautics and Space Administration, 1997). Moreover, it can supply “. . . images of the magnitude of radar backscatter for four polarization combinations” (National Aeronautics and Space Administration, 1995a).

The data provided by SIR-C/X-SAR allows measurement of the following:

• vegetation type, extent, and deforestation

• soil moisture content

• ocean dynamics, wave and surface wind speeds and directions

• volcanism and tectonic activity

• soil erosion and desertification


The antenna of the system is composed of three antennas: one at L-band, one at C-band, and one at X-band. The antenna was assembled by the Jet Propulsion Laboratory. The acquisition of data at three different wavelengths makes SIR-C/X-SAR data very useful. SIR-C and X-SAR do not have to be operated together: they can also be operated independently of one another.

Table 23: SIR-C/X-SAR Bands and Wavelengths

Band     Wavelength
L-Band   0.235 m
C-Band   0.058 m
X-Band   0.031 m

SIR-C/X-SAR data come in resolutions from 10 to 200 m. The swath width of the sensor varies from 15 to 90 km, depending on the direction the antenna is pointing. The system orbited the Earth at 225 km above the surface.

Source: National Aeronautics and Space Administration, 1995a, National Aeronautics and Space Administration, 1997.

TerraSAR-X

TerraSAR-X, launched in 2007, is a German satellite manufactured in a public-private partnership between the German Aerospace Center (DLR), Astrium GmbH, and the German Ministry of Education and Science (BMBF). TerraSAR-X carries a high frequency X-band SAR instrument based on active phased array antenna technology. The satellite orbit is sun-synchronous at 514 km altitude, at 98 degrees inclination, with an 11 day repeat cycle.

The satellite sensor operates in several modes: Spotlight, High Resolution Spotlight, Stripmap, and ScanSAR, at varying geometric resolutions between 1 and 16 meters. It provides single or dual polarization data.


Table 24: TerraSAR-X Imaging Characteristics

Mode                             Swath                Resolution
Spotlight (SL)                   10 x 10 km scene     1 - 3 meters
High Resolution Spotlight (HS)   5 km x 10 km scene   1 - 2 meters
Stripmap (SM)                    30 km strip          3 - 6 meters
ScanSAR (SC)                     100 km strip         16 meters


Source: DLR (German Aerospace Center), 2008.

Image Data from Aircraft

Image data can also be acquired from multispectral scanners or radar sensors aboard aircraft, as well as satellites. This is useful if there is not time to wait for the next satellite to pass over a particular area, or if it is necessary to achieve a specific spatial or spectral resolution that cannot be attained with satellite sensors.

For example, this type of data can be beneficial in the event of a natural or man-made disaster, because there is more control over when and where the data are gathered.

Two common types of airborne image data are:

• Airborne Synthetic Aperture Radar (AIRSAR)

• Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)

Aircraft Radar Imagery

AIRSAR

AIRSAR was an experimental airborne radar sensor developed by JPL, Pasadena, California, under a contract with NASA. The AIRSAR mission extended from 1988 to 2004. AIRSAR was an imaging tool mounted aboard a modified NASA DC-8 aircraft. This sensor collected data at three frequencies:

• C-band

• L-band

• P-band

Source: National Aeronautics and Space Administration, 2008.

Because this sensor measured at three different wavelengths, different scales of surface roughness were obtained. The AIRSAR sensor had an IFOV of 10 m and a swath width of 12 km. AIRSAR data have been used in many applications, such as measuring snow wetness, classifying vegetation, and estimating soil moisture.

NOTE: These data are distributed in a compressed format. They must be decompressed before loading with an algorithm available from JPL. See Addresses to Contact on page 128 for contact information.


Aircraft Optical Imagery

AVIRIS

The AVIRIS was also developed by JPL under a contract with NASA. AVIRIS data have been available since 1992. This sensor produces multispectral data that have 224 narrow bands. These bands are 10 nm wide and cover the spectral range of 0.4 to 2.4 μm. The swath width is 11 km, and the spatial resolution is 20 m. This sensor is flown at an altitude of approximately 20 km. The data are recorded at 10-bit radiometric resolution.

Daedalus TMS

Daedalus is a thematic mapper simulator (TMS), which simulates the characteristics, such as spatial and radiometric, of the TM sensor on Landsat spacecraft. The Daedalus TMS flies at 65,000 feet and has a ground resolution of 25 meters. The total scan angle is 43 degrees, and the swath width is 15.6 km. Daedalus TMS is flown aboard the NASA ER-2 aircraft. The Daedalus TMS spectral bands are as follows:

Table 25: Daedalus TMS Bands and Wavelengths

Daedalus Channel   TM Band   Wavelength
1                  A         0.42 to 0.45 μm
2                  1         0.45 to 0.52 μm
3                  2         0.52 to 0.60 μm
4                  B         0.60 to 0.62 μm
5                  3         0.63 to 0.69 μm
6                  C         0.69 to 0.75 μm
7                  4         0.76 to 0.90 μm
8                  D         0.91 to 1.05 μm
9                  5         1.55 to 1.75 μm
10                 7         2.08 to 2.35 μm
11                 6         8.5 to 14.0 μm (low gain)
12                 6         8.5 to 14.0 μm (high gain)

Source: National Aeronautics and Space Administration, 1995b.


Image Data from Scanning

Hardcopy maps and photographs can be incorporated into the ERDAS IMAGINE environment through the use of a scanning device to transfer them into a digital (raster) format.

In scanning, the map, photograph, transparency, or other object to be scanned is typically placed on a flat surface, and the scanner scans across the object to record the image. The image is then transferred from analog to digital data.

There are many commonly used scanners for GIS and other desktop applications, such as Eikonix (Eikonix Corp., Huntsville, Alabama) or Vexcel (Vexcel Imaging Corp., Boulder, Colorado). Many scanners produce a Tagged Image File Format (TIFF) file, which can be used directly by ERDAS IMAGINE.

Use the Import/Export function to import scanned data. Eikonix data can be obtained in the ERDAS IMAGINE .img format using the XSCAN™ Tool by Ektron and then imported directly into ERDAS IMAGINE.

Photogrammetric Scanners

There are photogrammetric quality scanners and desktop scanners. Photogrammetric quality scanners are special devices capable of high image quality and excellent positional accuracy. Use of this type of scanner results in geometric accuracies similar to traditional analog and analytical photogrammetric instruments. These scanners are necessary for digital photogrammetric applications that have high accuracy requirements.

These units usually scan only film, because film is superior to paper, both in terms of image detail and geometry. These units usually have a Root Mean Square Error (RMSE) positional accuracy of 4 microns or less, and are capable of scanning at a maximum resolution of 5 to 10 microns.

The required pixel resolution varies depending on the application. Aerial triangulation and feature collection applications often scan in the 10 to 15 micron range. Orthophoto applications often use 15- to 30-micron pixels. Color film is less sharp than panchromatic; therefore, color ortho applications often use 20- to 40-micron pixels.

Desktop Scanners Desktop scanners are general purpose devices. They lack the image detail and geometric accuracy of photogrammetric quality units, but they are much less expensive. When using a desktop scanner, you should make sure that the active area is at least 9 × 9 inches (that is, A3-type scanners), enabling you to capture the entire photo frame.


Desktop scanners are appropriate for less rigorous uses, such as digital photogrammetry in support of GIS or remote sensing applications. Calibrating these units improves geometric accuracy, but the results are still inferior to photogrammetric units. The image correlation techniques that are necessary for automatic tie point collection and elevation extraction are often sensitive to scan quality. Therefore, errors can be introduced into the photogrammetric solution that are attributable to scanning errors.

Aerial Photography

Aerial photographs, such as NAPP photos, are among the most widely used data sources in photogrammetry. They cannot be utilized in softcopy or digital photogrammetric applications until scanned. The standard dimensions of the aerial photos are 9 × 9 inches or 230 × 230 mm. The ground area covered by the photo depends on the scale. The scanning resolution determines the digital image file size and pixel size. For example, for a standard block of 1:40,000 scale black-and-white aerial photos scanned at 25 microns (1016 dots per inch), the ground pixel size is 1 × 1 m. The resulting file size is about 85 MB. It is not recommended to scan a photo at a resolution finer than 5 microns (5080 dpi).
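The arithmetic behind these figures: the ground pixel size is the photo scale denominator times the scan spot size, and the file size follows from the photo dimensions. A small sketch reproducing the numbers above:

```python
def ground_pixel_m(scale_denom, scan_microns):
    # Ground pixel = photo scale x scan spot size.
    # 1:40,000 at 25 microns -> 40,000 * 25e-6 m = 1.0 m.
    return scale_denom * scan_microns * 1e-6

def scan_dpi(scan_microns):
    # 25,400 microns per inch; 25 microns -> 1016 dpi.
    return 25400.0 / scan_microns

def photo_megabytes(photo_mm=230.0, scan_microns=25.0, bytes_per_pixel=1):
    # A 230 x 230 mm photo at 25 microns is 9,200 x 9,200 pixels.
    pixels_per_side = photo_mm * 1000.0 / scan_microns
    return pixels_per_side ** 2 * bytes_per_pixel / 1e6

print(ground_pixel_m(40000, 25))  # 1.0 (meters)
print(scan_dpi(25))               # 1016.0 (dots per inch)
print(photo_megabytes())          # 84.64, i.e. about 85 MB
```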

DOQs

DOQ stands for digital orthophoto quadrangle. USGS defines a DOQ as a computer-generated image of an aerial photo, which has been orthorectified to give it map coordinates. DOQs can provide accurate map measurements.

The format of the DOQ is a grayscale image that covers 3.75 minutes of latitude by 3.75 minutes of longitude. DOQs use the North American Datum of 1983 and the Universal Transverse Mercator projection. Each pixel of a DOQ represents a square meter. 3.75-minute quarter quadrangles have a 1:12,000 scale; 7.5-minute quadrangles have a 1:24,000 scale. Some DOQs are available in color-infrared, which is especially useful for vegetation monitoring.

DOQs can be used in land use and planning, management of natural resources, environmental impact assessments, and watershed analysis, among other applications. A DOQ can also be used as “a cartographic base on which to overlay any number of associated thematic layers for displaying, generating, and modifying planimetric data or associated data files” (United States Geological Survey, 1999b).

According to the USGS:

DOQ production begins with an aerial photo and requires four elements: (1) at least three ground positions that can be identified within the photo; (2) camera calibration specifications, such as focal length; (3) a digital elevation model (DEM) of the area covered by the photo; (4) and a high-resolution digital image of the photo, produced by scanning. The photo is processed pixel by pixel to produce an image with features in true geographic positions (United States Geological Survey, 1999b).


Source: United States Geological Survey, 1999b.
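The nominal pixel dimensions of a quarter quad follow from the 3.75-minute extent and the one-meter pixels; a rough check (assumes about 1,852 m per arc-minute of latitude; actual image sizes vary with latitude and product overedge):

```python
def doq_rows(minutes_of_latitude=3.75, pixel_m=1.0):
    # One arc-minute of latitude is roughly 1,852 m (one nautical mile).
    return minutes_of_latitude * 1852.0 / pixel_m

print(round(doq_rows()))  # ~6945 one-meter rows for a 3.75-minute quad
```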

ADRG Data

ADRG (ARC Digitized Raster Graphic) data come from the National Imagery and Mapping Agency (NIMA), which was formerly known as the Defense Mapping Agency (DMA). ADRG data are primarily used for military purposes by defense contractors. The data are in 128 × 128 pixel tiled, 8-bit format stored on CD-ROM. ADRG data provide large amounts of hardcopy graphic data without having to store and maintain the actual hardcopy graphics.

ADRG data consist of digital copies of NIMA hardcopy graphics transformed into the ARC system and accompanied by ASCII encoded support files. These digital copies are produced by scanning each hardcopy graphic into three images: red, green, and blue. The data are scanned at a nominal collection interval of 100 microns (254 lines per inch). When these images are combined, they provide a 3-band digital representation of the original hardcopy graphic.

ARC System

The ARC system (Equal Arc-Second Raster Chart/Map) provides a rectangular coordinate and projection system at any scale for the Earth’s ellipsoid, based on the World Geodetic System 1984 (WGS 84). The ARC System divides the surface of the ellipsoid into 18 latitudinal bands called zones. Zones 1 - 9 cover the Northern hemisphere and zones 10 - 18 cover the Southern hemisphere. Zone 9 is the North Polar region. Zone 18 is the South Polar region.

Distribution Rectangles

For distribution, ADRG are divided into geographic data sets called Distribution Rectangles (DRs). A DR may include data from one or more source charts or maps. The boundary of a DR is a geographic rectangle that typically coincides with chart and map neatlines.

Zone Distribution Rectangles

Each DR is divided into Zone Distribution Rectangles (ZDRs). There is one ZDR for each ARC System zone covering any part of the DR. The ZDR contains all the DR data that fall within that zone’s limits. ZDRs typically overlap by 1,024 rows of pixels, which allows for easier mosaicking. Each ZDR is stored on the CD-ROM as a single raster image file (.IMG). Included in each image file are all raster data for a DR from a single ARC System zone, and padding pixels needed to fulfill format requirements. The padding pixels are black and have a zero value.

The padding pixels are not imported by ERDAS IMAGINE, nor are they counted when figuring the pixel height and width of each image.


ADRG File Format

Each CD-ROM contains up to eight different file types which make up the ADRG format. ERDAS IMAGINE imports three types of ADRG data files:

• .OVR (Overview)

• .IMG (Image)

• .Lxx (Legend or marginalia data)

NOTE: Compressed ADRG (CADRG) is a different format, which may be imported or read directly.

The ADRG .IMG and .OVR file formats are different from the ERDAS IMAGINE .img and .ovr file formats.

.OVR (overview)

The overview file contains a 16:1 reduced resolution image of the whole DR. There is an overview file for each DR on a CD-ROM.

Importing ADRG Subsets

Since DRs can be rather large, it may be beneficial to import a subset of the DR data for the application. ERDAS IMAGINE enables you to define a subset of the data from the preview image (see Figure 30).

You can import from only one ZDR at a time. If a subset covers multiple ZDRs, they must be imported separately and mosaicked with the Mosaic option.

Figure 29: ADRG Overview File Displayed in a Viewer


The white rectangle in Figure 30 represents the DR. The subset area in this illustration would have to be imported as three files: one for each zone in the DR. Notice how the ZDRs overlap; therefore, the .IMG files for Zones 2 and 4 would also be included in the subset area.

Figure 30: Subset Area with Overlapping ZDRs

.IMG (scanned image data)

The .IMG files are the data files containing the actual scanned hardcopy graphic(s). Each .IMG file contains one ZDR plus padding pixels. The Import function converts the .IMG data files on the CD-ROM to the ERDAS IMAGINE file format (.img). The image file can then be displayed in a Viewer.

.Lxx (legend data)

Legend files contain a variety of diagrams and accompanying information. This is information that typically appears in the margin or legend of the source graphic.

This information can be imported into ERDAS IMAGINE and viewed. It can also be added to a map composition with the ERDAS IMAGINE Map Composer.

Each legend file contains information based on one of these diagram types:

• Index (IN)—shows the approximate geographical position of the graphic and its relationship to other graphics in the region.



• Elevation/Depth Tint (EL)—depicts the colors or tints using a multicolored graphic that represent different elevations or depth bands on the printed map or chart.

• Slope (SL)—represents the percent and degree of slope appearing in slope bands.

• Boundary (BN)—depicts the geopolitical boundaries included on the map or chart.

• Accuracy (HA, VA, AC)—depicts the horizontal and vertical accuracies of selected map or chart areas. AC represents a combined horizontal and vertical accuracy diagram.

• Geographic Reference (GE)—depicts the positioning information as referenced to the World Geographic Reference System.

• Grid Reference (GR)—depicts specific information needed for positional determination with reference to a particular grid system.

• Glossary (GL)—gives brief lists of foreign geographical names appearing on the map or chart with their English-language equivalents.

• Landmark Feature Symbols (LS)—depict navigationally-prominent entities.

ARC System Charts

The ADRG data on each CD-ROM are based on one of these chart types from the ARC system:

Table 26: ARC System Chart Types

ARC System Chart Type                         Scale
GNC (Global Navigation Chart)                 1:5,000,000
JNC-A (Jet Navigation Chart - Air)            1:3,000,000
JNC (Jet Navigation Chart)                    1:2,000,000
ONC (Operational Navigation Chart)            1:1,000,000
TPC (Tactical Pilot Chart)                    1:500,000
JOG-A (Joint Operations Graphic - Air)        1:250,000
JOG-G (Joint Operations Graphic - Ground)     1:250,000
JOG-C (Joint Operations Graphic - Combined)   1:250,000
JOG-R (Joint Operations Graphic - Radar)      1:250,000
ATC (Series 200 Air Target Chart)             1:200,000
TLM (Topographic Line Map)                    1:50,000


Each ARC System chart type has certain legend files associated with the image(s) on the CD-ROM. The legend files associated with each chart type are checked in Table 27.

Table 27: Legend Files for the ARC System Chart Types

ARC System Chart   IN EL SL BN VA HA AC GE GR GL LS
GNC                •  •
JNC / JNC-A        •  •  •  •  •
ONC                •  •  •  •  •
TPC                •  •  •  •  •  •
JOG-A              •  •  •  •  •  •  •  •
JOG-G / JOG-C      •  •  •  •  •  •  •
JOG-R              •  •  •  •  •  •
ATC                •  •  •  •  •
TLM                •  •  •  •  •  •

ADRG File Naming Convention

The ADRG file naming convention is based on a series of codes: ssccddzz

• ss = the chart series code (see the table of ARC System charts)

• cc = the country code

• dd = the DR number on the CD-ROM (01-99). DRs are numbered beginning with 01 for the northwesternmost DR and increasing sequentially west to east, then north to south.

• zz = the zone rectangle number (01-18)

For example, in the ADRG filename JNUR0101.IMG:

• JN = Jet Navigation. This ADRG file is taken from a Jet Navigation chart.




• UR = Europe. The data is coverage of a European continent.

• 01 = This is the first DR on the CD-ROM, providing coverage of the northwestern edge of the image area.

• 01 = This is the first zone rectangle of the DR.

• .IMG = This file contains the actual scanned image data for a ZDR.
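A small parsing sketch for the image-file convention just described (a hypothetical helper; legend file names, covered below, replace the zone rectangle field with a two-letter diagram code and would need separate handling):

```python
def parse_adrg_name(filename):
    # ssccddzz.ext, e.g. "JNUR0101.IMG" (image and overview names only).
    stem, _, ext = filename.partition(".")
    return {
        "series": stem[0:2],               # e.g. JN = Jet Navigation
        "country": stem[2:4],              # e.g. UR = Europe
        "dr_number": int(stem[4:6]),       # Distribution Rectangle, 01-99
        "zone_rectangle": int(stem[6:8]),  # zone rectangle, 01-18
        "extension": ext,                  # IMG, OVR, Lxx
    }

print(parse_adrg_name("JNUR0101.IMG"))
```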

You may change this name when the file is imported into ERDAS IMAGINE. If you do not specify a file name, ERDAS IMAGINE uses the ADRG file name for the image.

Legend File Names

Legend file names include a code to designate the type of diagram information contained in the file (see the previous legend file description). For example, the file JNUR01IN.L01 means:

• JN = Jet Navigation. This ADRG file is taken from a Jet Navigation chart.

• UR = Europe. The data provide coverage of the European continent.

• 01 = This is the first DR on the CD-ROM, providing coverage of the northwestern edge of the image area.

• IN = This indicates that this file is an index diagram from the original hardcopy graphic.

• .L01 = This legend file contains information for the source graphic 01. The source graphics in each DR are numbered beginning with 01 for the northwesternmost source graphic, increasing sequentially west to east, then north to south. Source directories and their files include this number code within their names.

For more detailed information on ADRG file naming conventions, see the National Imagery and Mapping Agency Product Specifications for ARC Digitized Raster Graphics (ADRG), published by the NIMA Aerospace Center.

ADRI Data

ADRI (ARC Digital Raster Imagery) data, like ADRG data, come from NIMA and are currently available only to Department of Defense contractors. The data are in 128 × 128 pixel tiled, 8-bit format, stored on 8 mm tape in band sequential format.


ADRI consists of SPOT panchromatic satellite imagery transformed into the ARC system and accompanied by ASCII encoded support files. Like ADRG, ADRI data are stored in the ARC system in DRs. Each DR consists of all or part of one or more images mosaicked to meet the ARC bounding rectangle, which encloses a 1 degree by 1 degree geographic area. (See Figure 31.) Source images are orthorectified to mean sea level using NIMA Level I Digital Terrain Elevation Data (DTED) or equivalent data (Air Force Intelligence Support Agency, 1991).

See the previous section on ADRG data for more information on the ARC system. See DTED on page 123 for more information.

Figure 31: Seamless Nine Image DR

In ADRI data, each DR contains only one ZDR. Each ZDR is stored as a single raster image file, with no overlapping areas. There are six different file types that make up the ADRI format: two types of data files, three types of header files, and a color test patch file. ERDAS IMAGINE imports two types of ADRI data files:

• .OVR (Overview)

• .IMG (Image)



The ADRI .IMG and .OVR file formats are different from the ERDAS IMAGINE .img and .ovr file formats.

.OVR (overview)

The overview file (.OVR) contains a 16:1 reduced resolution image of the whole DR. There is an overview file for each DR on a tape. The .OVR images show the mosaicking from the source images and the dates when the source images were collected. (See Figure 32.) This information does not appear on the ZDR image.

Figure 32: ADRI Overview File Displayed in a Viewer

.IMG (scanned image data)

The .IMG files contain the actual mosaicked images. Each .IMG file contains one ZDR plus any padding pixels needed to fit the ARC boundaries. Padding pixels are black and have a zero data value. The ERDAS IMAGINE Import function converts the .IMG data files to the ERDAS IMAGINE file format (.img). The image file can then be displayed in a Viewer. Padding pixels are not imported, nor are they counted in image height or width.
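
The note above that padding pixels are black, zero-valued, and dropped on import suggests a simple bounding-box crop. A minimal numpy sketch (illustrative only, not the actual ERDAS IMAGINE import logic, and assuming real data values are nonzero):

    import numpy as np

    framed = np.zeros((8, 8), dtype=np.uint8)
    framed[2:6, 3:7] = 97                      # image data inside zero padding

    rows = np.flatnonzero(framed.any(axis=1))  # rows containing any data
    cols = np.flatnonzero(framed.any(axis=0))  # columns containing any data
    cropped = framed[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
    print(cropped.shape)                       # (4, 4)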

ADRI File Naming Convention

The ADRI file naming convention is based on a series of codes: ssccddzz

• ss = the image source code:

- SP (SPOT panchromatic)
- SX (SPOT multispectral) (not currently available)
- TM (Landsat Thematic Mapper) (not currently available)

• cc = the country code


• dd = the DR number on the tape (01-99). DRs are numbered beginning with 01 for the northwesternmost DR and increasing sequentially west to east, then north to south.

• zz = the zone rectangle number (01-18)

For example, in the ADRI filename SPUR0101.IMG:

• SP = SPOT 10 m panchromatic image

• UR = Europe. The data provide coverage of the European continent.

• 01 = This is the first Distribution Rectangle on the tape, providing coverage of the northwestern edge of the image area.

• 01 = This is the first zone rectangle of the Distribution Rectangle.

• .IMG = This file contains the actual scanned image data for a ZDR.

You may change this name when the file is imported into ERDAS IMAGINE. If you do not specify a file name, ERDAS IMAGINE uses the ADRI file name for the image.

Raster Product Format

The Raster Product Format (RPF), from NIMA, is primarily used for military purposes by defense contractors. RPF data are organized in 1536 × 1536 frames, with an internal tile size of 256 × 256 pixels. RPF data are stored in an 8-bit format, with or without a pseudocolor lookup table, on CD-ROM.

RPF data are projected to the ARC system, based on the World Geodetic System 1984 (WGS 84). The ARC system divides the surface of the ellipsoid into 18 latitudinal bands called zones. Zones 1-9 cover the Northern hemisphere and zones A-J cover the Southern hemisphere. Zone 9 is the North Polar region; zone J is the South Polar region.

Polar data are projected to the Azimuthal Equidistant projection. In nonpolar zones, data are in the Equirectangular projection, which is proportional to latitude and longitude. ERDAS IMAGINE includes the option to use either Equirectangular or Geographic coordinates for nonpolar RPF data. The aspect ratio of projected RPF data is nearly 1; frames appear to be square, and measurement is possible. Unprojected RPF data seldom have an aspect ratio of 1, but may be easier to combine with other data in Geographic coordinates.

Two military products are currently based upon the general RPF specification:


• Controlled Image Base (CIB)

• Compressed ADRG (CADRG)

RPF employs Vector Quantization (VQ) to compress the frames. A vector is a 4 × 4 tile of 8-bit pixel values. VQ evaluates all of the vectors within the image, and reduces each vector to a single 12-bit lookup value. Since only 4096 unique vector values are possible, VQ is lossy, but the space savings are substantial. Most of the processing effort of VQ is incurred in the compression stage, permitting fast decompression by the users of the data in the field.

RPF data are stored on CD-ROM, with the following structure:

• The root of the CD-ROM contains an RPF directory. This RPF directory is often referred to as the root of the product.

• The RPF directory contains a table-of-contents file, named A.TOC, which describes the location of all of the frames in the product, and

• The RPF directory contains one or more subdirectories containing RPF frame files. RPF frame file names typically encode the map zone and location of the frame within the map series.

• Overview images may appear at various points in the directory tree. Overview images illustrate the location of a set of frames with respect to political and geographic boundaries. Overview images typically have an .OVx file extension, such as .OVR or .OV1.

All RPF frames, overview images, and table-of-contents files are physically formatted within an NITF message. Since an RPF image is broken up into several NITF messages, ERDAS IMAGINE treats RPF and NITF as distinct formats.
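
As a toy illustration of the vector quantization scheme described above, the sketch below encodes an image by replacing each 4 × 4 tile with the index of its nearest codeword, then decodes by table lookup. The random codebook is purely illustrative; real RPF codebooks are derived from the imagery during compression:

    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
    codebook = rng.integers(0, 256, size=(4096, 16))  # 4096 = 12-bit indices

    # Break the image into 4 x 4 tiles, flattened to 16-element vectors.
    tiles = image.reshape(64, 4, 64, 4).transpose(0, 2, 1, 3).reshape(-1, 16)

    # Encode: nearest codeword by squared Euclidean distance (the lossy step).
    indices = np.empty(len(tiles), dtype=np.uint16)
    for i, tile in enumerate(tiles):
        indices[i] = np.argmin(((codebook - tile) ** 2).sum(axis=1))

    # Decode: a simple table lookup, which is why decompression is fast.
    decoded = codebook[indices]
    print(indices.shape, decoded.shape)  # (4096,) (4096, 16)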

Loading RPF Data

RPF frames may be imported or read directly. The direct read feature, included in ERDAS IMAGINE, is generally preferable since multiple frames with the same resolution can be read as a single image. Import may still be desirable if you wish to examine the metadata provided by a specific frame. ERDAS IMAGINE supplies four image types related to RPF:

• RPF Product—combines the entire contents of an RPF CD, excluding overview images, into a single image, provided all frames are within the same ARC map zone and resolution. The RPF directory at the root of the CD-ROM is the image to be loaded.

• RPF Frame—reads a single frame file.

• RPF Overview—reads a single overview frame file.


CIB

CIB is grayscale imagery produced from rectified imagery and physically formatted as a compressed RPF. CIB offers a compression ratio of 8:1 over its predecessor, ADRI. CIB is often based upon SPOT panchromatic data or reformatted ADRI data, but can be produced from other sources of imagery.

CADRG

CADRG data consist of digital copies of NIMA hardcopy graphics transformed into the ARC system. The data are scanned at a nominal collection interval of 150 microns. The resulting image is 8-bit pseudocolor, physically formatted as a compressed RPF.

CADRG is a successor to ADRG, Compressed Aeronautical Chart (CAC), and Compressed Raster Graphics (CRG). CADRG offers a compression ratio of 55:1 over ADRG, due to the coarser collection interval, VQ compression, and the encoding as 8-bit pseudocolor instead of 24-bit truecolor.

Topographic Data

Satellite data can also be used to create elevation, or topographic, data through the use of stereoscopic pairs, as discussed above under SPOT. Radar sensor data can also be a source of topographic information, as discussed in "Terrain Analysis" on page 645. However, most available elevation data are created with stereo photography and topographic maps. ERDAS IMAGINE software can load and use:

• USGS DEMs

• DTED

Arc/second Format

Most elevation data are in arc/second format. Arc/second refers to data in the Latitude/Longitude (Lat/Lon) coordinate system. The data are not rectangular, but follow the arc of the Earth’s latitudinal and longitudinal lines. Each degree of latitude and longitude is made up of 60 minutes. Each minute is made up of 60 seconds. Arc/second data are often referred to by the number of seconds in each pixel. For example, 3 arc/second data have pixels which are 3 × 3 seconds in size. The actual area represented by each pixel is a function of its latitude. Figure 33 illustrates a 1° × 1° area of the Earth.

A row of data file values from a DEM or DTED file is called a profile. The profiles of DEM and DTED files run south to north; that is, the first pixel of the record is the southernmost pixel.


Figure 33: Arc/second Format

In Figure 33, there are 1201 pixels in the first row and 1201 pixels in the last row, but the area represented by each pixel increases in size from the top of the file to the bottom of the file. The extracted section in the example above has been exaggerated to illustrate this point. Arc/second data used in conjunction with other image data, such as TM or SPOT, must be rectified or projected onto a planar coordinate system such as UTM.
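
A back-of-the-envelope sketch of this latitude dependence, assuming a spherical Earth of radius 6,371 km for simplicity:

    import math

    def pixel_size_m(arc_seconds, latitude_deg, radius_m=6_371_000.0):
        """Approximate ground size of an arc/second pixel at a latitude."""
        step_rad = math.radians(arc_seconds / 3600.0)
        north_south = radius_m * step_rad
        east_west = north_south * math.cos(math.radians(latitude_deg))
        return north_south, east_west

    for lat in (0, 30, 60):
        ns, ew = pixel_size_m(3, lat)
        print(f"3 arc-sec at {lat:2d} deg: {ns:4.1f} m N-S x {ew:4.1f} m E-W")
    # 3 arc-sec at  0 deg: 92.7 m N-S x 92.7 m E-W
    # 3 arc-sec at 30 deg: 92.7 m N-S x 80.3 m E-W
    # 3 arc-sec at 60 deg: 92.7 m N-S x 46.3 m E-W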

DEM

DEMs are digital elevation model data. DEM was originally a term reserved for elevation data provided by USGS, but it is now used to describe any digital elevation data. DEMs can be:

• purchased from USGS (for US areas only)

• created from stereopairs (derived from satellite data or aerial photographs)

See "Terrain Analysis" on page 645 for more information on using DEMs. See Ordering Raster Data on page 127 for information on ordering DEMs.

USGS DEMs

In 2006, the USGS began offering the National Elevation Dataset (NED). NED has been developed by merging the highest-resolution elevation data available across the United States into a seamless raster format. The dataset provides seamless coverage of the conterminous United States, Alaska, Hawaii, and the island territories.



Source: United States Geological Survey, 2006.

There are two types of historic DEMs that are most commonly available from USGS:

• 1:24,000 scale, also called 7.5-minute DEM, is usually referenced to the UTM coordinate system. It has a spatial resolution of 30 × 30 m.

• 1:250,000 scale is available only in Arc/second format.

Both types have a 16-bit range of elevation values, meaning each pixel can have a possible elevation of -32,768 to 32,767. DEM data are stored in ASCII format. The data file values in ASCII format are stored as ASCII characters rather than as zeros and ones like the data file values in binary data. DEM data files from USGS are initially oriented so that North is on the right side of the image instead of at the top. ERDAS IMAGINE rotates the data 90° counterclockwise as part of the Import process so that coordinates read with any ERDAS IMAGINE program are correct.

DTED

DTED data are produced by the National Imagery and Mapping Agency (NIMA) and are available only to US government agencies and their contractors. DTED data are distributed on 9-track tapes and on CD-ROM.

There are two types of DTED data available:

• DTED 1 — a 1° × 1° area of coverage

• DTED 2 — a 1° × 1° or less area of coverage

Both are in Arc/second format and are distributed in cells. A cell is a 1° × 1° area of coverage. Both have a 16-bit range of elevation values. Like DEMs, DTED data files are also oriented so that North is on the right side of the image instead of at the top. ERDAS IMAGINE rotates the data 90° counterclockwise as part of the Import process so that coordinates read with any ERDAS IMAGINE program are correct.
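
The reorientation both sections describe amounts to one 90° counterclockwise rotation. A minimal numpy sketch with an invented 3 × 3 block of elevations whose north edge is on the right:

    import numpy as np

    raw = np.array([[10, 11, 12],
                    [20, 21, 22],
                    [30, 31, 32]], dtype=np.int16)  # 16-bit elevation range

    north_up = np.rot90(raw)  # k=1 rotates 90 degrees counterclockwise
    print(north_up)
    # [[12 22 32]
    #  [11 21 31]
    #  [10 20 30]]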

Using Topographic Data

Topographic data have many uses in a GIS. For example, topographic data can be used in conjunction with other data to:

• calculate the shortest and most navigable path over a mountain range

• assess the visibility from various lookout points or along roads

• simulate travel through a landscape

• determine rates of snow melt


• orthocorrect satellite or airborne images

• create aspect and slope layers

• provide ancillary data for image classification

See "Terrain Analysis" on page 645 for more information about using topographic and elevation data.

GPS Data

Introduction

Global Positioning System (GPS) data have been in existence since the launch of the first satellite in the US Navigation System with Time and Ranging (NAVSTAR) system on February 22, 1978; a full constellation of satellites has been available since 1994. Initially, the system was available to US military personnel only, but from 1993 onwards the system started to be used (in a degraded mode) by the general public. There is also a Russian GPS system called GLONASS with similar capabilities.

The US NAVSTAR GPS consists of a constellation of 24 satellites orbiting the Earth, broadcasting data that allow a GPS receiver to calculate its spatial position.

Satellite Position

Positions are determined through the traditional ranging technique. The satellites orbit the Earth (at an altitude of 20,200 km) in such a manner that several are always visible at any location on the Earth's surface. A GPS receiver with line of sight to a GPS satellite can determine how long the signal broadcast by the satellite has taken to reach its location, and therefore can determine the distance to the satellite. Thus, if the GPS receiver can see three or more satellites and determine the distance to each, it can calculate its own position based on the known positions of the satellites (that is, the intersection of the spheres of distance from the satellite locations). Theoretically, only three satellites should be required to find the 3D position of the receiver, but various inaccuracies (largely based on the quality of the clock within the GPS receiver that is used to time the arrival of the signal) mean that at least four satellites are generally required to determine a three-dimensional (3D) x, y, z position.


The explanation above is an over-simplification of the technique used, but it shows the concept behind the use of the GPS system for determining position. The accuracy of that position is affected by several factors, including the number of satellites that can be seen by a receiver, but especially, for commercial users, by Selective Availability. Each satellite actually sends two signals at different frequencies: one for civilian use and one for military use. The signal used by commercial receivers has an error introduced to it, called Selective Availability, that degrades positional accuracy by up to 100 m. This is mainly intended to deny highly accurate GPS positioning to hostile users, but the errors can be ameliorated through various techniques, such as keeping the GPS receiver stationary, thereby allowing it to average out the errors, or through more advanced techniques discussed in the following sections.
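
A minimal numerical sketch of the ranging idea, with invented satellite positions and ranges: solve for the receiver position plus its clock bias (the fourth unknown that makes a fourth satellite necessary) by iterating a linearized least-squares fit:

    import numpy as np

    sats = np.array([[15600e3,  7540e3, 20140e3],
                     [18760e3,  2750e3, 18610e3],
                     [17610e3, 14630e3, 13480e3],
                     [19170e3,   610e3, 18390e3]])
    truth = np.array([1111e3, 2222e3, 3333e3])      # receiver position
    bias = 5e3                                      # clock bias, in meters
    pseudo = np.linalg.norm(sats - truth, axis=1) + bias

    x = np.zeros(4)                                 # [x, y, z, bias] guess
    for _ in range(10):                             # Gauss-Newton iterations
        ranges = np.linalg.norm(sats - x[:3], axis=1)
        residual = pseudo - (ranges + x[3])
        J = np.hstack([-(sats - x[:3]) / ranges[:, None], np.ones((4, 1))])
        x += np.linalg.lstsq(J, residual, rcond=None)[0]

    print(np.round(x))  # recovers roughly [1111000, 2222000, 3333000, 5000]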

Differential Correction

Differential Correction (or Differential GPS - DGPS) can be used to remove the majority of the effects of Selective Availability. The technique works by using a second GPS unit (or base station) that is stationary at a precisely known position. As this GPS unit knows where it actually is, it can compare this location with the position it calculates from GPS satellites at any particular time, and so calculate an error vector for that time (that is, the distance and direction by which the GPS reading is in error from the real position). A log of such error vectors can then be compared with GPS readings taken from the first, mobile unit (the field unit that is actually taking GPS location readings of features). Under the assumption that the field unit had line of sight to the same GPS satellites as the base station when acquiring its position, each field-read position (with an appropriate time stamp) can be compared to the error vector for that time and corrected using the inverse of the vector. This is generally performed using specialist differential correction software.

Real Time Differential GPS (RDGPS) takes this technique one step further by having the base station communicate the error vector via radio to the field unit in real time. The field unit can then automatically update its own location in real time. The main disadvantage of this technique is that the range over which a GPS base station can broadcast is generally limited, thereby restricting how far from the base station the mobile unit can operate. One of the biggest uses of this technique is for ocean navigation in coastal areas, where base stations have been set up along coastlines and around ports so that the GPS systems on board ships can get accurate real-time positional information to help in shallow-water navigation.
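
A simplified sketch of post-processed differential correction (not actual DGPS software): time-stamped error vectors computed at the base station are inverted and applied to the field unit's readings from the same instants:

    import numpy as np

    base_truth = np.array([100.0, 200.0])  # surveyed base station position
    # Positions computed by each receiver, keyed by timestamp:
    base_fix = {0: np.array([103.0, 196.0]), 1: np.array([98.0, 205.0])}
    field_fix = {0: np.array([553.0, 846.0]), 1: np.array([548.0, 855.0])}

    corrected = {}
    for t, reading in field_fix.items():
        error_vector = base_fix[t] - base_truth  # how far off the base was
        corrected[t] = reading - error_vector    # apply the inverse error

    print(corrected)  # both epochs correct to approximately (550, 850)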

Applications of GPS Data

GPS data find many uses in remote sensing and GIS applications, such as:


• Collection of ground truth data, including spectral properties of real-world conditions at known geographic positions, for use in image classification and validation. The user in the field identifies a homogeneous area of identifiable land cover or use on the ground and records its location using the GPS receiver. These locations can then be plotted over an image to either train a supervised classifier or to test the validity of a classification.

• Moving map applications take the concept of relating GPS positional information to your geographic data layers one step further by displaying the GPS position in real time over the data layers. Thus, you take a computer into the field and connect the GPS receiver to it, usually via the serial port. Remote sensing and GIS data layers are displayed on the computer and the positional signal from the GPS receiver is plotted on top of them.

• GPS receivers can be used for the collection of positional information for known point features on the ground. If these can be identified in an image, the positional data can be used as Ground Control Points (GCPs) for geocorrecting the imagery to a map projection system. If the imagery is of high resolution, this generally requires differential correction of the positional data.

• DGPS data can be used to directly capture GIS data and survey data for direct use in a GIS or CAD system. In this regard the GPS receiver can be compared to using a digitizing tablet to collect data, but instead of pointing and clicking at features on a paper document, you are pointing and clicking on the real features to capture the information.

• Precision agriculture uses GPS extensively in conjunction with Variable Rate Technology (VRT). VRT relies on the use of a VRT controller box connected to a GPS receiver and the pumping mechanism for a tank of fertilizer, pesticide, seed, water, and so forth. A digital polygon map (often derived from remotely sensed data) in the controller specifies a predefined amount to dispense for each polygonal region. As the tractor pulls the tank around the field, the GPS logs the position, which is compared to the map position in memory, and the correct amount is dispensed at that location. The aim of this process is to maximize yields without causing environmental damage.


• GPS is often used in conjunction with airborne surveys. The aircraft, as well as carrying a camera or scanner, has on board one or more GPS receivers tied to an inertial navigation system. As each frame is exposed, precise information is captured (or calculated in post-processing) on the x, y, z position and the roll, pitch, and yaw of the aircraft. Each image in the aerial survey block thus has initial exterior orientation parameters, which minimizes the need for ground control in a block triangulation process.

Figure 34 shows some additional uses for GPS coordinates.

Figure 34: Common Uses of GPS Data

[Figure 34 groups common uses of GPS, available worldwide, 24 hours per day: navigation on land, at sea, in the air, and in space; harbor and river navigation; navigation of recreational vehicles; high precision kinematic surveys on the ground; guidance of robots and other machines; cadastral surveying; geodetic network densification; high precision aircraft positioning; photogrammetry without ground control; monitoring deformation; hydrographic surveys; and active control stations.]

Source: Leick, 1990

Ordering Raster Data

Table 28 describes the different Landsat, SPOT, AVHRR, and DEM products that can be ordered. Information in this chart does not reflect all the products that are available, but only the most common types that can be imported into ERDAS IMAGINE.


Table 28: Common Raster Data Products

Data Type                      Ground Covered    Pixel Size                  # of Bands   Format
Landsat ETM+                   170 × 183 km      15 m (band 8); 30 m         8            GeoTIFF
                                                 (bands 1-5 and 7); 60 m
                                                 (bands 6H and 6L)
Landsat TM                     170 × 183 km      30 m or 28.5 m              7            NLAPS, GeoTIFF
Landsat MSS                    170 × 185 km      79 × 56 m                   4            NLAPS
SPOT                           60 × 60 km        10 m and 20 m               1-3          BIL
NOAA AVHRR (Local Area         2700 × 2700 km    1.1 km                      1-5          10-bit packed or unpacked
Coverage)
NOAA AVHRR (Global Area        4000 × 4000 km    4 km                        1-5          10-bit packed or unpacked
Coverage)
Historic USGS DEM (1:24,000)   7.5’ × 7.5’       30 m                        1            ASCII
National Elevation Dataset                       30 m
(NED), assembled by USGS
(Source: http://ned.usgs.gov)


Addresses to Contact

For more information about these and related products, contact the following agencies:

• IKONOS, GeoEye-1, OrbView-2 data:
  GeoEye
  21700 Atlantic Boulevard
  Dulles, VA 20166 USA
  Telephone: 703-480-7500
  Fax: 703-450-9570
  Internet: www.geoeye.com

• SPOT data:
  SPOT Image Corporation
  14595 Avion Parkway, Suite 500
  Chantilly, VA 20151 USA
  Telephone: 703-715-3100
  Fax: 703-715-3120
  Internet: www.spot.com

• NOAA AVHRR data:
  NOAA/National Environmental Satellite, Data, and Information Service (NESDIS)
  NOAA Central Library
  1315 East-West Highway
  SSMC3, 2nd Floor
  Silver Spring, MD 20910 USA
  Internet: www.nesdis.noaa.gov



• AVHRR Dundee Format:
  NERC Satellite Receiving Station, Space Technology Centre
  University of Dundee
  Dundee, Scotland, UK DD1 4HN
  Telephone: +44 1382 38 4409
  Fax: +44 1382 202 575
  Internet: www.sat.dundee.ac.uk

• Cartographic data, including topographic maps, aerial photos, publications, satellite images, DEMs, planimetric data, and related information from federal, state, and private agencies:
  National Mapping Division
  U.S. Geological Survey, National Center
  12201 Sunrise Valley Drive
  Reston, VA 20192 USA
  Telephone: 703-648-4000
  Internet: www.usgs.gov/pubprod/index.html

• Landsat data:
  U.S. Geological Survey
  National Center for Earth Resource Observation & Science (EROS)
  47914 252nd Street
  Sioux Falls, SD 57198 USA
  Telephone: 800-252-4547
  Telephone: 605-594-6151
  Internet: http://edc.usgs.gov

• ADRG/CADRG/ADRI data (available only to defense contractors):
  NGA (National Geospatial-Intelligence Agency)
  Defense Supply Center Richmond
  Mapping Customer Operations (DSCR-FAN)
  8000 Jefferson Davis Highway
  Richmond, VA 23297-5339 USA
  Telephone: 804-279-6500
  Telephone: 800-826-0342
  Internet: www.dscr.dla.mil/rmf/

• ERS-1, ERS-2, IKONOS, Landsat, QuickBird, Envisat radar data:
  MDA Geospatial Services International
  13800 Commerce Parkway
  Richmond, British Columbia
  Canada V6V 2J3
  Telephone: 604-244-0400
  Telephone: 888-780-6444
  Internet: www.gs.mdacorporation.com

• RADARSAT data:
  MDA Geospatial Services International
  13800 Commerce Parkway
  Richmond, British Columbia
  Canada V6V 2J3
  Telephone: 604-244-0400
  Telephone: 888-780-6444
  Internet: www.gs.mdacorporation.com

• ALOS and JERS-1 (Fuyo 1) radar data:
  Japan Aerospace Exploration Agency (JAXA)
  Earth Observation Center
  1401 Numaneoue, Ohashi
  Hatoyama-machi, Hiki-gun
  Saitama, Japan 350-0393
  Telephone: +81-49-298-1200
  Internet: www.jaxa.jp

• SIR-A, B, C radar data:
  U.S. Geological Survey, National Center
  12201 Sunrise Valley Drive
  Reston, VA 20192 USA
  Telephone: 703-648-4000
  Internet: www.usgs.gov/pubprod/index.html

• Almaz radar data:
  NPO Mashinostroenia
  Scientific Engineering Center “Almaz”
  33 Gagarin Street
  Reutov, 143952, Russia
  Telephone: 7/095-538-3018
  Fax: 7/095-302-2001
  E-mail: [email protected]

• U.S. Government RADARSAT sales:
  NOAA Satellite and Information Service
  National Environmental Satellite, Data, and Information Service (NESDIS)
  NOAA Central Library
  1315 East-West Highway
  SSMC3, 2nd Floor
  Silver Spring, MD 20910 USA
  Internet: http://www.class.ncdc.noaa.gov/release/data_available/sar/index.htm


Raster Data from Other Software Vendors

ERDAS IMAGINE also enables you to import data created by other software vendors. This way, if another type of digital data system is currently in use, or if data are received from another system, they can easily be converted to the ERDAS IMAGINE file format for use in ERDAS IMAGINE.

Data from other vendors may come in that specific vendor’s format, or in a standard format which can be used by several vendors. The Import and/or Direct Read function handles these raster data types from other software systems:

• ERDAS Ver. 7.X

• GRID and GRID Stacks

• JFIF (JPEG)

• JPEG2000

• MrSID

• SDTS

• Sun Raster

• TIFF and GeoTIFF

Other data types might be imported using the Generic Binary import option.

Vector to Raster Conversion

Vector data can also be a source of raster data: vector layers can be converted to raster format.

Convert a vector layer to a raster layer, or vice versa, by using IMAGINE Vector.

ERDAS Ver. 7.X

The ERDAS Ver. 7.X series was the predecessor of ERDAS IMAGINE software. The two basic types of ERDAS Ver. 7.X data files are indicated by the file name extensions:

• .LAN—a multiband continuous image file (the name is derived from the Landsat satellite)


• .GIS—a single-band thematic data file in which pixels are divided into discrete categories (the name is derived from geographic information system)

.LAN and .GIS image files are stored in the same format. The image data are arranged in a BIL format and can be 4-bit, 8-bit, or 16-bit. The ERDAS Ver. 7.X file structure includes:

• a header record at the beginning of the file

• the data file values

• a statistics or trailer file

When you import a .GIS file, it becomes an image file with one thematic raster layer. When you import a .LAN file, each band becomes a continuous raster layer within the image file.

GRID and GRID Stacks

GRID is a raster geoprocessing program distributed by Environmental Systems Research Institute, Inc. (ESRI) in Redlands, California. GRID is designed to complement ArcInfo, a well-known vector GIS that is also distributed by ESRI. The name GRID is taken from the raster data format of presenting information in a grid of cells.

The data format for GRID is a compressed tiled raster data structure. Like ArcInfo Coverages, a GRID is stored as a set of files in a directory, including files that keep the attributes of the GRID.

Each GRID represents a single layer of continuous or thematic imagery, but it is also possible to combine GRID files into a multilayer image. A GRID Stack (.stk) file names multiple GRIDs to be treated as a multilayer image. Starting with ArcInfo version 7.0, ESRI introduced the STK format, referred to in ERDAS software as GRID Stack 7.x, which contains multiple GRIDs. The GRID Stack 7.x format keeps attribute tables for the entire stack in a separate directory, in a manner similar to that of GRIDs and Coverages.


JFIF (JPEG)

JPEG is a set of compression techniques established by the Joint Photographic Experts Group (JPEG). The most commonly used form of JPEG involves a Discrete Cosine Transformation (DCT) and thresholding, followed by Huffman encoding. Since the output image is not exactly the same as the input image, this form of JPEG is considered to be lossy. JPEG can compress monochrome imagery, but achieves compression ratios of 20:1 or higher with color (RGB) imagery by taking advantage of the fact that the data being compressed are a visible image. The integrity of the source image is preserved by focusing the compression on aspects of the image that are less noticeable to the human eye. JPEG cannot be used on thematic imagery, due to the change in pixel values.

There is also a lossless form of JPEG compression, which uses predictive coding followed by lossless entropy encoding, but it is not frequently used since it only yields an approximate compression ratio of 2:1. ERDAS IMAGINE only handles the lossy form of JPEG.

While JPEG compression is used by other file formats, including TIFF, the JPEG File Interchange Format (JFIF) is a standard file format used to store JPEG-compressed imagery.

Additional information on the JPEG standard can be found at http://www.jpeg.org/jpeg/index.html.

JPEG2000

JPEG2000 is a form of wavelet compression defined by the International Organization for Standardization (ISO). JPEG2000 provides both a lossy and a lossless encoding mode; the lossless mode attains only relatively low compression ratios but retains the full accuracy of the input data. The lossy modes can attain very high compression ratios, but may alter the data content. JPEG2000 is designed to retain the visual appearance of the input data as closely as possible, even with high compression ratio lossy processing.

Specify Output File Size

The concept of ECW and JPEG2000 format compression is that you compress to a specified quality level rather than to a specified file size. Choose a level of quality that suits your needs, and use the quality levels option to compress images within that quality range. The goal is visual similarity in quality amongst multiple files, not similar file size. With a file-size-based compressor, the resultant image quality of each compressed file is different (depending on file size, image features, and so forth), and you will notice visible quality differences amongst the output images. Visible quality differences would not be the goal for an end product such as air photos over a common area.


Currently there is no way of directly controlling the output file size when compressing images using the ECW or JPEG2000 file format and ERDAS products and SDKs. This is because the output size is affected not only by the compression algorithm used but also by the specific low-level character of the input data. Certain kinds of images are simply easier to compress than others.

Quality Levels

JPEG and JPEG2000 quality values range from 1 (lowest quality, highest compression) to 100 (highest quality, lowest compression). Values between 50 and 95 are normally used. Specifying a quality value of 100 creates a much larger image with only a slight increase in quality compared to a quality value of 95.
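
A small sketch of this size/quality trade-off, assuming the Pillow imaging library is available:

    from io import BytesIO

    from PIL import Image

    image = Image.radial_gradient("L").convert("RGB")  # synthetic test image
    for quality in (50, 75, 95, 100):
        buffer = BytesIO()
        image.save(buffer, format="JPEG", quality=quality)
        print(f"quality {quality:3d}: {buffer.tell():6d} bytes")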

Numerically Lossless Compression

When compressing to JPEG2000 format, which supports lossless compressed images, numerically lossless compression is specified by selecting a target compression ratio of 1:1. This does not correspond to the actual compression ratio achieved, which will generally be higher (between 2:1 and 2.5:1).

Additional information on the JPEG2000 standard can be found at http://www.jpeg.org/jpeg2000.

MrSID

Multiresolution Seamless Image Database (MrSID, pronounced Mister Sid) is a wavelet transform-based compression algorithm designed by LizardTech, Inc. in Seattle, Washington. The novel developments in MrSID include a memory-efficient implementation and the automatic inclusion of pyramid layers in every data set, both of which make MrSID well suited to efficient storage and retrieval of very large digital images.

The underlying wavelet-based compression methodology used in MrSID yields high compression ratios while satisfying stringent image quality requirements. The compression technique used in MrSID is lossy (that is, the compression-decompression process does not reproduce the source data pixel-for-pixel). Lossy compression is not appropriate for thematic imagery, but is essential for large continuous images, since it allows much higher compression ratios than lossless methods (for example, the Lempel-Ziv-Welch, LZW, algorithm used in the GIF and TIFF image formats). At standard compression ratios, MrSID encoded imagery is visually lossless. On typical remotely sensed imagery, lossless methods provide compression ratios of perhaps 2:1, whereas MrSID provides excellent image quality at compression ratios of 30:1 or more.


Additional information on the MrSID compression standard can be found at the LizardTech website at http://www.lizardtech.com.

SDTS

The Spatial Data Transfer Standard (SDTS) was developed by the USGS to promote and facilitate the transfer of georeferenced data and its associated metadata between dissimilar computer systems without loss of fidelity. To achieve these goals, SDTS uses a flexible, self-describing method of encoding data, which has enough structure to permit interoperability.

For metadata, SDTS requires a number of statements regarding data accuracy. In addition to the standard metadata, the producer may supply detailed attribute data correlated to any image feature.

SDTS Profiles

The SDTS standard is organized into profiles. Profiles identify a restricted subset of the standard needed to solve a certain problem domain. Two subsets of interest to ERDAS IMAGINE users are:

• Topological Vector Profile (TVP), which covers attributed vector data. This is imported via the SDTS (Vector) title.

• SDTS Raster Profile and Extensions (SRPE), which covers gridded raster data. This is imported as SDTS Raster.

For more information on SDTS, consult the SDTS web site at http://mcmcweb.er.usgs.gov/sdts.

SUN Raster

A SUN Raster file is an image captured from a monitor display. In addition to GIS, SUN Raster files can be used in desktop publishing applications or any application where a screen capture would be useful.

There are two basic ways to create a SUN Raster file on a SUN workstation:

• use the OpenWindows Snapshot application

• use the UNIX screendump command

Both methods read the contents of a frame buffer and write the display data to a user-specified file. Depending on the display hardware and options chosen, screendump can create any of the file types listed in Table 29.


The data are stored in BIP format.

Table 29: File Types Created by Screendump

File Type                           Available Compression
1-bit black and white               None, RLE (run-length encoded)
8-bit color paletted (256 colors)   None, RLE
24-bit RGB true color               None, RLE
32-bit RGB true color               None, RLE

TIFF

TIFF was developed by Aldus Corp. (Seattle, Washington) in 1986 in conjunction with major scanner vendors who needed an easily portable file format for raster image data. Today, the TIFF format is a widely supported format used in video, fax transmission, medical imaging, satellite imaging, document storage and retrieval, and desktop publishing applications. In addition, the GeoTIFF extensions permit TIFF files to be geocoded.

The TIFF format’s main appeal is its flexibility. It handles black and white line images, as well as gray scale and color images, which can be easily transported between different operating systems and computers.

TIFF File Formats

TIFF’s great flexibility can also cause occasional problems in compatibility, because TIFF is really a family of file formats comprising a variety of elements. Table 30 shows key Baseline TIFF format elements and the values for those elements supported by ERDAS IMAGINE. Any TIFF file that contains an unsupported value for one of these elements may not be compatible with ERDAS IMAGINE.

Table 30: Common TIFF Format Elements

Byte Order           Intel (LSB/MSB)
                     Motorola (MSB/LSB)

Image Type           Black and white
                     Gray scale
                     Inverted gray scale
                     Color palette
                     RGB (3-band)

Configuration        BIP
                     BSQ

Bits Per Plane (a)   1 (b), 2 (b), 4, 8, 16 (c), 32 (c), 64 (c)

Compression (d)      None
                     CCITT G3 (B&W only)
                     CCITT G4 (B&W only)
                     Packbits
                     LZW
                     LZW with horizontal differencing

a All bands must contain the same number of bits (that is, 4, 4, 4 or 8, 8, 8). Multiband data with bit depths differing per band cannot be imported into ERDAS IMAGINE.

b Must be imported and exported as 4-bit data.

c Direct read/write only.

d Compression supported on import and direct read/write only.

Additional information on the TIFF specification can be found at Adobe Systems Inc. website http://partners.adobe.com/public/developer/tiff/index.html.

GeoTIFF

According to the GeoTIFF Format Specification, Revision 1.0, "The GeoTIFF spec defines a set of TIFF tags provided to describe all 'Cartographic' information associated with TIFF imagery that originates from satellite imaging systems, scanned aerial photography, scanned maps, digital elevation models, or as a result of geographic analysis" (Ritter and Ruth, 1995).

The GeoTIFF format separates cartographic information into two parts: georeferencing and geocoding.



Georeferencing

Georeferencing is the process of linking the raster space of an image to a model space (that is, a map system). Raster space defines how the coordinate system grid lines are placed relative to the centers of the pixels of the image. In ERDAS IMAGINE, the grid lines of the coordinate system always intersect at the center of a pixel. GeoTIFF allows the raster space to be defined either as having grid lines intersecting at the centers of the pixels (PixelIsPoint) or as having grid lines intersecting at the upper left corner of the pixels (PixelIsArea). ERDAS IMAGINE converts the georeferencing values for PixelIsArea images so that they conform to its raster space definition.

GeoTIFF allows georeferencing via a scale and an offset, a full affine transformation, or a set of tie points. ERDAS IMAGINE currently ignores GeoTIFF georeferencing in the form of multiple tie points.
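
The simplest of these options, scale and offset, maps a raster-space pixel to model-space coordinates as sketched below (values are illustrative only):

    def pixel_to_map(row, col, origin_x, origin_y, scale_x, scale_y):
        """Apply a scale-and-offset georeferencing transform."""
        map_x = origin_x + col * scale_x
        map_y = origin_y - row * scale_y  # rows grow downward, y grows upward
        return map_x, map_y

    # A 30 m pixel grid anchored at map coordinates (500000, 4200000):
    print(pixel_to_map(0, 0, 500000.0, 4200000.0, 30.0, 30.0))
    # (500000.0, 4200000.0)
    print(pixel_to_map(100, 200, 500000.0, 4200000.0, 30.0, 30.0))
    # (506000.0, 4197000.0)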

Geocoding

Geocoding is the process of linking coordinates in model space to the Earth’s surface. Geocoding allows for the specification of projection, datum, ellipsoid, and so forth. ERDAS IMAGINE interprets the GeoTIFF geocoding to determine the latitude and longitude of the map coordinates for GeoTIFF images. This interpretation also allows the GeoTIFF image to be reprojected.

In GeoTIFF, the units of the map coordinates are obtained from the geocoding, not from the georeferencing. In addition, GeoTIFF defines a set of standard projected coordinate systems. The use of a standard projected coordinate system in GeoTIFF constrains the units that can be used with that system. Therefore, if the units used with a projection in ERDAS IMAGINE are not equal to the implied units of an equivalent GeoTIFF geocoding, ERDAS IMAGINE transforms the georeferencing to conform to the implied units so that the standard projected coordinate system code can be used. The alternative (preserving the georeferencing as is and producing a nonstandard projected coordinate system) is regarded as less interoperable.

Additional information on the GeoTIFF specification can be found at http://www.remotesensing.org/geotiff/spec/geotiffhome.html.

Vector Data from Other Software Vendors

It is possible to directly import several common vector formats into ERDAS IMAGINE. These files become vector layers when imported. These data can then be used for analyses and, in most cases, exported back to their original format (if desired).


Although data can be converted from one type to another by importing a file into ERDAS IMAGINE and then exporting the ERDAS IMAGINE file into another format, the import and export routines were designed to work together. For example, if you have information in AutoCAD that you would like to use in the GIS, you can import a Drawing Interchange File (DXF) into ERDAS IMAGINE, do the analysis, and then export the data back to DXF format.

In most cases, attribute data are also imported into ERDAS IMAGINE. Each of the following sections lists the types of attribute data that are imported.

Use Import/Export to import vector data from other software vendors into ERDAS IMAGINE vector layers. These routines are based on ArcInfo data conversion routines.

See "Vector Data" on page 41 for more information on ERDAS IMAGINE vector layers. See "Geographic Information Systems" on page 173 for more information about using vector data in a GIS.

ARCGEN

ARCGEN files are ASCII files created with the ArcInfo UNGENERATE command. The import ARCGEN program is used to import features to a new layer. Topology is not created or maintained; therefore, the coverage must be built or cleaned after it is imported into ERDAS IMAGINE.

ARCGEN files must be properly prepared before they are imported into ERDAS IMAGINE. If there is a syntax error in the data file, the import process may not work. If this happens, you must kill the process, correct the data file, and then try importing again.

See the ArcInfo documentation for more information about these files.

AutoCAD (DXF)

AutoCAD is a vector software package distributed by Autodesk, Inc. (Sausalito, California). AutoCAD is a computer-aided design program that enables the user to draw two- and three-dimensional models. This software is frequently used in architecture, engineering, urban planning, and many other applications. AutoCAD DXF is the standard interchange format used by most CAD systems. The AutoCAD program DXFOUT creates a DXF file that can be converted to an ERDAS IMAGINE vector layer. AutoCAD files can also be output to IGES format using the AutoCAD program IGESOUT.


See IGES on page 142 for more information about IGES files.

DXF files can be converted in ASCII or binary format. The binary format is an optional format for AutoCAD Releases 10 and 11. It is structured just like the ASCII format, only the data are in binary format.

DXF files are composed of a series of related layers. Each layer contains one or more drawing elements or entities. An entity is a drawing element that can be placed into an AutoCAD drawing with a single command. When converted to an ERDAS IMAGINE vector layer, each entity becomes a single feature. Table 31 describes how various DXF entities are converted to ERDAS IMAGINE.

The ERDAS IMAGINE import process also imports line and point attribute data (if they exist) and creates an INFO directory with the appropriate ACODE (arc attributes) and XCODE (point attributes) files. If an imported DXF file is exported back to DXF format, this information is also exported.

Refer to an AutoCAD manual for more information about the format of DXF files.

Table 31: Conversion of DXF Entities

DXF Entity      ERDAS IMAGINE Feature   Comments
Line, 3DLine    Line                    These entities become two-point lines. The initial Z value of 3D entities is stored.
Trace, Solid,   Line                    These entities become four- or five-point lines. The initial Z value of 3D entities is stored.
3DFace
Circle, Arc     Line                    These entities form lines. Circles are composed of 361 points, one vertex for each degree. The first and last points are at the same location.
Polyline        Line                    These entities can be grouped to form a single line having many vertices.
Point, Shape    Point                   These entities become point features in a layer.


DLG

DLGs are furnished by the U.S. Geological Survey and provide planimetric base map information, such as transportation, hydrography, contours, and public land survey boundaries. DLG files are available for the following USGS map series:

• 7.5- and 15-minute topographic quadrangles

• 1:100,000-scale quadrangles

• 1:2,000,000-scale national atlas maps

DLGs are topological files that contain nodes, lines, and areas (similar to the points, lines, and polygons in an ERDAS IMAGINE vector layer). DLGs also store attribute information in the form of major and minor code pairs. Code pairs are encoded in two integer fields, each containing six digits. The major code describes the class of the feature (road, stream, and so forth) and the minor code stores more specific information about the feature.

DLGs can be imported in standard format (144 bytes per record) and optional format (80 bytes per record). You can export to DLG-3 optional format. Most DLGs are in the Universal Transverse Mercator (UTM) map projection. However, the 1:2,000,000-scale series is in geographic coordinates.

The ERDAS IMAGINE import process also imports point, line, and polygon attribute data (if they exist) and creates an INFO directory with the appropriate ACODE, PCODE (polygon attributes), and XCODE files. If an imported DLG file is exported back to DLG format, this information is also exported.

To maintain the topology of a vector layer created from a DLG file, you must Build or Clean it. See "Geographic Information Systems" on page 173 for information on this process.

ETAK

ETAK’s MapBase is an ASCII digital street centerline map product available from ETAK, Inc. (Menlo Park, California). ETAK files are similar in content to the Dual Independent Map Encoding (DIME) format used by the U.S. Census Bureau. Each record represents a single linear feature with address and political, census, and ZIP code boundary information. ETAK has also included road class designations and, in some areas, major landmark features.

There are four possible types of ETAK features:

• DIME or D types—if the feature type is D, a line is created along with a corresponding ACODE (arc attribute) record. The coordinates are stored in Lat/Lon decimal degrees.


• Alternate address or A types—each record contains an alternate address record for a line. These records are written to the attribute file, and are useful for building address coverages.

• Shape features or S types—shape records are used to add vertices to the lines. The coordinates for these features are in Lat/Lon decimal degrees.

• Landmark or L types—if the feature type is L and you opt to output a landmark layer, then a point feature is created along with an associated PCODE record.

ERDAS IMAGINE vector data cannot be exported to ETAK format.

IGES

IGES files are often used to transfer CAD data between systems. IGES Version 3.0 format, published by the U.S. Department of Commerce, is in uncompressed ASCII format only. IGES files can be produced in AutoCAD using the IGESOUT command. The following IGES entities can be converted:

Table 32: Conversion of IGES Entities

IGES Entity                              ERDAS IMAGINE Feature
IGES Entity 100 (Circular Arc Entities)  Lines
IGES Entity 106 (Copious Data Entities)  Lines
IGES Entity 110 (Line Entities)          Lines
IGES Entity 116 (Point Entities)         Points

The ERDAS IMAGINE import process also imports line and point attribute data (if they exist) and creates an INFO directory with the appropriate ACODE and XCODE files. If an imported IGES file is exported back to IGES format, this information is also exported.

TIGER

TIGER files are line network products of the U.S. Census Bureau. The Census Bureau is using the TIGER system to create and maintain a digital cartographic database that covers the United States, Puerto Rico, Guam, the Virgin Islands, American Samoa, and the Trust Territories of the Pacific.


TIGER/Line is the line network product of the TIGER system. The cartographic base is taken from Geographic Base File/Dual Independent Map Encoding (GBF/DIME), where available, and from the USGS 1:100,000-scale national map series, SPOT imagery, and a variety of other sources in all other areas, in order to have continuous coverage for the entire United States. In addition to line segments, TIGER files contain census geographic codes and, in metropolitan areas, address ranges for the left and right sides of each segment. TIGER files are available in ASCII format on both CD-ROM and tape media. All released versions after April 1989 are supported.

There is a great deal of attribute information provided with TIGER/Line files. Line and point attribute information can be converted into ERDAS IMAGINE format. The ERDAS IMAGINE import process creates an INFO directory with the appropriate ACODE and XCODE files. If an imported TIGER file is exported back to TIGER format, this information is also exported.

TIGER attributes include the following:

• Version numbers—TIGER/Line file version number.

• Permanent record numbers—each line segment is assigned a permanent record number that is maintained throughout all versions of TIGER/Line files.

• Source codes—each line and landmark point feature is assigned a code to specify the original source.

• Census feature class codes—line segments representing physical features are coded based on the USGS classification codes in DLG-3 files.

• Street attributes—includes street address information for selected urban areas.

• Legal and statistical area attributes—legal areas include states, counties, townships, towns, incorporated cities, Indian reservations, and national parks. Statistical areas are areas used during the census-taking, where legal areas are not adequate for reporting statistics.

• Political boundaries—the election precincts or voting districts may contain a variety of areas, including wards, legislative districts, and election districts.

• Landmarks—landmark area and point features include schools, military installations, airports, hospitals, mountain peaks, campgrounds, rivers, and lakes.


TIGER files for major metropolitan areas outside of the United States (for example, Puerto Rico, Guam) do not have address ranges.

Disk Space Requirements

TIGER/Line files are partitioned into counties ranging in size from less than a megabyte to almost 120 megabytes. The average size is approximately 10 megabytes. To determine the amount of disk space required to convert a set of TIGER/Line files, use this rule: the size of the converted layers is approximately the same as the size of the files used in the conversion. The amount of additional scratch space needed depends on the largest file and whether it needs to be sorted; the amount usually required is about double the size of the file being sorted.
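
The rule of thumb translates directly into a quick estimate (a sketch; the helper name is hypothetical):

    def tiger_disk_estimate_mb(file_sizes_mb, needs_sort=True):
        """Estimate disk space (MB) needed to convert TIGER/Line files."""
        converted = sum(file_sizes_mb)          # about the same as the input
        scratch = 2 * max(file_sizes_mb) if needs_sort else 0
        return converted + scratch

    print(tiger_disk_estimate_mb([8, 12, 45]))  # 65 MB + 90 MB = 155 MB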

The information presented in this section, Vector Data from Other Software Vendors on page 138, was obtained from the Data Conversion and the 6.0 ARC Command References manuals, both published by ESRI, Inc., 1992.


Image Display

Introduction

This section defines some important terms that are relevant to image display. Most of the terminology and definitions used in this chapter are based on the X Window System (Massachusetts Institute of Technology) terminology. This may differ from other systems, such as Microsoft Windows NT.

A seat is a combination of an X-server and a host workstation.

• A host workstation consists of a CPU, keyboard, mouse, and a display.

• A display may consist of multiple screens. These screens work together, making it possible to move the mouse from one screen to the next.

• The display hardware contains the memory that is used to produce the image. This hardware determines which types of displays are available (for example, true color or pseudo color) and the pixel depth (for example, 8-bit or 24-bit).

Figure 35: Example of One Seat with One Display and Two Screens

Display Memory Size

The size of the display memory varies for different displays. It is expressed in terms of:

• display resolution, which is expressed as the horizontal and vertical dimensions of memory—the number of pixels that can be viewed on the display screen. Some typical display resolutions are 1152 × 900, 1280 × 1024, and 1024 × 768. Typical PC resolutions are 800 × 600, 1024 × 768, 1280 × 1024, and 1680 × 1050; and

• the number of bits for each pixel or pixel depth, as explained below.



Bits for Image Plane

A bit is a binary digit, meaning a number that can have two possible values—0 and 1, or off and on. A set of bits, however, can have many more values, depending upon the number of bits used. The number of values that can be expressed by a set of bits is 2 to the power of the number of bits used. For example, the number of values that can be expressed by 3 bits is 8 (2³ = 8).

Displays are referred to in terms of a number of bits, such as 8-bit or 24-bit. These bits are used to determine the number of possible brightness values. For example, in a 24-bit display, 24 bits per pixel breaks down to eight bits for each of the three color guns per pixel. The number of possible values that can be expressed by eight bits is 2⁸, or 256. Therefore, on a 24-bit display, each color gun of a pixel can have any one of 256 possible brightness values, expressed by the range of values 0 to 255. The combination of the three color guns, each with 256 possible brightness values, yields 256³ (or 2²⁴) possible colors for each pixel on a 24-bit display. If the display being used is not 24-bit, the same arithmetic gives the number of possible brightness values and colors that can be displayed.
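
The arithmetic above, spelled out as a short check:

    bits_per_gun = 8
    values_per_gun = 2 ** bits_per_gun    # 256 brightness values per gun
    colors_24_bit = values_per_gun ** 3   # three color guns combined
    print(values_per_gun, colors_24_bit)  # 256 16777216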

Pixel

The term pixel is abbreviated from picture element. As an element, a pixel is the smallest part of a digital picture (image). Raster image data are divided by a grid, in which each cell of the grid is represented by a pixel. A pixel is also called a grid cell.

Pixel is a broad term that is used for both:

• the data file value(s) for one data unit in an image (file pixels), or

• one grid location on a display or printout (display pixels).

Usually, one pixel in a file corresponds to one pixel in a display or printout. However, an image can be magnified or reduced so that one file pixel no longer corresponds to one pixel in the display or printout. For example, if an image is displayed with a magnification factor of 2, then one file pixel takes up 4 (2 × 2) grid cells on the display screen. To display an image, a file pixel that consists of one or more numbers must be transformed into a display pixel with properties that can be seen, such as brightness and color. Whereas the file pixel has values that are relevant to data (such as wavelength of reflected light), the displayed pixel must have a particular color or gray level that represents these data file values.
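
A small numpy sketch of the magnification example above: at a factor of 2, each file pixel is repeated into a 2 × 2 block of display pixels:

    import numpy as np

    file_pixels = np.array([[1, 2],
                            [3, 4]])
    display_pixels = np.repeat(np.repeat(file_pixels, 2, axis=0), 2, axis=1)
    print(display_pixels)
    # [[1 1 2 2]
    #  [1 1 2 2]
    #  [3 3 4 4]
    #  [3 3 4 4]]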


Colors

Human perception of color comes from the relative amounts of red, green, and blue light that are measured by the cones (sensors) in the eye. Red, green, and blue light can be added together to produce a wide variety of colors—a wider variety than can be formed from the combinations of any three other colors. Red, green, and blue are therefore the additive primary colors. A nearly infinite number of shades can be produced when red, green, and blue light are combined. On a display, different colors (combinations of red, green, and blue) allow you to perceive changes across an image. Color displays that are currently available yield 2²⁴, or 16,777,216, colors. Each color gun has 256 possible brightness values (2⁸).

Color Guns

On a display, color guns direct electron beams that fall on red, green, and blue phosphors. The phosphors glow at certain frequencies to produce different colors. Color monitors are often called RGB monitors, referring to the primary colors. The red, green, and blue phosphors on the picture tube appear as tiny colored dots on the display screen. The human eye integrates these dots together, and combinations of red, green, and blue are perceived. Each pixel is represented by an equal number of red, green, and blue phosphors.

Brightness Values

Brightness values (or intensity values) are the quantities of each primary color to be output to each displayed pixel. When an image is displayed, brightness values are calculated for all three color guns, for every pixel. All of the colors that can be output to a display can be expressed with three brightness values—one for each color gun.

Colormap and Colorcells

A color on the screen is created by a combination of red, green, and blue values, where each of these components is represented as an 8-bit value. Therefore, 24 bits are needed to represent a color. Since many systems have only an 8-bit display, a colormap is used to translate the 8-bit value into a color. A colormap is an ordered set of colorcells, which is used to perform a function on a set of input values. To display or print an image, the colormap translates data file values in memory into brightness values for each color gun. Colormaps are not limited to 8-bit displays.
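The translation can be pictured as a pair of array lookups: a lookup table maps a data file value to a colorcell, and the colormap maps the colorcell to red, green, and blue brightness values. The values below are hypothetical, chosen to match the colorcell example discussed later in this section:

```python
import numpy as np

lookup = np.arange(256, dtype=np.uint8)   # identity lookup table to start
lookup[40] = 24                           # data file value 40 -> colorcell 24

colormap = np.zeros((256, 3), dtype=np.uint8)
colormap[24] = (0, 0, 255)                # colorcell 24 holds blue

data = np.array([[40, 40], [0, 40]], dtype=np.uint8)   # data file values
rgb = colormap[lookup[data]]              # brightness values for the color guns
```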


Colormap vs. Lookup Table

The colormap is a function of the display hardware, whereas a lookup table is a function of ERDAS IMAGINE. When a contrast adjustment is performed on an image in ERDAS IMAGINE, lookup tables are used. However, if the auto-update function is being used to view the adjustments in near real-time, then the colormap is being used to map the image through the lookup table. This process allows the colors on the screen to be updated in near real-time. This chapter explains how the colormap is used to display imagery.

Colorcells

There is a colorcell in the colormap for each data file value. The red, green, and blue values assigned to the colorcell control the brightness of the color guns for the displayed pixel (Nye 1990). The number of colorcells in a colormap is determined by the number of bits in the display (for example, 8-bit or 24-bit).

For example, if a pixel with a data file value of 40 were assigned a display value (colorcell value) of 24, then this pixel would use the brightness values for the 24th colorcell in the colormap. In the colormap below (Table 33), this pixel is displayed as blue.

The colormap is controlled by the X Windows system. There are 256 colorcells in a colormap with an 8-bit display. This means that 256 colors can be displayed simultaneously on the display. With a 24-bit display, there are 256 colorcells for each color: red, green, and blue. This offers 256 × 256 × 256, or 16,777,216 different colors. When an application requests a color, the server specifies which colorcell contains that color and returns the color. Colorcells can be read-only or read/write.

Table 33: Colorcell Example

Colorcell Index   Red   Green   Blue
1                 255   0       0
2                 0     170     90
3                 0     0       255
...
24                0     0       255


Read-only Colorcells

The color assigned to a read-only colorcell can be shared by other application windows, but it cannot be changed once it is set. Because the color of the corresponding colorcell cannot be changed, changing the color of a pixel on the display requires changing the pixel value and redisplaying the image. For this reason, it is not possible to use auto-update operations in ERDAS IMAGINE with read-only colorcells.

Read/Write Colorcells

The color assigned to a read/write colorcell can be changed, but it cannot be shared by other application windows. An application can easily change the color of displayed pixels by changing the color for the colorcell that corresponds to the pixel value. This allows applications to use auto-update operations. However, because these colorcells cannot be shared by other application windows, all of the colorcells in the colormap can quickly be used up.

Changeable Colormaps

Some colormaps can have both read-only and read/write colorcells. This type of colormap allows applications to use whichever type of colorcell best suits their needs.

Display Types

The possible range of different colors is determined by the display type. ERDAS IMAGINE supports the following types of displays:

• 8-bit PseudoColor

• 15-bit HiColor (for Windows NT)

• 24-bit DirectColor

• 24-bit TrueColor

The above display types are explained in more detail below.

A display may offer more than one visual type and pixel depth. See the ERDAS IMAGINE Configuration Guide for more information on specific display hardware.

32-bit Displays

A 32-bit display is a combination of an 8-bit PseudoColor and a 24-bit DirectColor or TrueColor display. Whether it is DirectColor or TrueColor depends on the display hardware.


8-bit PseudoColor

An 8-bit PseudoColor display has a colormap with 256 colorcells. Each cell has a red, green, and blue brightness value, giving 256 combinations of red, green, and blue. The data file value for the pixel is transformed into a colorcell value. The brightness values for the colorcell that is specified by this colorcell value are used to define the color to be displayed.

Figure 36: Transforming Data File Values to a Colorcell Value

In Figure 36, the data file values for a pixel of three continuous raster layers (bands) are transformed to a colorcell value. Since the colorcell value is 4, the pixel is displayed with the brightness values of the fourth colorcell (blue).

This display grants a small number of colors to ERDAS IMAGINE. It works well with thematic raster layers containing fewer than 200 colors and with gray scale continuous raster layers. For image files with three continuous raster layers (bands), the colors are severely limited because, under ideal conditions, 256 colors are available on an 8-bit display, while 8-bit, 3-band image files can contain over 16,000,000 different colors.

Auto Update

An 8-bit PseudoColor display has read-only and read/write colorcells, allowing ERDAS IMAGINE to perform near real-time color modifications using Auto Update and Auto Apply options.

24-bit DirectColor

A 24-bit DirectColor display enables you to view up to three bands of data at one time, creating displayed pixels that represent the relationships between the bands by their colors. Since this is a 24-bit display, it offers up to 256 shades of red, 256 shades of green, and 256 shades of blue, which is approximately 16 million different colors (256³). The data file values for each band are transformed into colorcell values. The colorcell that is specified by these values is used to define the color to be displayed.


Figure 37: Transforming Data File Values to a Colorcell Value

In Figure 37, data file values for a pixel of three continuous raster layers (bands) are transformed to separate colorcell values for each band. Since the colorcell value is 1 for the red band, 2 for the green band, and 6 for the blue band, the RGB brightness values are 0, 90, 200. This displays the pixel as a blue-green color. This type of display grants a very large number of colors to ERDAS IMAGINE and it works well with all types of data.
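A minimal sketch of the DirectColor idea, using three hypothetical per-band colormaps and the colorcell values from Figure 37 (an illustration, not ERDAS code):

```python
import numpy as np

red_map   = np.zeros(256, dtype=np.uint8); red_map[1]   = 0    # colorcell 1
green_map = np.zeros(256, dtype=np.uint8); green_map[2] = 90   # colorcell 2
blue_map  = np.zeros(256, dtype=np.uint8); blue_map[6]  = 200  # colorcell 6

r_cell, g_cell, b_cell = 1, 2, 6          # colorcell values for one pixel
pixel = (int(red_map[r_cell]), int(green_map[g_cell]), int(blue_map[b_cell]))
print(pixel)                              # (0, 90, 200): a blue-green pixel
```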

Auto Update

A 24-bit DirectColor display has read-only and read/write colorcells, allowing ERDAS IMAGINE to perform real-time color modifications using the Auto Update and Auto Apply options.

24-bit TrueColor

A 24-bit TrueColor display enables you to view up to three continuous raster layers (bands) of data at one time, creating displayed pixels that represent the relationships between the bands by their colors. The data file values for the pixels are transformed into screen values and the colors are based on these values. Therefore, the color for the pixel is calculated without querying the server and the colormap. The colormap for a 24-bit TrueColor display is not available for ERDAS IMAGINE applications. Once a color is assigned to a screen value, it cannot be changed, but the color can be shared by other applications.


The screen values are used as the brightness values for the red, green, and blue color guns. Since this is a 24-bit display, it offers 256 shades of red, 256 shades of green, and 256 shades of blue, which is approximately 16 million different colors (256³).

Figure 38: Transforming Data File Values to Screen Values

In Figure 38, data file values for a pixel of three continuous raster layers (bands) are transformed to separate screen values for each band. Since the screen value is 0 for the red band, 90 for the green band, and 200 for the blue band, the RGB brightness values are 0, 90, and 200. This displays the pixel as a blue-green color.

Auto Update

The 24-bit TrueColor display does not use the colormap in ERDAS IMAGINE, and thus does not provide ERDAS IMAGINE with any real-time color changing capability. Each time a color is changed, the screen values must be calculated and the image must be redrawn.

Color Quality

The 24-bit TrueColor visual provides the best color quality possible with standard equipment. There is no color degradation under any circumstances with this display.

PC Displays

ERDAS IMAGINE for Microsoft Windows NT supports the following visual types and pixel depths:

• 8-bit PseudoColor

• 15-bit HiColor

• 24-bit TrueColor


8-bit PseudoColor

An 8-bit PseudoColor display for the PC uses the same type of colormap as the X Windows 8-bit PseudoColor display, except that each colorcell has a range of 0 to 63 on most video display adapters, instead of 0 to 255. Therefore, each colorcell has a red, green, and blue brightness value, each with 64 possible levels. The colormap, however, is the same as the X Windows 8-bit PseudoColor display. It has 256 colorcells, allowing 256 different colors to be displayed simultaneously.

15-bit HiColor

A 15-bit HiColor display for the PC assigns colors the same way as the X Windows 24-bit TrueColor display, except that it offers 32 shades of red, 32 shades of green, and 32 shades of blue, for a total of 32,768 possible color combinations. Some video display adapters allocate 6 bits to the green color gun, allowing 65,536 (64K) colors; these adapters use a 16-bit color scheme.

24-bit TrueColor

A 24-bit TrueColor display for the PC assigns colors the same way as the X Windows 24-bit TrueColor display.

Displaying Raster Layers

Image files (.img) are raster files in the ERDAS IMAGINE format. There are two types of raster layers:

• continuous

• thematic

Thematic raster layers require a different display process than continuous raster layers. This section explains how each raster layer type is displayed.

Continuous Raster Layers

An image file (.img) can contain several continuous raster layers; therefore, each pixel can have multiple data file values. When displaying an image file with continuous raster layers, it is possible to assign which layers (bands) are to be displayed with each of the three color guns. The data file values in each layer are input to the assigned color gun. The most useful color assignments are those that allow for an easy interpretation of the displayed image. For example:

• A natural-color image approximates the colors that would appear to a human observer of the scene.

• A color-infrared image shows the scene as it would appear on color-infrared film, which is familiar to many analysts.


Band assignments are often expressed in R,G,B order. For example, the assignment 4, 2, 1 means that band 4 is assigned to red, band 2 to green, and band 1 to blue. Below are some widely used band-to-color gun assignments (Faust, 1989):

• Landsat TM—natural color: 3, 2, 1. This is natural color because band 3 is red and is assigned to the red color gun, band 2 is green and is assigned to the green color gun, and band 1 is blue and is assigned to the blue color gun.

• Landsat TM—color-infrared: 4, 3, 2. This is infrared because band 4 = infrared.

• SPOT Multispectral—color-infrared: 3, 2, 1. This is infrared because band 3 = infrared.

Contrast Table

When an image is displayed, ERDAS IMAGINE automatically creates a contrast table for continuous raster layers. The red, green, and blue brightness values for each band are stored in this table. Since the data file values in continuous raster layers are quantitative and related, the brightness values in the colormap are also quantitative and related. The screen pixels represent the relationships between the values of the file pixels by their colors. For example, a screen pixel that is bright red has a high brightness value in the red color gun, and a high data file value in the layer assigned to red, relative to other data file values in that layer.

The brightness values often differ from the data file values, but they usually remain in the same order of lowest to highest. Some meaningful relationships between the values are usually maintained.

Contrast Stretch

Different displays have different ranges of possible brightness values. The range of most displays is 0 to 255 for each color gun. Since the data file values in a continuous raster layer often represent raw data (such as elevation or an amount of reflected light), the range of data file values is often not the same as the range of brightness values of the display. Therefore, a contrast stretch is usually performed, which stretches the range of the values to fit the range of the display.

For example, Figure 39 shows a layer that has data file values from 30 to 40. When these values are used as brightness values, the contrast of the displayed image is poor. A contrast stretch simply stretches the range between the lower and higher data file values, so that the contrast of the displayed image is higher—that is, lower data file values are displayed with the lowest brightness values, and higher data file values are displayed with the highest brightness values.


The colormap stretches the range of colorcell values from 30 to 40, to the range 0 to 255. Because the output values are incremented at regular intervals, this stretch is a linear contrast stretch. (The numbers in Figure 39 are approximations and do not show an exact linear relationship.)
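Because this is a simple linear rescaling, it can be sketched in a few lines. The function below is illustrative (not the Viewer's code); truncating its output reproduces the Figure 39 values:

```python
import numpy as np

def linear_stretch(values, in_min, in_max, out_min=0, out_max=255):
    """Map [in_min, in_max] linearly onto [out_min, out_max]."""
    scale = (out_max - out_min) / (in_max - in_min)
    return np.clip((values - in_min) * scale + out_min, out_min, out_max)

data = np.arange(30, 41)                       # data file values 30..40
print(linear_stretch(data, 30, 40).astype(int))
# [  0  25  51  76 102 127 153 178 204 229 255]
```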

Figure 39: Contrast Stretch and Colorcell Values

See Enhancement for more information about contrast stretching. Contrast stretching is performed the same way for display purposes as it is for permanent image enhancement.

A contrast stretch based on a Percentage LUT, with a clip of 2.5% from the left end and 1.0% from the right end of the histogram, is applied to stretch pixel values of all image files from 0 to 255 before they are displayed in the Viewer, unless a saved contrast stretch exists (the file is not changed). This often improves the initial appearance of the data in the Viewer.
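A percentage stretch of this kind can be sketched with percentiles. The clip values below are the defaults described above; the function is illustrative, not the Viewer's implementation:

```python
import numpy as np

def percent_clip_stretch(band, left=2.5, right=1.0):
    """Clip left% from the low end and right% from the high end, then stretch."""
    lo = np.percentile(band, left)
    hi = np.percentile(band, 100.0 - right)
    return np.clip((band - lo) / (hi - lo) * 255.0, 0, 255).astype(np.uint8)
```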

Statistics Files

To perform a contrast stretch, certain statistics are necessary, such as the mean and the standard deviation of the data file values in each layer.

Use the Image Information utility to create and view statistics for a raster layer.

(Figure 39 detail: input colorcell values 30 through 40 map to output brightness values of 0, 25, 51, 76, 102, 127, 153, 178, 204, 229, and 255, respectively.)

Usually, not all of the data file values are used in the contrast stretch calculations. The minimum and maximum data file values of each band are often too extreme to produce good results. When the minimum and maximum are extreme in relation to the rest of the data, then the majority of data file values are not stretched across a very wide range, and the displayed image has low contrast.

Figure 40: Stretching by Min/Max vs. Standard Deviation

The mean and standard deviation of the data file values for each band are used to locate the majority of the data file values. The number of standard deviations above and below the mean can be entered, which determines the range of data used in the stretch. In Figure 40, a stretch of mean ± 2 standard deviations spreads most of the data across the 0 to 255 range; values stretched below 0 or above 255 are not displayed.
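A standard deviation stretch can be sketched the same way; here mean ± 2 standard deviations is mapped onto 0 to 255, and values outside that range saturate (an illustration, not the Contrast Tools implementation):

```python
import numpy as np

def stddev_stretch(band, n_std=2.0):
    """Stretch mean ± n_std standard deviations to the 0..255 display range."""
    mean, std = band.mean(), band.std()
    lo, hi = mean - n_std * std, mean + n_std * std
    return np.clip((band - lo) / (hi - lo) * 255.0, 0, 255).astype(np.uint8)
```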

See "Math Topics" on page 697 for more information on mean and standard deviation.

Use the Contrast Tools dialog, which is accessible from the Lookup Table Modification dialog, to enter the number of standard deviations to be used in the contrast stretch.

24-bit DirectColor and TrueColor Displays

Figure 41 illustrates the general process of displaying three continuous raster layers on a 24-bit DirectColor display. The process is similar on a TrueColor display except that the colormap is not used.


Figure 41: Continuous Raster Layer Display Process

8-bit PseudoColor Display

When displaying continuous raster layers on an 8-bit PseudoColor display, the data file values from the red, green, and blue bands are combined and transformed to a colorcell value in the colormap. This colorcell then provides the red, green, and blue brightness values. Since only 256 colors are available, a continuous raster layer looks different when displayed on an 8-bit display than on a 24-bit display that offers 16 million different colors. However, the Viewer performs dithering with the available colors in the colormap to let a smaller set of colors appear to be a larger set of colors.

See Dithering on page 166 for more information.

(Figure 41 detail: bands 3, 2, and 1 are assigned to the red, green, and blue color guns; histograms show the ranges of data file values to be displayed in each band; the colormap translates data file values in to brightness values out, from 0 to 255, for each color gun; the brightness values drive the color display.)

Thematic Raster Layers

A thematic raster layer generally contains pixels that have been classified, or put into distinct categories. Each data file value is a class value, which is simply a number for a particular category. A thematic raster layer is stored in an image (.img) file. Only one data file value—the class value—is stored for each pixel.

Since these class values are not necessarily related, the gradations that are possible in true color mode are not usually useful in pseudo color. The class system gives the thematic layer a discrete look, in which each class can have its own color.

Color Table

When a thematic raster layer is displayed, ERDAS IMAGINE automatically creates a color table. The red, green, and blue brightness values for each class are stored in this table.

RGB Colors

Individual color schemes can be created by combining red, green, and blue in different combinations, and assigning colors to the classes of a thematic layer. Colors can be expressed numerically, as the brightness values for each color gun. Brightness values of a display generally range from 0 to 255; however, ERDAS IMAGINE translates the values to a range of 0 to 1, with the maximum brightness value for the display device scaled to 1. The colors listed in Table 34 are based on the range that is used to assign brightness values in ERDAS IMAGINE.


Table 34 contains only a partial listing of commonly used colors. Over 16 million colors are possible on a 24-bit display.

NOTE: Black is the absence of all color (0,0,0) and white is created from the highest values of all three colors (1, 1, 1). To lighten a color, increase all three brightness values. To darken a color, decrease all three brightness values.

Use the Raster Attribute Editor to create your own color scheme.

24-bit DirectColor and TrueColor Displays

Figure 42 illustrates the general process of displaying thematic raster layers on a 24-bit DirectColor display. The process is similar on a TrueColor display except that the colormap is not used.

Table 34: Commonly Used RGB Colors

Color          Red    Green   Blue
Red            1      0       0
Red-Orange     1      .392    0
Orange         .608   .588    0
Yellow         1      1       0
Yellow-Green   .490   1       0
Green          0      1       0
Cyan           0      1       1
Blue           0      0       1
Blue-Violet    .392   0       .471
Violet         .588   0       .588
Black          0      0       0
White          1      1       1
Gray           .498   .498    .498
Brown          .373   .227    0


Figure 42: Thematic Raster Layer Display Process

Display a thematic raster layer from the Viewer.

8-bit PseudoColor Display

The colormap is a limited resource that is shared among all of the applications that are running concurrently. Because of the limited resources, ERDAS IMAGINE does not typically have access to the entire colormap.

Using the Viewer

The Viewer is a window for displaying raster, vector, and annotation layers. In the IMAGINE Ribbon Workspace, the Viewer types are:


• 2D View displays raster, vector, and annotation data in a 2-dimensional view window.

• 3D View renders 3-dimensional DEMs, raster overlays, vector layers, and annotation feature layers.

• Map View is designed to create maps and presentation graphics.

You can open as many Viewer windows as your window manager supports.

NOTE: The more Viewers that are opened simultaneously, the more RAM is required.

The Viewer not only makes digital images visible quickly, but it can also be used as a tool for image processing and raster GIS modeling. The uses of the Viewer are listed briefly in this section, and described in greater detail in other chapters of the ERDAS Field Guide.

Colormap

ERDAS IMAGINE does not use the entire colormap because other applications also need to use it, including the window manager, terminal windows, ArcView, or a clock. Therefore, there are some limitations on the number of colors that the Viewer can display simultaneously, and flickering may occur as well.

Color Flickering

If an application requests a new color that does not exist in the colormap, the server assigns that color to an empty colorcell. However, if there are no available colorcells and the application requires a private colorcell, then a private colormap is created for the application window. Since this is a private colormap, when the cursor is moved out of the window, the server uses the main colormap and the brightness values assigned to its colorcells. Therefore, the colors in the private colormap are not applied and the screen flickers. Once the cursor is moved back into the application window, the correct colors are applied for that window.

Resampling

When raster layers are displayed, the file pixels may be resampled for display on the screen. Resampling is used to calculate pixel values when one raster grid must be fitted to another. In this case, the raster grid defined by the file must be fit to the grid of screen pixels in the Viewer.


All Viewer operations are file-based. So, any time an image is resampled in the Viewer, the Viewer uses the file as its source. If the raster layer is magnified or reduced, the Viewer refits the file grid to the new screen grid.

The resampling methods available are:

• Nearest Neighbor—uses the value of the closest pixel to assign to the output pixel value.

• Bilinear Interpolation—uses the data file values of four pixels in a 2 × 2 window to calculate an output value with a bilinear function.

• Cubic Convolution—uses the data file values of 16 pixels in a 4 × 4 window to calculate an output value with a cubic function.

These are discussed in detail in "Rectification" on page 251.

The default resampling method is Nearest Neighbor.
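As an illustration, the sketch below resamples a band by the Nearest Neighbor method for an arbitrary zoom factor (Bilinear Interpolation and Cubic Convolution would instead weight the 2 × 2 or 4 × 4 neighborhood); it is a simplified stand-in for the Viewer's behavior, not ERDAS code:

```python
import numpy as np

def nearest_neighbor(band, zoom):
    """Each screen pixel takes the value of the closest file pixel."""
    rows = (np.arange(int(band.shape[0] * zoom)) / zoom).astype(int)
    cols = (np.arange(int(band.shape[1] * zoom)) / zoom).astype(int)
    return band[np.ix_(rows, cols)]

band = np.arange(16).reshape(4, 4)
print(nearest_neighbor(band, 2).shape)    # (8, 8): magnification
print(nearest_neighbor(band, 0.5).shape)  # (2, 2): reduction
```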

Preference Editor

The Preference Editor enables you to set parameters for the Viewer that affect the way the Viewer operates.

See the ERDAS IMAGINE On-Line Help for the Preference Editor for information on how to set preferences for the Viewer.

Pyramid Layers

Sometimes a large image file may take a long time to display in the Viewer or to be resampled by an application. The Compute Pyramid Layers option enables you to display large images faster and allows certain applications to rapidly access the resampled data. Pyramid layers are image layers that are copies of the original layer, successively reduced by powers of 2 and then resampled. Both IMAGINE native pyramid layers and LPS/Stereo Analyst pyramid layers are generated with a reduction factor of 2; however, each uses different filters and different layer options. The Compute Pyramid Layers option in IMAGINE has three options for continuous image data (raster images): 2 × 2, 3 × 3, or 4 × 4 kernel filtering methods. The 3 × 3 kernel size is recommended for LPS and Stereo Analyst photogrammetry functions.

LPS/Stereo Analyst pyramid layers and the 3 × 3 kernel are discussed in Image Pyramid on page 631.


A 2 × 2 kernel calculates the average of 4 pixels in a 2 × 2 pixel window of the higher resolution level and applies the average to one pixel for the current level of the pyramid. The filter can be represented as:

$\frac{1}{4}\begin{bmatrix}1 & 1\\ 1 & 1\end{bmatrix}$

The computation is simple, resulting in fast pyramid layer processing time. This kernel is suitable for visual observation of the image; however, it can result in a high degree of smoothing or sharpening, which is not necessarily desirable for digital photogrammetric processing.

A 4 × 4 kernel uses 16 neighboring pixels of the higher resolution level to arrive at one pixel for the current pyramid level. The processing time for this method is much longer than for the others, since the computation requires a greater number of pixel operations and is based on double-precision arithmetic. This method is not recommended for multi-resolution image matching (Wang and Yang, 1997).

If the raster layer is thematic, then it is resampled using the Nearest Neighbor method.
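The reduction itself is easy to sketch. The following Python fragment (an illustration of repeated 2 × 2 averaging, not the actual IMAGINE implementation) builds successively reduced levels until the next level would fall below 64 × 64 pixels:

```python
import numpy as np

def build_pyramid(band, min_size=64):
    """Repeated 2x2 averaging: each level is half the previous dimensions."""
    levels = []
    while min(band.shape) // 2 >= min_size:
        r, c = (band.shape[0] // 2) * 2, (band.shape[1] // 2) * 2
        band = band[:r, :c].reshape(r // 2, 2, c // 2, 2).mean(axis=(1, 3))
        levels.append(band)
    return levels

levels = build_pyramid(np.random.rand(1024, 1024))
print([lvl.shape for lvl in levels])  # (512, 512), (256, 256), (128, 128), (64, 64)
```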

See "Rectification" on page 251 for more information on Nearest Neighbor.

The number of pyramid layers created depends on the size of the original image. A larger image produces more pyramid layers. When the Compute Pyramid Layers option is selected, ERDAS IMAGINE automatically creates successively reduced layers until the final pyramid layer is as small as a block size of 64 × 64 pixels. The default block size is 512 × 512 pixels.

See Block Size in ERDAS IMAGINE .img Files On-Line Help for information on block size.

Pyramid layers are added as additional layers in the image file. However, these layers cannot be accessed for display. The file size is increased by approximately one-third when pyramid layers are created. The actual increase in file size can be determined by multiplying the layer size by this formula:

$\sum_{i=1}^{n} \left(\frac{1}{4}\right)^{i}$

Where:

n = number of pyramid layers

NOTE: This equation is applicable to all types of pyramid layers: internal and external.
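As a quick numeric check of this formula (assuming, as reconstructed above, that the sum runs over the n reduced layers), the increase converges to one-third as n grows:

```python
# Pyramid layers add sum over i = 1..n of (1/4)^i of the original layer size.
n = 6
increase = sum((1 / 4) ** i for i in range(1, n + 1))
print(round(increase, 5))   # 0.33325, approximately one-third
```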



Pyramid layers do not appear as layers which can be processed: they are for viewing purposes only. Therefore, they do not appear as layers in other parts of the ERDAS IMAGINE software (for example, the Arrange Layers dialog).

The Image Files (General) section of the Preference Editor contains a preference for the Initial Pyramid Layer Number. By default, the value is set to 1. This means that all reduced pyramid layers generated are retained.

Pyramid layers can be deleted through the Image Metadata dialog. However, when pyramid layers are deleted, they are not deleted from the image file; therefore, the image file size does not change, but ERDAS IMAGINE reuses this file space if necessary. Pyramid layers are deleted from viewing and resampling access only; that is, they can no longer be viewed or used in an application.



Figure 43: Pyramid Layers

For example, a file that is 4K × 4K pixels could take a long time to display when the image is fit to the Viewer. The Compute Pyramid Layers option creates additional layers successively reduced from 4K × 4K to 2K × 2K, 1K × 1K, 512 × 512, 256 × 256, 128 × 128, down to 64 × 64. ERDAS IMAGINE then selects the pyramid layer size most appropriate for display in the Viewer window when the image is displayed.

The Compute Pyramid Layers option is available from the ImageInfo dialog and the Image Command Tool.

For more information about the .img format, see "Raster Data" on page 1 and ERDAS IMAGINE .img Files On-Line Help.

External Pyramid Layers

Pyramid layers can be either internal or external. If you choose external pyramid layers, they are stored with the same name in the same directory as the image with which they are associated, but with the .rrd extension. For example, an image named tm_image1.img has external pyramid layers contained in a file named tm_image1.rrd.


The extension .rrd stands for reduced resolution data set. You can delete the external pyramid layers associated with an image by accessing the Image Information dialog. Unlike internal pyramid layers, external pyramid layers do not affect the size of the associated image. Some raster formats create internal pyramid layers by default and may not allow applications to control pyramid layers; in this case, your Pyramid Layer preference settings are ignored.

Dithering

A display is capable of showing only a limited number of colors simultaneously. For example, an 8-bit display has a colormap with 256 colorcells; therefore, a maximum of 256 colors can be displayed at the same time. If some colors are being used for auto-update color adjustment while other colors are being used for other imagery, the color quality degrades. Dithering lets a smaller set of colors appear to be a larger set of colors. If the desired display color is not available, a dithering algorithm mixes available colors to provide something that looks like the desired color. For a simple example, assume the system can display only two colors, black and white, and you want to display gray. This can be accomplished by alternating the display of black and white pixels.

Figure 44: Example of Dithering

In Figure 44, dithering is used between a black pixel and a white pixel to obtain a gray pixel.

The colors that the Viewer dithers between are similar to each other, and are dithered on the pixel level. Using similar colors and dithering on the pixel level makes the image appear smooth.

Dithering allows multiple images to be displayed in different Viewers without refreshing the currently displayed image(s) each time a new image is displayed.


Color Patches

When the Viewer performs dithering, it uses patches of 2 × 2 pixels. If the desired color has an exact match, then all of the values in the patch match it. If the desired color is halfway between two of the usable colors, the patch contains two pixels of each of the surrounding usable colors. If it is 3/4 of the way between two usable colors, the patch contains 3 pixels of the color it is closest to, and 1 pixel of the color that is second closest. Figure 45 shows what the color patches would look like if the usable colors were black and white and the desired color was gray.

Figure 45: Example of Color Patches

If the desired color is not an even multiple of 1/4 of the way between two allowable colors, it is rounded to the nearest 1/4. The Viewer separately dithers the red, green, and blue components of a desired color.
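The patch construction can be sketched directly. The function below builds a 2 × 2 patch for one color component, rounding to the nearest 1/4 as described (illustrative only):

```python
import numpy as np

def dither_patch(desired, low, high):
    """2x2 patch of low/high values approximating the desired value."""
    quarters = round(4 * (desired - low) / (high - low))   # 0..4
    patch = np.full(4, low)
    patch[:quarters] = high
    return patch.reshape(2, 2)

print(dither_patch(128, 0, 255))   # two 255s and two 0s: looks gray
```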

Color Artifacts

Since the Viewer requires 2 × 2 pixel patches to represent a color, and actual images typically have a different color for each pixel, artifacts may appear in an image that has been dithered. Usually, the difference in color resolution is insignificant, because adjacent pixels are normally similar to each other. Similarity between adjacent pixels usually smooths out artifacts that appear.

Viewing Layers

The Viewer displays layers as one of the following types of view layers:

• annotation

• vector

• pseudo color

• gray scale

• true color

Annotation View Layer

When an annotation layer (xxx.ovr) is displayed in the Viewer, it is displayed as an annotation view layer.


Vector View Layer

A vector layer is displayed in the Viewer as a vector view layer.

Pseudo Color View Layer

When a raster layer is displayed as a pseudo color layer in the Viewer, the colormap uses the RGB brightness values for the one layer in the RGB table. This is most appropriate for thematic layers. If the layer is a continuous raster layer, the layer would initially appear gray, since there are not any values in the RGB table.

Gray Scale View Layer

When a raster layer is displayed as a gray scale layer in the Viewer, the colormap uses the brightness values in the contrast table for one layer. This layer is then displayed in all three color guns, producing a gray scale image. A continuous raster layer may be displayed as a gray scale view layer.

True Color View Layer

Continuous raster layers should be displayed as true color layers in the Viewer. The colormap uses the RGB brightness values for three layers in the contrast table: one for each color gun to display the set of layers.

Viewing Multiple Layers

It is possible to view as many layers of all types (with the exception of vector layers, which have a limit of 10) at one time in a single Viewer.

To overlay multiple layers in one Viewer, they must all be referenced to the same map coordinate system. The layers are positioned geographically within the window, and resampled to the same scale as previously displayed layers. Therefore, raster layers in one Viewer can have different cell sizes. When multiple layers are magnified or reduced, raster layers are resampled from the file to fit to the new scale.

Display multiple layers from the Viewer. Be sure to turn off the Clear Display check box when you open subsequent layers.

Overlapping Layers

When layers overlap, the order in which the layers are opened is very important. The last layer that is opened always appears to be on top of the previously opened layers.

In a raster layer, it is possible to make values of zero transparent in the Viewer, meaning that they have no opacity. Thus, if a raster layer with zeros is displayed over other layers, the areas with zero values allow the underlying layers to show through.


Opacity is a measure of how opaque, or solid, a color is displayed in a raster layer. Opacity is a component of the color scheme of categorical data displayed in pseudo color.

• 100% opacity means that a color is completely opaque, and cannot be seen through.

• 50% opacity lets some color show, and lets some of the underlying layers show through. The effect is like looking at the underlying layers through a colored fog.

• 0% opacity allows underlying layers to show completely.

By manipulating opacity, you can compare two or more layers of raster data that are displayed in a Viewer. Opacity can be set at any value in the range of 0% to 100%. Use the Contents Panel (Ribbon Workspace) or Arrange Layers dialog (Classic) to restack layers in a Viewer so that they overlap in a different order, if needed.
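Opacity compositing of this kind is ordinary alpha blending. A minimal sketch for one pixel (illustrative, not the Viewer's code):

```python
import numpy as np

def blend(top_rgb, bottom_rgb, opacity):
    """Composite a top layer over a bottom layer; opacity ranges 0.0 to 1.0."""
    top = np.asarray(top_rgb, dtype=float)
    bottom = np.asarray(bottom_rgb, dtype=float)
    return (opacity * top + (1.0 - opacity) * bottom).astype(np.uint8)

print(blend((255, 0, 0), (0, 0, 255), 0.5))   # red fog over blue: [127 0 127]
```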

Non-Overlapping Layers

Multiple layers that are opened in the same Viewer do not have to overlap. Layers that cover distinct geographic areas can be opened in the same Viewer. The layers are automatically positioned in the Viewer window according to their map coordinates, and are positioned relative to one another geographically. The map coordinate systems for the layers must be the same.

Zoom and Roam

Zooming enlarges an image on the display. When an image is zoomed, it can be roamed (scrolled) so that the desired portion of the image appears on the display screen. Any image that does not fit entirely in the Viewer can be roamed and/or zoomed. Roaming and zooming have no effect on how the image is stored in the file.

The zoom ratio describes the size of the image on the screen in terms of the number of file pixels used to store the image. It is the ratio of the number of screen pixels in the X or Y dimension to the number that are used to display the corresponding file pixels. A zoom ratio greater than 1 is a magnification, which makes the image features appear larger in the Viewer. A zoom ratio less than 1 is a reduction, which makes the image features appear smaller in the Viewer. For example:

• A zoom ratio of 1 means that each file pixel is displayed with 1 screen pixel in the Viewer.

• A zoom ratio of 2 means that each file pixel is displayed with a block of 2 × 2 screen pixels; effectively, the image is displayed at 200%.

• A zoom ratio of 0.5 means that each block of 2 × 2 file pixels is displayed with 1 screen pixel; effectively, the image is displayed at 50%.


NOTE: ERDAS IMAGINE allows floating point zoom ratios, so that images can be zoomed at virtually any scale (that is, continuous fractional zoom). Resampling is necessary whenever an image is displayed with a new pixel grid. The resampling method used when an image is zoomed is the same one used when the image is displayed, as specified in the Open Raster Layer dialog. The default resampling method is Nearest Neighbor.

Zoom the data in the Viewer by scrolling the mouse wheel, or using zoom options in the Home tab or the Quick View right-button menu.

Geographic Information

To prepare to run many programs, it may be necessary to determine the data file coordinates, map coordinates, or data file values for a particular pixel or a group of pixels. By displaying the image in the Viewer and then selecting the pixel(s) of interest, important information about the pixel(s) can be viewed.

The Quick View right-button menu gives you options to view information about a specific pixel. Use the Raster Attribute Editor to access information about classes in a thematic layer.

See "Geographic Information Systems" on page 173 for information about attribute data.

Enhancing Continuous Raster Layers

Working with the brightness values in the colormap is useful for image enhancement. Often, a trial and error approach is needed to produce an image that has the right contrast and highlights the right features. By using the tools in the Viewer, it is possible to quickly view the effects of different enhancement techniques, undo enhancements that are not helpful, and then save the best results to disk.



Use the Raster options from the Viewer to enhance continuous raster layers.

See "Enhancement" on page 455 for more information on enhancing continuous raster layers.

Creating New Image Files

It is easy to create a new image file (.img) from the layer(s) displayed in the Viewer. The new image file contains three continuous raster layers (RGB), regardless of how many layers are currently displayed. The Image Information utility must be used to create statistics for the new image file before the file is enhanced.

Annotation layers can be converted to raster format and written to an image file. Or, vector data can be gridded into an image, overwriting the values of the pixels in the image plane, and incorporated into the same band as the image.

Use the Viewer to .img function to create a new image file from the currently displayed raster layers.


Geographic Information Systems

Introduction

The dawning of GIS can legitimately be traced back to the beginning of the human race. The earliest known map dates back to 2500 B.C.E., but there were probably maps before that time. Since then, humans have been continually improving the methods of conveying spatial information. The mid-eighteenth century brought the use of map overlays to show troop movements in the Revolutionary War. This could be considered an early GIS. The first British census in 1825 led to the science of demography, another application for GIS. During the 1800s, many different cartographers and scientists were all discovering the power of overlays to convey multiple levels of information about an area (Star and Estes, 1990).

Frederick Law Olmsted has long been considered the father of Landscape Architecture for his pioneering work in the early 20th century. Many of the methods Olmsted used in Landscape Architecture also involved the use of hand-drawn overlays. This type of analysis was beginning to be used for a much wider range of applications, such as change detection, urban planning, and resource management (Rado, 1992).

The first system to be called a GIS was the Canadian Geographic Information System, developed in 1962 by Roger Tomlinson of the Canada Land Inventory. Unlike earlier systems that were developed for a specific application, this system was designed to store digitized map data and land-based attributes in an easily accessible format for all of Canada. This system is still in operation today (Parent and Church, 1987).

In 1969, Ian McHarg's influential work, Design with Nature, was published. This work on land suitability/capability analysis (SCA), a system designed to analyze many data layers to produce a plan map, discussed the use of overlays of spatially referenced data layers for resource planning and management (Star and Estes, 1990).

The era of modern GIS really started in the 1970s, as analysts began to program computers to automate some of the manual processes. Software companies like ESRI and ERDAS developed software packages that could input, display, and manipulate geographic data to create new layers of information. The steady advances in features and power of the hardware over the last ten years—and the decrease in hardware costs—have made GIS technology accessible to a wide range of users. The growth rate of the GIS industry in the last several years has exceeded even the most optimistic projections.


Today, a GIS is a unique system designed to input, store, retrieve, manipulate, and analyze layers of geographic data to produce interpretable information. A GIS should also be able to create reports and maps (Marble, 1990). The GIS database may include computer images, hardcopy maps, statistical data, or any other data that is needed in a study. Although the term GIS is commonly used to describe software packages, a true GIS includes knowledgeable staff, a training program, budgets, marketing, hardware, data, and software (Walker and Miller, 1990). GIS technology can be used in almost any geography-related discipline, from Landscape Architecture to natural resource management to transportation routing.

The central purpose of a GIS is to turn geographic data into useful information—the answers to real-life questions—questions such as:

• How can we monitor the influence of global climatic changes on the Earth’s resources?

• How should political districts be redrawn in a growing metropolitan area?

• Where is the best place for a shopping center that is most convenient to shoppers and least harmful to the local ecology?

• What areas should be protected to ensure the survival of endangered species?

• How can communities be better prepared to face natural disasters, such as earthquakes, tornadoes, hurricanes, and floods?

Information vs. Data

Information, as opposed to data, is independently meaningful. It is relevant to a particular problem or question:

• “The land cover at coordinate N875250, E757261 has a data file value 8,” is data.

• “Land cover with a value of 8 is on slopes too steep for development,” is information.

You can input data into a GIS and output information. The information you wish to derive determines the type of data that must be input. For example, if you are looking for a suitable refuge for bald eagles, zip code data is probably not needed, while land cover data may be useful.


For this reason, the first step in any GIS project is usually an assessment of the scope and goals of the study. Once the project is defined, you can begin the process of building the database. Although software and data are commercially available, a custom database must be created for the particular project and study area. The database must be designed to meet the needs of the organization and objectives. ERDAS IMAGINE provides tools required to build and manipulate a GIS database. Successful GIS implementation typically includes two major steps:

• data input

• analysis

Data input involves collecting the necessary data layers into a GIS database. In the analysis phase, these data layers are combined and manipulated in order to create new layers and to extract meaningful information from them. This chapter discusses these steps in detail.

Data Input

Acquiring the appropriate data for a project involves creating a database of layers that encompasses the study area. A database created with ERDAS IMAGINE can consist of:

• continuous layers (satellite imagery, aerial photographs, elevation data, etc.)

• thematic layers (land use, vegetation, hydrology, soils, slope, etc.)

• vector layers (streets, utility and communication lines, parcels, etc.)

• statistics (frequency of an occurrence, population demographics, etc.)

• attribute data (characteristics of roads, land, imagery, etc.)

The ERDAS IMAGINE software package employs a hierarchical, object-oriented architecture that utilizes both raster imagery and topological vector data. Raster images are stored in image files, and vector layers are coverages or shapefiles based on the ESRI ArcInfo and ArcView data models. The seamless integration of these two types of data enables you to reap the benefits of both data formats in one system.


Figure 46: Data Input

Raster data might be more appropriate in the following applications:

• site selection

• natural resource management

• petroleum exploration

• mission planning

• change detection

On the other hand, vector data may be better suited for these applications:

• urban planning

• tax assessment and planning

• traffic engineering

• facilities management

(Figure 46 detail: a GIS analyst using ERDAS IMAGINE combines raster data input, such as Landsat TM, SPOT panchromatic, aerial photographs, soils data, and land cover, with raster attributes, and vector data input, such as roads, census data, ownership parcels, political boundaries, and landmarks, with vector attributes.)


The advantage of an integrated raster and vector system such as ERDAS IMAGINE is that one data structure does not have to be chosen over the other. Both data formats can be used and the functions of both types of systems can be accessed. Depending upon the project, only raster or vector data may be needed, but most applications benefit from using both.

Themes and Layers

A database usually consists of files with data of the same geographical area, with each file containing different types of information. For example, a database for the city recreation department might include files of all the parks in the area. These files might depict park boundaries, county and municipal boundaries, vegetation types, soil types, drainage basins, slope, roads, etc. Each of these files contains different information—each is a different theme. The concept of themes has evolved from early GISs, in which transparent overlays were created for each theme and combined (overlaid) in different ways to derive new information.

A single theme may require more than a simple raster or vector file to fully describe it. In addition to the image, there may be attribute data that describe the information, a color scheme, or meaningful annotation for the image. The full collection of data that describe a certain theme is called a layer.

Depending upon the goals of a project, it may be helpful to combine several themes into one layer. For example, if you want to propose a new park site, you might create one layer that shows roads, land cover, land ownership, slope, etc., and indicate through the use of colors and/or annotation which areas would be best for the new site. This one layer would then include many separate themes. Much of GIS analysis is concerned with combining individual themes into one or more layers that answer the questions driving the analysis. This chapter explores these analysis techniques.

Continuous Layers

Continuous raster layers are quantitative (measuring a characteristic) and have related, continuous values. Continuous raster layers can be multiband (e.g., Landsat TM) or single band (e.g., SPOT panchromatic).

Satellite images, aerial photographs, elevation data, scanned maps, and other continuous raster layers can be incorporated into a database and provide a wealth of information that is not available in thematic layers or vector layers. In fact, these layers often form the foundation of the database. Extremely accurate base maps can be created from rectified satellite images or aerial photographs. Then, all other layers that are added to the database can be registered to this base map.


Once used only for image processing, continuous data are now being incorporated into GIS databases and used in combination with thematic data to influence processing algorithms or as backdrop imagery on which to display the results of analyses. Current satellite data and aerial photographs are also effective in updating outdated vector data. The vectors can be overlaid on the raster backdrop and updated dynamically to reflect new or changed features, such as roads, utility lines, or land use. This chapter explores the many uses of continuous data in a GIS.

See "Raster Data" on page 1 for more information on continuous data.

Thematic Layers

Thematic data are typically represented as single layers of information stored as image files and containing discrete classes. Classes are simply categories of pixels which represent the same condition. An example of a thematic layer is a vegetation classification with discrete classes representing coniferous forest, deciduous forest, wetlands, agriculture, urban, etc.

A thematic layer is sometimes called a variable, because it represents one of many characteristics about the study area. Since thematic layers usually have only one band, they are usually displayed in pseudo color mode, where particular colors are often assigned to help visualize the information. For example, blues are usually used for water features, greens for healthy vegetation, etc.

See "Image Display" on page 145 for information on pseudo color display.

Class Numbering Systems

As opposed to the data file values of continuous raster layers, which are generally multiband and statistically related, the data file values of thematic raster layers can have a nominal, ordinal, interval, or ratio relationship (Star and Estes, 1990).

• Nominal classes represent categories with no particular order. Usually, these are characteristics that are not associated with quantities (e.g., soil type or political area).


• Ordinal classes are those that have a sequence, such as poor, good, better, and best. An ordinal class numbering system is often created from a nominal system, in which classes have been ranked by some criteria. In the case of the recreation department database used in the previous example, the final layer may rank the proposed park sites according to their overall suitability.

• Interval classes also have a natural sequence, but the distance between each value is meaningful as well. This numbering system might be used for temperature data.

• Ratio classes differ from interval classes only in that ratio classes have a natural zero point, such as rainfall amounts.

The variable being analyzed, and the way that it contributes to the final product, determines the class numbering system used in the thematic layers. Layers that have one numbering system can easily be recoded to a new system. This is discussed in detail under Recoding on page 190.
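Recoding of this kind is a simple table lookup. The sketch below collapses hypothetical nominal land cover classes into an ordinal suitability ranking (all class values are invented for illustration):

```python
import numpy as np

recode_table = np.zeros(256, dtype=np.uint8)
recode_table[[2, 5]] = 3    # forest classes -> rank 3 (best)
recode_table[[1, 4]] = 2    # agriculture    -> rank 2
recode_table[3] = 1         # urban          -> rank 1 (poor)

thematic = np.array([[2, 3], [5, 1]], dtype=np.uint8)   # class values
ranked = recode_table[thematic]                         # recoded layer
```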

Classification

Thematic layers can be generated from remotely sensed data (e.g., Landsat TM, SPOT) by using the ERDAS IMAGINE Image Interpreter, Classification, and Spatial Modeler tools. A frequent and popular application is the creation of land cover classification schemes through the use of both supervised (user-assisted) and unsupervised (automatic) pattern-recognition algorithms contained within ERDAS IMAGINE. The output is a single thematic layer that represents specific classes based on the approach selected.

See "Classification" on page 545 for more information.

Vector Data Converted to Raster Format

Vector layers can be converted to raster format if the raster format is more appropriate for an application. Typical vector layers, such as communication lines, streams, boundaries, and other linear features, can easily be converted to raster format within ERDAS IMAGINE for further analysis. Spatial Modeler automatically converts vector layers to raster for processing.

Use the Vector Utilities menu from the Vector icon in the ERDAS IMAGINE icon panel to convert vector layers to raster format, or use the vector layers directly in Spatial Modeler.


Other sources of raster data are discussed in "Raster and Vector Data Sources" on page 55.

Statistics

Both continuous and thematic layers include statistical information. Thematic layers contain the following information:

• a histogram of the data values, which is the total number of pixels in each class

• a list of class names that correspond to class values

• a list of class values

• a color table, stored as brightness values in red, green, and blue, which make up the colors of each class when the layer is displayed

For thematic data, these statistics are called attributes and may be accompanied by many other types of information, as described in Attributes on page 181.

Use the Image Information option on the Viewer’s tool bar to generate or update statistics for image files.

See "Raster Data" on page 1 for more information about the statistics stored with continuous layers.

Vector Layers

The vector layers used in ERDAS IMAGINE are based on the ArcInfo data model and consist of points, lines, and polygons. These layers are topologically complete, meaning that the spatial relationships between features are maintained. Vector layers can be used to represent transportation routes, utility corridors, communication lines, tax parcels, school zones, voting districts, landmarks, population density, etc. Vector layers can be analyzed independently or in combination with continuous and thematic raster layers.

In ERDAS IMAGINE, vector layers may also be shapefiles based on the ArcView data model.

Vector data can be acquired from several private and governmental agencies. Vector data can also be created in ERDAS IMAGINE by digitizing on the screen, using a digitizing tablet, or converting other data types to vector format.


See "Vector Data" on page 41 for more information on the characteristics of vector data.

Attributes

Text and numerical data that are associated with the classes of a thematic layer or the features in a vector layer are called attributes. This information can take the form of character strings, integer numbers, or floating point numbers. Attributes work much like the data that are handled by database management software. You may define fields, which are categories of information about each class. A record is the set of all attribute data for one class. Each record is like an index card, containing information about one class or feature in a file of many index cards, which contain similar information for the other classes or features.

Attribute information for raster layers is stored in the image file. Vector attribute information is stored in either an INFO file, dbf file, or SDE database. In both cases, there are fields that are automatically generated by the software, but more fields can be added as needed to fully describe the data. Both are viewed in CellArrays, which allow you to display and manipulate the information. However, raster and vector attributes are handled slightly differently, so a separate section on each follows.

Raster Attributes

In ERDAS IMAGINE, raster attributes for image files are accessible from the Table tab > Show Attributes option, or from the Raster Attribute Editor. Both consist of a CellArray, which is similar to a table or spreadsheet that not only displays the information, but also includes options for importing, exporting, copying, editing, and other operations. Figure 47 shows the attributes for a land cover classification layer.

Figure 47: Raster Attributes for lnlandc.img

Most thematic layers contain the following attribute fields:

• Class Name

• Class Value


• Color table (red, green, and blue values)

• Opacity percentage

• Histogram (number of pixels in the file that belong to the class)

As many additional attribute fields as needed can be defined for each class.

See "Classification" on page 545 for more information about the attribute information that is automatically generated when new thematic layers are created in the classification process.

Viewing Raster Attributes

Simply viewing attribute information can be a valuable analysis tool. Depending on the type of information associated with the layers of a database, processing may be further refined by comparing the attributes of several files. When both the raster layer and its associated attribute information are displayed, you can select features in one using the other. For example, to locate the class name associated with a particular area in a displayed image, simply click in that area with the mouse and the associated row is highlighted in the Raster Attribute Editor.

Attribute information is accessible in several places throughout ERDAS IMAGINE. In some cases it is read-only and in other cases it is a fully functioning editor, allowing the information to be modified.

Manipulating Raster Attributes

The applications for manipulating attributes are as varied as the applications for GIS. The attribute information in a database depends on the goals of the project. Some of the attribute editing capabilities in ERDAS IMAGINE include:

• import/export ASCII information to and from other software packages, such as spreadsheets and word processors

• cut, copy, and paste individual cells, rows, or columns to and from the same Raster Attribute Editor or among several Raster Attribute Editors

• generate reports that include all or a subset of the information in the Raster Attribute Editor

• use formulas to populate cells

• directly edit cells by entering new information


The Raster Attribute Editor in ERDAS IMAGINE also includes a color cell column, so that class (object) colors can be viewed or changed. In addition to direct manipulation, attributes can be changed by other programs. For example, some of the Image Interpreter functions calculate statistics that are automatically added to the Raster Attribute Editor. Models that read and/or modify attribute information can also be written.

See "Enhancement" on page 455 for more information on the Image Interpreter. There is more information on GIS modeling in Graphical Modeling on page 195.

Vector Attributes

Vector attributes are stored in the Vector Attributes CellArrays. You can simply view attributes or use them to:

• select features in a vector layer for further processing

• determine how vectors are symbolized

• label features

Figure 48 shows the attributes for a vector layer with polygon features.

Figure 48: Vector Attributes CellArray

See "Vector Data" on page 41 for more information about vector attributes.

Analysis

ERDAS IMAGINE Analysis Tools

In ERDAS IMAGINE, GIS analysis functions and algorithms are accessible through three main tools:

• script models created with SML


• graphical models created with Model Maker

• prepackaged functions in Image Interpreter

Spatial Modeler Language

SML is the basis for all ERDAS IMAGINE GIS functions. It is a modeling language that enables you to create script (text) models for a variety of applications. Models may be used to create custom algorithms that best suit your data and objectives.

Model Maker

Model Maker is essentially SML linked to a graphical interface. This enables you to create graphical models using a palette of easy-to-use tools. Graphical models can be run, edited, saved in libraries, or converted to script form and edited further, using SML.

NOTE: References to the Spatial Modeler in this chapter mean that the named procedure can be accomplished using both Model Maker and SML.

Image Interpreter

The Image Interpreter houses a set of common functions that were all created using either Model Maker or SML. They have been given a dialog interface to match the other processes in ERDAS IMAGINE. In most cases, these processes can be run from a single dialog. However, the actual models are also provided with the software to enable customized processing.

Many of the functions described in the following sections can be accomplished using any of these tools. Model Maker is easy to use and follows many of the same steps that would be performed when drawing a flow chart of an analysis. SML is intended for more advanced analyses, and has been designed using natural language commands and simple syntax rules. Some applications may require a combination of these tools.

Customizing ERDAS IMAGINE Tools

ERDAS Macro Language (EML) enables you to create and add new and/or customized dialogs. If new capabilities are needed, they can be created with the IMAGINE Developers’ Toolkit™. Using these tools, a GIS that is completely customized to a specific application and its preferences can be created.

See the ERDAS IMAGINE On-Line Help for more information about EML and the IMAGINE Developers’ Toolkit.


Analysis Procedures

Once the database (layers and attribute data) is assembled, the layers can be analyzed and new information extracted. Some information can be extracted simply by looking at the layers and visually comparing them to other layers. However, new information can be retrieved by combining and comparing layers using the following procedures:

• Proximity analysis—the process of categorizing and evaluating pixels based on their distances from other pixels in a specified class or classes.

• Contiguity analysis—enables you to identify regions of pixels in the same class and to filter out small regions.

• Neighborhood analysis—any image processing technique that takes surrounding pixels into consideration, such as convolution filtering and scanning. This is similar to the convolution filtering performed on continuous data. Several types of analyses can be performed, such as boundary, density, mean, sum, etc.

• Recoding—enables you to assign new class values to all or a subset of the classes in a layer.

• Overlaying—creates a new file with either the maximum or minimum value of the input layers.

• Indexing—adds the values of the input layers.

• Matrix analysis—outputs the coincidence values of the input layers.

• Graphical modeling—enables you to combine data layers in an unlimited number of ways. For example, an output layer created from modeling can represent the desired combination of themes from many input layers.

• Script modeling—offers all of the capabilities of graphical modeling with the ability to perform more complex functions, such as conditional looping.

Using an Area of Interest

Any of these functions can be performed on a single layer or multiple layers. You can also select a particular AOI that is defined in a separate file (AOI layer, thematic raster layer, or vector layer) or an AOI that is selected immediately preceding the operation by entering specific coordinates or by selecting the area in a Viewer.


Proximity Analysis

Many applications require some measurement of distance or proximity. For example, a real estate developer would be concerned with the distance between a potential site for a shopping center and an interchange to a major highway. Proximity analysis determines which pixels of a layer are located at specified distances from pixels in a certain class or classes. A new thematic layer (image file) is created, which is categorized by the distance of each pixel from specified classes of the input layer. This new file then becomes a new layer of the database and provides a buffer zone around the specified class(es). In further analysis, it may be beneficial to weight other factors, based on whether they fall inside or outside the buffer zone.

Figure 49 shows a layer containing lakes and streams and the resulting layer after a proximity analysis is run to create a buffer zone around all of the water features.

Use the Search (GIS Analysis) function in Image Interpreter or Spatial Modeler to perform a proximity analysis.

Figure 49: Proximity Analysis
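The mechanics of proximity analysis can be sketched outside of ERDAS IMAGINE with a distance transform. The following Python fragment is a conceptual illustration only, not ERDAS code; the class values and the 1.5-pixel buffer distance are arbitrary assumptions:

    import numpy as np
    from scipy import ndimage

    # Input thematic layer: 0 = background, 1 = water (lakes and streams)
    layer = np.array([[0, 0, 1, 0, 0],
                      [0, 1, 1, 0, 0],
                      [0, 0, 1, 0, 0],
                      [0, 0, 0, 0, 0],
                      [0, 0, 0, 0, 0]])

    # Distance (in pixels) from each pixel to the nearest water pixel
    dist = ndimage.distance_transform_edt(layer == 0)

    # New thematic layer: 1 = water, 2 = buffer zone, 3 = beyond the buffer
    buffer_layer = np.where(layer == 1, 1, np.where(dist <= 1.5, 2, 3))

The categorized distance layer plays the role of the buffer-zone layer described above; further analyses can then weight other factors by whether they fall in class 2 or class 3.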

Contiguity Analysis

Contiguity analysis is concerned with the ways in which pixels of a class are grouped together. Groups of contiguous pixels in the same class, called raster regions, or clumps, can be identified by their sizes and manipulated. One application of this tool would be an analysis for locating helicopter landing zones that require at least 250 contiguous pixels at 10-meter resolution.

Contiguity analysis can be used to: 1) divide a large class into separate raster regions, or 2) eliminate raster regions that are too small to be considered for an application.



Filtering Clumps

In cases where very small clumps are not useful, they can be filtered out according to their sizes. This is sometimes referred to as eliminating the salt and pepper effects, or sieving. In Figure 50, all of the small clumps in the original (clumped) layer are eliminated.

Figure 50: Contiguity Analysis

Use the Clump and Sieve (GIS Analysis) function in Image Interpreter or Spatial Modeler to perform contiguity analysis.
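A rough sketch of clumping and sieving in Python (conceptual only, not ERDAS code; the layer, the class value 1, and the 250-pixel threshold are assumptions for illustration):

    import numpy as np
    from scipy import ndimage

    # Hypothetical thematic layer in which 1 marks the class of interest
    layer = np.zeros((100, 100), dtype=int)
    layer[10:30, 10:30] = 1      # a 400-pixel clump
    layer[50:52, 50:52] = 1      # a 4-pixel clump of noise

    # Clump: label each contiguous raster region of the class
    clumps, n = ndimage.label(layer == 1)

    # Sieve: keep only clumps of at least 250 contiguous pixels
    sizes = ndimage.sum(layer == 1, clumps, index=range(1, n + 1))
    keep = np.nonzero(sizes >= 250)[0] + 1
    sieved = np.where(np.isin(clumps, keep), layer, 0)

After the sieve, the 4-pixel noise clump is removed while the 400-pixel region survives, mirroring the clumped-to-sieved transition in Figure 50.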

Neighborhood Analysis

With a process similar to the convolution filtering of continuous raster layers, thematic raster layers can also be filtered. The GIS filtering process is sometimes referred to as scanning, but is not to be confused with data capture via a digital camera. Neighborhood analysis is based on local or neighborhood characteristics of the data (Star and Estes, 1990). Every pixel is analyzed spatially, according to the pixels that surround it. The number and the location of the surrounding pixels are determined by a scanning window, which is defined by you. These operations are known as focal operations. The scanning window can be of any size in SML. In Model Maker, it has the following constraints:

• circular, with a maximum diameter of 512 pixels

• doughnut-shaped, with a maximum outer radius of 256 pixels

• rectangular, up to 512 × 512 pixels, with the option to mask out certain pixels



Use the Neighborhood (GIS Analysis) function in Image Interpreter or Spatial Modeler to perform neighborhood analysis. The scanning window used in Image Interpreter can be 3 × 3, 5 × 5, or 7 × 7. The scanning window in Model Maker is defined by you and can be up to 512 × 512. The scanning window in SML can be of any size.

Defining Scan Area

You may define the area of the file to be scanned. The scanning window moves only through this area as the analysis is performed. Define the area in one or all of the following ways:

• Specify a rectangular portion of the file to scan. The output layer contains only the specified area.

• Specify an area that is defined by an existing AOI layer, an annotation overlay, or a vector layer. The area(s) within the polygon are scanned, and the other areas remain the same. The output layer is the same size as the input layer or the selected rectangular portion.

• Specify a class or classes in another thematic layer to be used as a mask. The pixels in the scanned layer that correspond to the pixels of the selected class or classes in the mask layer are scanned, while the other pixels remain the same.

Figure 51: Using a Mask

In Figure 51, class 2 in the mask layer was selected for the mask. Only the corresponding (shaded) pixels in the target layer are scanned—the other values remain unchanged.

Neighborhood analysis creates a new thematic layer. There are several types of analysis that can be performed upon each window of pixels, as described below:



• Boundary—detects boundaries between classes. The output layer contains only boundary pixels. This is useful for creating boundary or edge lines from classes, such as a land/water interface.

• Density—outputs the number of pixels that have the same class value as the center (analyzed) pixel. This is also a measure of homogeneity (sameness), based upon the analyzed pixel. This is often useful in assessing vegetation crown closure.

• Diversity—outputs the number of class values that are present within the window. Diversity is also a measure of heterogeneity (difference).

• Majority—outputs the class value that represents the majority of the class values in the window. This option operates like a low-frequency filter to clean up a salt and pepper layer.

• Maximum—outputs the greatest class value within the window. This can be used to emphasize classes with the higher class values or to eliminate linear features or boundaries.

• Mean—averages the class values. If class values represent quantitative data, then this option can work like a convolution filter. This is mostly used on ordinal or interval data.

• Median—outputs the statistical median of the class values in the window. This option may be useful if class values represent quantitative data.

• Minimum—outputs the least or smallest class value within the window. This can be used to emphasize classes with the low class values.

• Minority—outputs the least common of the class values that are within the window. This option can be used to identify the least common classes. It can also be used to highlight disconnected linear features.

• Rank—outputs the number of pixels in the scan window whose value is less than the center pixel.

• Standard deviation—outputs the standard deviation of class values in the window.

• Sum—totals the class values. In a file where class values are ranked, totaling enables you to further rank pixels based on their proximity to high-ranking pixels.


Figure 52: Sum Option of Neighborhood Analysis (Image Interpreter)

In Figure 52, the Sum option of Neighborhood (Image Interpreter) is applied to a 3 × 3 window of pixels in the input layer. In the output layer, the analyzed pixel is given a value based on the total of all of the pixels in the window; for the window shown, 8 + 6 + 6 + 2 + 8 + 6 + 2 + 2 + 8 = 48.

The analyzed pixel is always the center pixel of the scanning window. In this example, only the pixel in the third column and third row of the file is summed.
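The same focal sum can be sketched in Python (a conceptual illustration, not ERDAS code) by convolving the layer with a 3 × 3 kernel of ones; the grid below reproduces the window from Figure 52:

    import numpy as np
    from scipy import ndimage

    layer = np.array([[8, 6, 6],
                      [2, 8, 6],
                      [2, 2, 8]])

    # Focal sum: each output pixel is the total of its 3 x 3 neighborhood
    kernel = np.ones((3, 3), dtype=int)
    focal_sum = ndimage.convolve(layer, kernel, mode='constant', cval=0)

    # Center pixel of the window shown in Figure 52:
    # 8 + 6 + 6 + 2 + 8 + 6 + 2 + 2 + 8 = 48
    assert focal_sum[1, 1] == 48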

Recoding

Class values can be recoded to new values. Recoding involves the assignment of new values to one or more classes. Recoding is used to:

• reduce the number of classes

• combine classes

• assign different class values to existing classes

When an ordinal, ratio, or interval class numbering system is used, recoding can be used to assign classes to appropriate values. Recoding is often performed to make later steps easier. For example, in creating a model that outputs good, better, and best areas, it may be beneficial to recode the input layers so all of the best classes have the highest class values.

In the following example (Table 35), a land cover layer is recoded so that the most environmentally sensitive areas (Riparian and Wetlands) have higher class values.


Table 35: Example of a Recoded Land Cover Layer

Value   New Value   Class Name
0       0           Background
1       4           Riparian
2       1           Grassland and Scrub
3       1           Chaparral
4       4           Wetlands
5       1           Emergent Vegetation
6       1           Water
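Conceptually, a recode is a simple lookup from old class values to new ones. A Python sketch of Table 35 (illustration only, not ERDAS code; the small land cover grid is made up):

    import numpy as np

    # New class value for each old value 0-6, taken from Table 35
    lookup = np.array([0, 4, 1, 1, 4, 1, 1])

    land_cover = np.array([[0, 1, 2],
                           [3, 4, 5],
                           [6, 1, 4]])

    # Indexing the lookup table by the layer applies the recode per pixel
    recoded = lookup[land_cover]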


Use the Recode (GIS Analysis) function in Image Interpreter or Spatial Modeler to recode layers.

Overlaying

Thematic data layers can be overlaid to create a composite layer. The output layer contains either the minimum or the maximum class values of the input layers. For example, if an area was in class 5 in one layer, and in class 3 in another, and the maximum class value dominated, then the same area would be coded to class 5 in the output layer, as shown in Figure 53.



Figure 53: Overlay

The application example in Figure 53 shows the result of combining two layers—slope and land use. The slope layer (1-5 = flat slopes, 6-9 = steep slopes) is first recoded to combine all steep slopes into one value (0 = flat slopes, 9 = steep slopes). When overlaid with the land use layer (1 = commercial, 2 = residential, 3 = forest, 4 = industrial, 5 = wetlands), the highest data file values (the steep slopes) dominate in the output layer, so the composite shows class 9 where slopes are steep and the land use classes elsewhere.
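A minimal Python sketch of a maximum-value overlay (conceptual only, not ERDAS code; the two small grids are made-up values following Figure 53's class scheme):

    import numpy as np

    recoded_slope = np.array([[0, 9],
                              [9, 0]])
    land_use      = np.array([[3, 5],
                              [2, 4]])

    # Maximum overlay: the larger input class value wins at each pixel,
    # so the steep-slope value 9 masks the land use classes beneath it
    composite = np.maximum(recoded_slope, land_use)
    # composite -> [[3, 9], [9, 4]]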

Use the Overlay (GIS Analysis) function in Image Interpreter or Spatial Modeler to overlay layers.

Indexing

Thematic layers can be indexed (added) to create a composite layer. The output layer contains the sums of the input layer values. For example, the intersection of class 3 in one layer and class 5 in another would result in class 8 in the output layer, as shown in Figure 54.



Figure 54: Indexing

The application example in Figure 54 shows the result of indexing. In this example, you want to develop a new subdivision, and the most likely sites are where there is the best combination (highest value) of good soils, good slope, and good access (in each input layer, 9 = good, 5 = fair, 1 = poor). Because good slope is a more critical factor to you than good soils or good access, a weighting factor is applied to the slope layer. A weighting factor has the effect of multiplying all input values by some constant. In this example, slope is given a weight of 2, while soils and access keep a weight of 1.
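A Python sketch of the weighted index (conceptual only, not ERDAS code; the grids are made-up values using the 9/5/1 rating scheme from Figure 54):

    import numpy as np

    soils  = np.array([[9, 5], [1, 9]])
    slope  = np.array([[9, 9], [5, 1]])
    access = np.array([[1, 9], [5, 9]])

    # Index: weighted sum of the inputs; slope is weighted x2
    index = soils * 1 + slope * 2 + access * 1
    # top-left pixel: 9*1 + 9*2 + 1*1 = 28

The highest output values then mark the pixels with the best weighted combination of the three criteria.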

Use the Index (GIS Analysis) function in the Image Interpreter or Spatial Modeler to index layers.

99

59

91

19

5

95

19

95

99

9

3624

1636

368

2836

16

1810

1018

182

1818

2

Soils9 = good5 = fair

Slope9 = good5 = fair

Access9 = good5 = fair1 = poor

1 = poor

1 = poor

Output values calculated

WeightingImportance×1×1×1

WeightingImportance×2×2×2

WeightingImportance×1×1×1

+

+

=

3 58

Basic Index Application Example


Matrix Analysis

Matrix analysis produces a thematic layer that contains a separate class for every coincidence of classes in two layers. The output is best described with a matrix diagram.

In this diagram, the classes of the two input layers represent the rows and columns of the matrix. The output classes are assigned according to the coincidence of any two input classes.

                                 input layer 2 data values (columns)
                                 0    1    2    3    4    5
   input layer 1           0     0    0    0    0    0    0
   data values (rows)      1     0    1    2    3    4    5
                           2     0    6    7    8    9    10
                           3     0    11   12   13   14   15

All combinations of 0 and any other class are coded to 0, because 0 is usually the background class, representing an area that is not being studied.

Unlike overlaying or indexing, the resulting class values of a matrix operation are unique for each coincidence of two input class values. In this example, the output class value at column 1, row 3 is 11, and the output class at column 3, row 1 is 3. If these files were indexed (summed) instead of matrixed, both combinations would be coded to class 4.
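A Python sketch of the coincidence coding in the diagram (conceptual only, not ERDAS code): with five nonzero classes in layer 2, each nonzero coincidence maps to (class1 - 1) × 5 + class2, and 0 is reserved for background:

    import numpy as np

    layer1 = np.array([[1, 3],
                       [2, 0]])
    layer2 = np.array([[3, 1],
                       [4, 2]])

    n_cols = 5   # number of nonzero classes in layer 2
    # Unique output class for every coincidence; 0 stays background
    out = np.where((layer1 == 0) | (layer2 == 0), 0,
                   (layer1 - 1) * n_cols + layer2)
    # out -> [[3, 11], [9, 0]], matching the matrix diagram above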

Use the Matrix (GIS Analysis) function in Image Interpreter or Spatial Modeler to matrix layers.

Modeling

Modeling is a powerful and flexible analysis tool. Modeling is the process of creating new layers from combining or operating upon existing layers. Modeling enables you to create a small set of layers—perhaps even a single layer—which, at a glance, contains many types of information about the study area.

For example, if you want to find the best areas for a bird sanctuary, taking into account vegetation, availability of water, climate, and distance from highly developed areas, you would create a thematic layer for each of these criteria. Then, each of these layers would be input to a model. The modeling process would create one thematic layer, showing only the best areas for the sanctuary.



The set of procedures that define the criteria is called a model. In ERDAS IMAGINE, models can be created graphically and resemble a flow chart of steps, or they can be created using a script language. Although these two types of models look different, they are essentially the same—input files are defined, functions and/or operators are specified, and outputs are defined. The model is run and a new output layer(s) is created. Models can utilize analysis functions that have been previously defined, or new functions can be created by you.

Use the Model Maker function in Spatial Modeler to create graphical models and SML to create script models.

Data Layers

In modeling, the concept of layers is especially important. Before computers were used for modeling, the most widely used approach was to overlay registered maps on paper or transparencies, with each map corresponding to a separate theme. Today, digital files replace these hardcopy layers and allow much more flexibility for recoloring, recoding, and reproducing geographical information (Steinitz et al, 1976). In a model, the corresponding pixels at the same coordinates in all input layers are addressed as if they were physically overlaid like hardcopy maps.

Graphical Modeling

Graphical modeling enables you to draw models using a palette of tools that defines inputs, functions, and outputs. This type of modeling is very similar to drawing flowcharts, in that you identify a logical flow of steps needed to perform the desired action. Through the extensive functions and operators available in the ERDAS IMAGINE graphical modeling program, you can analyze many layers of data in very few steps without creating intermediate files that occupy extra disk space. Modeling is performed using a graphical editor that eliminates the need to learn a programming language. Complex models can be developed easily and then quickly edited and re-run on another data set.

Use the Model Maker function in Spatial Modeler to create graphical models.

Image Processing and GIS

In ERDAS IMAGINE, the traditional GIS functions (e.g., neighborhood analysis, proximity analysis, recode, overlay, index, etc.) can be performed in models, as well as image processing functions. Both thematic and continuous layers can be input into models that accomplish many objectives at once.


For example, suppose there is a need to assess the environmental sensitivity of an area for development. An output layer can be created that ranks most to least sensitive regions based on several factors, such as slope, land cover, and floodplain. To visualize the location of these areas, the output thematic layer can be overlaid onto a high resolution, continuous raster layer (e.g., SPOT panchromatic) that has had a convolution filter applied. All of this can be accomplished in a single model (as shown in Figure 55).

Figure 55: Graphical Model for Sensitivity Analysis

See the ERDAS IMAGINE Tour Guides manual for step-by-step instructions on creating the environmental sensitivity model in Figure 55. Descriptions of all of the graphical models delivered with ERDAS IMAGINE are available in the On-Line Help.

Model Structure

A model created with Model Maker is essentially a flow chart that defines:


• the input image(s), vector(s), matrix(ces), table(s), and scalar(s) to be analyzed

• calculations, functions, or operations to be performed on the input data

• the output image(s), matrix(ces), table(s), and scalars to be created

The graphical models created in Model Maker all have the same basic structure: input, function, output. The number of inputs, functions, and outputs can vary, but the overall form remains constant. All components must be connected to one another before the model can be executed. The model on the left in Figure 56 is the most basic form. The model on the right is more complex, but it retains the same input/function/output flow.

Figure 56: Graphical Model Structure

Graphical models are stored in ASCII files with the .gmd extension. There are several sample graphical models delivered with ERDAS IMAGINE that can be used as is or edited for more customized processing.

See the On-Line Help for instructions on editing existing models.



Model Maker Functions

The functions available in Model Maker are divided into the following categories:

Table 36: Model Maker Functions

Category Description

Analysis Includes convolution filtering, histogram matching, contrast stretch, principal components, and more.

Arithmetic Perform basic arithmetic functions including addition, subtraction, multiplication, division, factorial, and modulus.

Bitwise Use bitwise and, or, exclusive or, and not.

Boolean Perform logical functions including and, or, and not.

Color Manipulate colors to and from RGB (red, green, blue) and IHS (intensity, hue, saturation).

Conditional Run logical tests using conditional statements and either...if...or...otherwise.

Data Generation Create raster layers from map coordinates, column numbers, or row numbers. Create a matrix or table from a list of scalars.

Descriptor Read attribute information and map a raster through an attribute column.

Distance Perform distance functions, including proximity analysis.

Exponential Use exponential operators, including natural and common logarithmic, power, and square root.

Focal (Scan) Perform neighborhood analysis functions, including boundary, density, diversity, majority, mean, minority, rank, standard deviation, sum, and others.

Focal Use Opts Constraints on which pixel values to include in calculations for the Focal (Scan) function.

Focal Apply Opts Constraints on which pixel values to apply the results of calculations for the Focal (Scan) function.

Global Analyze an entire layer and output one value, such as diversity, maximum, mean, minimum, standard deviation, sum, and more.

Matrix Multiply, divide, and transpose matrices, as well as convert a matrix to a table and vice versa.

Other Includes over 20 miscellaneous functions for data type conversion, various tests, and other utilities.

Relational Includes equality, inequality, greater than, less than, greater than or equal, less than or equal, and others.

Size Measure cell X and Y size, layer width and height, number of rows and columns, etc.

Stack Statistics Perform operations over a stack of layers including diversity, majority, max, mean, median, min, minority, standard deviation, and sum.

Statistical Includes density, diversity, majority, mean, rank, standard deviation, and more.

String Manipulate character strings.

Surface Calculate aspect and degree/percent slope and produce shaded relief.

Trigonometric Use common trigonometric functions, including sine/arcsine, cosine/arccosine, tangent/arctangent, hyperbolic arcsine, arccosine, cosine, sine, and tangent.

Zonal Perform zonal operations including summary, diversity, majority, max, mean, min, range, and standard deviation.


These functions are also available for script modeling.

See the ERDAS IMAGINE Tour Guides and the On-Line SML manual for complete instructions on using Model Maker, and more detailed information about the available functions and operators.

Objects

Within Model Maker, an object is an input to or output from a function. The five basic object types used in Model Maker are:

• raster

• vector

• matrix

• table

• scalar

Raster

A raster object is a single layer or multilayer array of pixel data. Rasters are typically used to specify and manipulate data from image files.

Vector

Vector data in either a vector coverage, shapefile, or annotation layer can be read directly into Model Maker, converted from vector to raster, then processed similarly to raster data. Model Maker cannot write to coverages, shapefiles, or annotation layers.



Matrix

A matrix object is a set of numbers arranged in a two-dimensional array. A matrix has a fixed number of rows and columns. Matrices may be used to store convolution kernels or the neighborhood definition used in neighborhood functions. They can also be used to store covariance matrices, eigenvector matrices, or matrices of linear combination coefficients.

Table

A table object is a series of numeric values, colors, or character strings. A table has one column and a fixed number of rows. Tables are typically used to store columns from the Raster Attribute Editor or a list of values that pertains to the individual layers of a set of layers. For example, a table with four rows could be used to store the maximum value from each layer of a four layer image file. A table may consist of up to 32,767 rows. Information in the table can be attributes, calculated (e.g., histograms), or defined by you.

Scalar

A scalar object is a single numeric value, color, or character string. Scalars are often used as weighting factors.

The graphics used in Model Maker to represent each of these objects are shown in Figure 57.

Figure 57: Modeling Objects

Data Types

The five object types described above may be any of the following data types:

• Binary—either 0 (false) or 1 (true)

• Integer—integer values from -2,147,483,648 to 2,147,483,647 (signed 32-bit integer)

• Float—floating point data (double precision)



• String—a character string (for table objects only)

Input and output data types do not have to be the same. Using SML, you can change the data type of input files before they are processed.

Output Parameters

Since it is possible to have several inputs in one model, you can optionally define the working window and the pixel cell size of the output data along with the output map projection.

Working Window

Raster layers of differing areas can be input into one model. However, the image area, or working window, must be specified in order to use it in the model calculations. Either of the following options can be selected (a conceptual sketch follows the list):

• Union—the model operates on the union of all input rasters. (This is the default.)

• Intersection—the model uses only the area of the rasters that is common to all input rasters.
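With map extents expressed as (xmin, ymin, xmax, ymax), the two working-window options reduce to the following (conceptual Python, not ERDAS code; the two extents are made-up values):

    # Extents of two input rasters as (xmin, ymin, xmax, ymax)
    a = (0, 0, 100, 80)
    b = (40, 20, 160, 120)

    # Union: the smallest window enclosing both inputs (the default)
    union = (min(a[0], b[0]), min(a[1], b[1]),
             max(a[2], b[2]), max(a[3], b[3]))

    # Intersection: only the area common to all inputs
    intersection = (max(a[0], b[0]), max(a[1], b[1]),
                    min(a[2], b[2]), min(a[3], b[3]))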

Pixel Cell Size

Input rasters may also be of differing resolution (pixel size), so you must select the output cell size as either:

• Minimum—the minimum cell size of the input layers is used (this is the default setting).

• Maximum—the maximum cell size of the input layers is used.

• Other—specify a new cell size.

Map Projection

The output map projection defaults to the projection of the first input, or it may be set to match any chosen input. The output projection may also be selected from a projection library.

Using Attributes in Models

With the criteria function in Model Maker, attribute data can be used to determine output values. The criteria function simplifies the process of creating a conditional statement. The criteria function can be used to build a table of conditions that must be satisfied to output a particular row value for an attribute (or cell value) associated with the selected raster.


The inputs to a criteria function are rasters or vectors. The columns of the criteria table represent either attributes associated with a raster layer or the layer itself, if the cell values are of direct interest. Criteria which must be met for each output column are entered in a cell in that column (e.g., >5). Multiple sets of criteria may be entered in multiple rows. The output raster contains the first row number of a set of criteria that were met for a raster cell.

Example

For example, consider the sample thematic layer, parks.img, that contains the following attribute information:

Class Name        Histogram   Acres    Path Condition   Turf Condition   Car Spaces
Grant Park        2456        403.45   Fair             Good             127
Piedmont Park     5167        547.88   Good             Fair             94
Candler Park      763         128.90   Excellent        Excellent        65
Springdale Park   548         46.33    None             Excellent        0

A simple model could create one output layer that shows only the parks in need of repairs. The following logic would therefore be coded into the model:

“If Turf Condition is not Good or Excellent, and if Path Condition is not Good or Excellent, then the output class value is 1. Otherwise, the output class value is 2.”

More than one input layer can also be used. For example, a model could be created, using the input layers parks.img and soils.img, that shows the soil types for parks with either fair or poor turf condition. Attributes can be used from every input file.

The following is a slightly more complex example: if you have a land cover file and you want to create a file of pine forests larger than 10 acres, the criteria function could be used to output values only for areas that satisfy the conditions of being both pine forest and larger than 10 acres. The output file would have two classes: pine forests larger than 10 acres and background. If you want the output file to show varying sizes of pine forest, you would simply add more conditions to the criteria table.

Comparisons of attributes can also be combined with mathematical and logical functions on the class values of the input file(s). With these capabilities, highly complex models can be created.
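The park-repair logic quoted above can be sketched in Python as a lookup through per-class attribute columns (conceptual only, not ERDAS code; the arrays mirror the parks.img table, indexed by class value, with index 0 unused as background):

    import numpy as np

    # Attribute columns indexed by class value (0 = background)
    turf = np.array(["", "Good", "Fair", "Excellent", "Excellent"])
    path = np.array(["", "Fair", "Good", "Excellent", "None"])

    parks = np.array([[1, 2],
                      [3, 4]])   # raster of class values

    good = ["Good", "Excellent"]
    needs_repair = ~np.isin(turf, good) & ~np.isin(path, good)

    # Output 1 where both turf and path conditions fail, else 2
    out = np.where(needs_repair[parks], 1, 2)

The per-class boolean array plays the role of a one-row criteria table: indexing it by the raster maps every pixel through its class attributes.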

See the ERDAS IMAGINE Tour Guides or the On-Line Help for specific instructions on using the criteria function.



Script Modeling

SML is a script language used internally by Model Maker to execute the operations specified in the graphical models that are created. SML can also be used to write script models directly. It includes all of the functions available in Model Maker, plus:

• conditional branching and looping

• the ability to use complex data types

Graphical models created with Model Maker can be output to a script file (text only) in SML. These scripts can then be edited with a text editor using SML syntax and rerun or saved in a library. Script models can also be written from scratch in the text editor. They are stored in ASCII .mdl files.

The Text Editor is available from the Tools menu located on the ERDAS IMAGINE menu bar and from the Model Librarian (Spatial Modeler).

In Figure 58, both the graphical and script models are shown for a tasseled cap transformation. Notice how even the annotation on the graphical model is included in the automatically generated script model. Generating script models from graphical models may aid in learning SML.


Figure 58: Graphical and Script Models For Tasseled Cap Transformation

Convert graphical models to scripts using Model Maker. Open existing script models from the Model Librarian (Spatial Modeler).

(Figure 58 shows the graphical model and the equivalent script model for the Tasseled Cap Transformation. The script model reads:)

    # TM Tasseled Cap Transformation
    # of Lake Lanier, Georgia
    #
    # declarations
    #
    INTEGER RASTER n1_tm_lanier FILE OLD NEAREST NEIGHBOR "/usr/imagine/examples/tm_lanier.img";
    FLOAT MATRIX n2_Custom_Matrix;
    FLOAT RASTER n4_lntassel FILE NEW ATHEMATIC FLOAT SINGLE "/usr/imagine/examples/lntassel.img";
    #
    # set cell size for the model
    #
    SET CELLSIZE MIN;
    #
    # set window for the model
    #
    SET WINDOW UNION;
    #
    # load matrix n2_Custom_Matrix
    #
    n2_Custom_Matrix = MATRIX(3, 7:
        0.331830, 0.331210, 0.551770, 0.425140, 0.480870, 0.000000, 0.252520,
        -0.247170, -0.162630, -0.406390, 0.854680, 0.054930, 0.000000, -0.117490,
        0.139290, 0.224900, 0.403590, 0.251780, -0.701330, 0.000000, -0.457320);
    #
    # function definitions
    #
    n4_lntassel = LINEARCOMB ( $n1_tm_lanier , $n2_Custom_Matrix ) ;
    QUIT;
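Outside of SML, the LINEARCOMB call above amounts to a per-pixel matrix multiply across bands. A rough numpy equivalent (a sketch only, not ERDAS code; the random raster stands in for tm_lanier.img):

    import numpy as np

    # 3 x 7 tasseled cap coefficients from the script above
    coeff = np.array([
        [ 0.331830,  0.331210,  0.551770,  0.425140,  0.480870, 0.000000,  0.252520],
        [-0.247170, -0.162630, -0.406390,  0.854680,  0.054930, 0.000000, -0.117490],
        [ 0.139290,  0.224900,  0.403590,  0.251780, -0.701330, 0.000000, -0.457320]])

    # Stand-in for the 7-band TM image (bands, rows, columns)
    tm = np.random.randint(0, 256, size=(7, 512, 512))

    # Linear combination: 3 output bands from 7 input bands
    tassel = np.tensordot(coeff, tm, axes=1)   # shape (3, 512, 512)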


Statements

A script model consists primarily of one or more statements. Each statement falls into one of the following categories:

• Declaration—defines objects to be manipulated within the model

• Assignment—assigns a value to an object

• Show and View—enables you to see and interpret results from the model

• Set—defines the scope of the model or establishes default values used by the Modeler

• Macro Definition—defines substitution text associated with a macro name

• Quit—ends execution of the model

SML also includes flow control structures so that you can utilize conditional branching and looping in the models and statement block structures, which cause a set of statements to be executed as a group.

Declaration Example

In the script model in Figure 58, the following lines form the declaration portion of the model:

INTEGER RASTER n1_tm_lanier FILE OLD NEAREST NEIGHBOR "/usr/imagine/examples/tm_lanier.img";

FLOAT MATRIX n2_Custom_Matrix;

FLOAT RASTER n4_lntassel FILE NEW ATHEMATIC FLOAT SINGLE "/usr/imagine/examples/lntassel.img";

Set Example

The following set statements are used:

SET CELLSIZE MIN;

SET WINDOW UNION;

Assignment Example

The following assignment statements are used:

n2_Custom_Matrix = MATRIX(3, 7:

0.331830, 0.331210, 0.551770, 0.425140, 0.480870, 0.000000, 0.252520,

-0.247170, -0.162630, -0.406390, 0.854680, 0.054930, 0.000000, -0.117490,

0.139290, 0.224900, 0.403590, 0.251780, -0.701330, 0.000000, -0.457320);

n4_lntassel = LINEARCOMB ( $n1_tm_lanier , $n2_Custom_Matrix ) ;


Data Types

In addition to the data types utilized by Graphical Modeling, script model objects can store data in the following data types:

• Complex—complex data (double precision)

• Color—three floating point numbers in the range of 0.0 to 1.0, representing intensity of red, green, and blue

Variables

Variables are objects in the Modeler that have been associated with names using Declaration Statements. The declaration statement defines the data type and object type of the variable. The declaration may also associate a raster variable with certain layers of an image file or a table variable with an attribute table. Assignment Statements are used to set or change the value of a variable.

For script model syntax rules, descriptions of all available functions and operators, and sample models, see the On-Line SML manual.

Vector Analysis

Most of the operations discussed in the previous pages of this chapter focus on raster data. However, in a complete GIS database, both raster and vector layers are present. One of the most common applications involving the combination of raster and vector data is the updating of vector layers using current raster imagery as a backdrop for vector editing. For example, if a vector database is more than one or two years old, then there are probably errors due to changes in the area (new roads, moved roads, new development, etc.). When displaying existing vector layers over a raster layer, you can dynamically update the vector layer by digitizing new or changed features on the screen.

Vector layers can also be used to indicate an AOI for further processing. Assume you want to run a site suitability model on only areas designated for commercial development in the zoning ordinances. By selecting these zones in a vector polygon layer, you could restrict the model to only those areas in the raster input files.

Vector layers can also be used as inputs to models. Updated or new attributes may also be written to vector layers in models.

Editing Vector Layers

Editable features are polygons (as lines), lines, label points, and nodes. Multiple features, with a mixture of any and all feature types, can be selected at once. Editing operations and commands can be performed on multiple or single selections. In addition to the basic editing operations (e.g., cut, paste, copy, delete), you can also perform the following operations on the line features in multiple or single selections:

• spline—smooths or generalizes all currently selected lines using a specified grain tolerance


• generalize—weeds vertices from selected lines using a specified tolerance

• split/unsplit—makes two lines from one by adding a node or joins two lines by removing a node

• densify—adds vertices to selected lines at a tolerance you specify

• reshape (for single lines only)—enables you to move the vertices of a line

Reshaping (adding, deleting, or moving a vertex or node) can be done on a single selected line. Table 37 details general editing operations and the feature types that support each of those operations.

The Undo utility may be applied to any edits. The software stores all edits in sequential order, so that continually pressing Undo reverses the editing.

For more information on vectors, see "Raster and Vector Data Sources" on page 55.

Constructing Topology

To create spatial relationships between features in a vector layer, it is necessary to construct topology, using either the Build or Clean option. After a vector layer is edited, the topology must be reconstructed to maintain the topological relationships between features. When topology is constructed, each feature is assigned an internal number. These numbers are then used to determine line connectivity and polygon contiguity. Once calculated, these values are recorded and stored in that layer’s associated attribute table.

You must also reconstruct the topology of vector layers imported into ERDAS IMAGINE.

Table 37: General Editing Operations and Supporting Feature Types

Feature    Add   Delete   Move   Reshape
Points     yes   yes      yes    no
Lines      yes   yes      yes    yes
Polygons   yes   yes      yes    no
Nodes      yes   yes      yes    no


When topology is constructed, feature attribute tables are created with several automatically created fields. Different fields are stored for the different types of layers. The automatically generated fields for a line layer are:

• FNODE#—the internal node number for the beginning of a line (from-node)

• TNODE#—the internal number for the end of a line (to-node)

• LPOLY#—the internal number for the polygon to the left of the line (zero for layers containing only lines and no polygons)

• RPOLY#—the internal number for the polygon to the right of the line (zero for layers containing only lines and no polygons)

• LENGTH—length of each line, measured in layer units

• Cover#—internal line number (values assigned by ERDAS IMAGINE)

• Cover-ID—user-ID (values modified by you)

The automatically generated fields for a point or polygon layer are:

• AREA—area of each polygon, measured in layer units (zero for layers containing only points and no polygons)

• PERIMETER—length of each polygon boundary, measured in layer units (zero for layers containing only points and no polygons)

• Cover#—internal polygon number (values assigned by ERDAS IMAGINE)

• Cover-ID—user-ID (values modified by you)

Building and Cleaning Coverages

The Build option processes points, lines, and polygons, but the Clean option processes only lines and polygons. Build recognizes only existing intersections (nodes), whereas Clean creates intersections (nodes) wherever lines cross one another. The differences in these two options are summarized in Table 38 (Environmental Systems Research Institute, 1990).

Table 38: Comparison of Building and Cleaning Coverages

Capabilities                      Build    Clean
Processes:
  Polygons                        Yes      Yes
  Lines                           Yes      Yes
  Points                          Yes      No
Numbers features                  Yes      Yes
Calculates spatial measurements   Yes      Yes
Creates intersections             No       Yes
Processing speed                  Faster   Slower


Errors

Constructing topology also helps to identify errors in the layer. Some of the common errors found are:

• Lines with less than two nodes

• Polygons that are not closed

• Polygons that have no label point or too many label points

• User-IDs that are not unique

Constructing topology can identify the errors mentioned above. When topology is constructed, line intersections are created, the lines that make up each polygon are identified, and a label point is associated with each polygon. Until topology is constructed, no polygons exist, and lines that cross each other are not connected at a node, because there is no intersection.

Construct topology using the Vector Utilities menu from the Vector icon in the ERDAS IMAGINE icon panel.

You should not build or clean a layer that is displayed in a Viewer, nor should you try to display a layer that is being built or cleaned.



When the Build or Clean options are used to construct the topology of a vector layer, two kinds of potential node errors may be observed: pseudo nodes and dangling nodes. These are identified in the Viewer with special symbols. The default symbols used by IMAGINE are shown in Figure 59 below, but may be changed in the Vector Properties dialog.

Pseudo nodes occur where a single line connects with itself (an island) or where only two lines intersect. Pseudo nodes do not necessarily indicate an error or a problem. Acceptable pseudo nodes may represent an island (a spatial pseudo node) or the point where a road changes from pavement to gravel (an attribute pseudo node).

A dangling node refers to the unconstructed node of a dangling line. Every line begins and ends at a node point. So if a line does not close properly, or was digitized past an intersection, it registers as a dangling node. In some cases, a dangling node may be acceptable. For example, in a street centerline map, cul-de-sacs or dead-ends are often represented by dangling nodes.

In polygon layers there may be label errors—usually no label point for a polygon, or more than one label point for a polygon. In the latter case, two or more points may have been mistakenly digitized for a polygon, or it may be that a line does not intersect another line, resulting in an open polygon.

Figure 59: Layer Errors

Errors detected in a layer can be corrected by changing the tolerances set for that layer and building or cleaning again, or by editing the layer manually, then running Build or Clean.

Refer to the ERDAS IMAGINE Tour Guides manual for step-by-step instructions on editing vector layers.

(Figure 59 illustrates dangling nodes, a pseudo node at an island, a polygon with no label point, and two label points in one polygon caused by a dangling node.)


Cartography

Introduction

Maps and mapping are the subject of the art and science known as cartography—creating two-dimensional representations of our three-dimensional Earth. These representations were once hand-drawn with paper and pen. But now, map production is largely automated—and the final output is not always paper. The capabilities of a computer system are invaluable to map users, who often need to know much more about an area than can be reproduced on paper, no matter how large that piece of paper is or how small the annotation is. Maps stored on a computer can be queried, analyzed, and updated quickly.

As the veteran GIS and image processing authority, Roger F. Tomlinson, said: “Mapped and related statistical data do form the greatest storehouse of knowledge about the condition of the living space of mankind.” With this thought in mind, it only makes sense that maps be created as accurately as possible and be as accessible as possible.

In the past, map making was carried out by mapping agencies who took the analyst’s (be they surveyors, photogrammetrists, or draftsmen) information and created a map to illustrate that information. But today, in many cases, the analyst is the cartographer and can design maps to best suit the data and the end user. This chapter defines some basic cartographic terms and explains how maps are created within the ERDAS IMAGINE environment.

Use the Map Composer to create hardcopy and softcopy maps and presentation graphics.

This chapter concentrates on the production of digital maps. See "Hardcopy Output" on page 289 for information about printing hardcopy maps.

Types of Maps

A map is a graphic representation of spatial relationships on the Earth or other planets. Maps can take on many forms and sizes, depending on the intended use of the map. Maps no longer refer only to hardcopy output. In this manual, the maps discussed begin as digital files and may be printed later as desired.

Some of the different types of maps are defined in Table 39.


Table 39: Types of Maps

Map Purpose

Aspect A map that shows the prevailing direction that a slope faces at each pixel. Aspect maps are often color-coded to show the eight major compass directions, or any of 360 degrees.

Base A map portraying background reference information onto which other information is placed. Base maps usually show the location and extent of natural Earth surface features and permanent human-made objects. Raster imagery, orthophotos, and orthoimages are often used as base maps.

Bathymetric A map portraying the shape of a water body or reservoir using isobaths (depth contours).

Cadastral A map showing the boundaries of the subdivisions of land for purposes of describing and recording ownership or taxation.

Choropleth A map portraying properties of a surface using area symbols. Area symbols usually represent categorized classes of the mapped phenomenon.

Composite A map on which the combined information from different thematic maps is presented.

Contour A map in which lines are used to connect points of equal elevation. Lines are often spaced in increments of ten or twenty feet or meters.

Derivative A map created by altering, combining, or analyzing other maps.

Index A reference map that outlines the mapped area, identifies all of the component maps for the area if several map sheets are required, and identifies all adjacent map sheets.

Inset A map that is an enlargement of some congested area of a smaller scale map, and that is usually placed on the same sheet with the smaller scale main map.

Isarithmic A map that uses isorithms (lines connecting points of the same value for any of the characteristics used in the representation of surfaces) to represent a statistical surface. Also called an isometric map.

Isopleth A map on which isopleths (lines representing quantities that cannot exist at a point, such as population density) are used to represent some selected quantity.

Morphometric A map representing morphological features of the Earth’s surface.

Outline A map showing the limits of a specific set of mapping entities, such as counties, NTS quads, etc. Outline maps usually contain a very small number of details over the desired boundaries with their descriptive codes.

Planimetric A map showing only the horizontal position of geographic objects, without topographic features or elevation contours.

Relief Any map that appears to be, or is, three-dimensional. Also called a shaded relief map.

Slope A map that shows changes in elevation over distance. Slope maps are usually color-coded according to the steepness of the terrain at each pixel.

Thematic A map illustrating the class characterizations of a particular spatial variable (e.g., soils, land cover, hydrology, etc.)

Topographic A map depicting terrain relief.

Viewshed A map showing only those areas visible (or invisible) from a specified point(s). Also called a line-of-sight map or a visibility map.


In ERDAS IMAGINE, maps are stored as a map file with a .map extension.

Thematic Maps

Thematic maps comprise a large portion of the maps that many organizations create. For this reason, this map type is explored in more detail.

Thematic maps may be subdivided into two groups:

• qualitative

• quantitative

A qualitative map shows the spatial distribution or location of a kind of nominal data. For example, a map showing corn fields in the United States would be a qualitative map. It would not show how much corn is produced in each location, or production relative to the other areas.

A quantitative map displays the spatial aspects of numerical data. A map showing corn production (volume) in each area would be a quantitative map. Quantitative maps show ordinal (less than/greater than) and interval/ratio (difference) scale data (Dent, 1985).

You can create thematic data layers from continuous data (aerial photography and satellite images) using the ERDAS IMAGINE classification capabilities. See "Classification" on page 545 for more information.



Base Information

Thematic maps should include a base of information so that the reader can easily relate the thematic data to the real world. This base may be as simple as an outline of counties, states, or countries, or something more complex, such as an aerial photograph or satellite image. In the past, it was difficult and expensive to produce maps that included both thematic and continuous data, but technological advances have made this easy.

For example, in a thematic map showing flood plains in the Mississippi River valley, you could overlay the thematic data onto a line coverage of state borders or a satellite image of the area. The satellite image can provide more detail about the areas bordering the flood plains. This may be valuable information when planning emergency response and resource management efforts for the area. Satellite images can also provide very current information about an area, and can assist you in assessing the accuracy of a thematic image.

In ERDAS IMAGINE, you can include multiple layers in a single map composition. See Map Composition on page 246 for more information about creating maps.

Color Selection

The colors used in thematic maps may or may not have anything to do with the class or category of information shown. Cartographers usually try to use a color scheme that highlights the primary purpose of the map. The map reader’s perception of colors also plays an important role. Most people are more sensitive to red, followed by green, yellow, blue, and purple. Although color selection is left entirely up to the map designer, some guidelines have been established (Robinson and Sale, 1969).

• When mapping interval or ordinal data, the higher ranks and greater amounts are generally represented by darker colors.

• Use blues for water.

• When mapping elevation data, start with blues for water, greens in the lowlands, ranging up through yellows and browns to reds in the higher elevations. This progression should not be used for series other than elevation.

• In temperature mapping, use red, orange, and yellow for warm temperatures and blue, green, and gray for cool temperatures.

• In land cover mapping, use yellows and tans for dryness and sparse vegetation and greens for lush vegetation.


• Use browns for land forms.

Use the Raster Attributes option in the Viewer to select and modify class colors.

Annotation

A map is more than just an image on a background. Since a map is a form of communication, it must convey information that may not be obvious by looking at the image. Therefore, maps usually contain several annotation elements to explain the map. Annotation is any explanatory material that accompanies a map to denote graphical features on the map. This annotation may take the form of:

• scale bars

• legends

• neatlines, tick marks, and grid lines

• symbols (north arrows, etc.)

• labels (rivers, mountains, cities, etc.) and descriptive text (title, copyright, credits, production notes, etc.)

The annotation listed above is made up of single elements. The basic annotation elements in ERDAS IMAGINE include:

• rectangles (including squares)

• ellipses (including circles)

• polygons and polylines

• text

These elements can be used to create more complex annotation, such as legends, scale bars, etc. These annotation components are actually groups of the basic elements and can be ungrouped and edited like any other graphic. You can also create your own groups to form symbols that are not in the ERDAS IMAGINE symbol library. (Symbols are discussed in more detail under Symbols on page 222.)

Create annotation using the Annotation tool palette in the Viewer or in a map composition.


How Annotation is Stored

An annotation layer is a set of annotation elements that is drawn in a Viewer or Map Composer window and stored in a file. Annotation that is created in a Viewer window is stored in a separate file from the other data in the Viewer. These annotation files are called overlay files (.ovr extension). Map annotation that is created in a Map Composer window is also stored in an .ovr file, which is named after the map composition. For example, the annotation for a file called lanier.map would be lanier.map.ovr.

Scale

Map scale is a statement that relates distance on a map to distance on the Earth’s surface. It is perhaps the most important information on a map, since the level of detail and map accuracy are both functions of the map scale. Scale is directly related to the map extent, or the area of the Earth’s surface to be mapped. If a relatively small area is to be mapped, such as a neighborhood or subdivision, then the scale can be larger. If a large area is to be mapped, such as an entire continent, the scale must be smaller. Generally, the smaller the scale, the less detailed the map can be. As a rule, anything smaller than 1:250,000 is considered small-scale.

Scale can be reported in several ways, including:

• representative fraction

• verbal statement

• scale bar

Representative Fraction

Map scale is often noted as a simple ratio or fraction called a representative fraction. A map in which one inch on the map equals 24,000 inches on the ground could be described as having a scale of 1:24,000 or 1/24,000. The units on both sides of the ratio must be the same.

Verbal Statement

A verbal statement of scale describes map distance in terms of ground distance. A verbal statement describing a scale of 1:1,000,000 is approximately 1 inch to 16 miles. The units on the map and on the ground do not have to be the same in a verbal statement. One-inch and 6-inch maps of the British Ordnance Survey are often referred to by this method (1 inch to 1 mile, 6 inches to 1 mile) (Robinson and Sale, 1969).
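The conversion behind a verbal statement is simple unit arithmetic: divide the scale denominator by 63,360, the number of inches in a mile. The following minimal Python sketch (a hypothetical helper, not an ERDAS IMAGINE function) reproduces the 1:1,000,000 example above.

    def verbal_statement(scale_denominator):
        """Convert a representative fraction 1:N to '1 inch to X miles'."""
        INCHES_PER_MILE = 63360.0  # 5,280 ft/mile x 12 in/ft
        miles_per_inch = scale_denominator / INCHES_PER_MILE
        return "1 inch to %.2f miles" % miles_per_inch

    print(verbal_statement(1000000))  # 1 inch to 15.78 miles (about 16 miles)
    print(verbal_statement(63360))    # 1 inch to 1.00 miles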


Scale Bars

A scale bar is a graphic annotation element that describes map scale. It shows the distance on paper that represents a given geographical distance on the ground. Maps often include more than one scale bar to indicate various measurement systems, such as kilometers and miles.

Figure 60: Sample Scale Bars

Use the Scale Bar tool in the Annotation tool palette to automatically create representative fractions and scale bars. Use the Text tool to create a verbal statement.

Common Map Scales

You can create maps with an unlimited number of scales; however, there are some commonly used scales. Table 39 lists these scales and their equivalents (Robinson and Sale, 1969).


Table 39: Common Map Scales

Map Scale     1/40 inch    1 inch       1 centimeter   1 mile is        1 kilometer is
              represents   represents   represents     represented by   represented by
1:2,000       4.200 ft     56.000 yd    20.000 m       31.680 in        50.00 cm
1:5,000       10.425 ft    139.000 yd   50.000 m       12.670 in        20.00 cm
1:10,000      6.952 yd     0.158 mi     0.100 km       6.340 in         10.00 cm
1:15,840      11.000 yd    0.250 mi     0.156 km       4.000 in         6.25 cm
1:20,000      13.904 yd    0.316 mi     0.200 km       3.170 in         5.00 cm
1:24,000      16.676 yd    0.379 mi     0.240 km       2.640 in         4.17 cm
1:25,000      17.380 yd    0.395 mi     0.250 km       2.530 in         4.00 cm
1:31,680      22.000 yd    0.500 mi     0.317 km       2.000 in         3.16 cm
1:50,000      34.716 yd    0.789 mi     0.500 km       1.270 in         2.00 cm
1:62,500      43.384 yd    0.986 mi     0.625 km       1.014 in         1.60 cm
1:63,360      0.025 mi     1.000 mi     0.634 km       1.000 in         1.58 cm
1:75,000      0.030 mi     1.180 mi     0.750 km       0.845 in         1.33 cm
1:80,000      0.032 mi     1.260 mi     0.800 km       0.792 in         1.25 cm
1:100,000     0.040 mi     1.580 mi     1.000 km       0.634 in         1.00 cm
1:125,000     0.050 mi     1.970 mi     1.250 km       0.507 in         8.00 mm
1:250,000     0.099 mi     3.950 mi     2.500 km       0.253 in         4.00 mm
1:500,000     0.197 mi     7.890 mi     5.000 km       0.127 in         2.00 mm
1:1,000,000   0.395 mi     15.780 mi    10.000 km      0.063 in         1.00 mm

Table 40 shows the number of pixels per inch for selected scales and pixel sizes.

Table 40: Pixels per Inch

Pixel      1”=100’   1”=200’   1”=500’   1”=1000’   1”=1500’   1”=2000’   1”=4167’   1”=1 mile
Size (m)   1:1200    1:2400    1:6000    1:12000    1:18000    1:24000    1:50000    1:63360
1          30.49     60.96     152.40    304.80     457.20     609.60     1270.00    1609.35
2          15.24     30.48     76.20     152.40     228.60     304.80     635.00     804.67
2.5        12.19     24.38     60.96     121.92     182.88     243.84     508.00     643.74
5          6.10      12.19     30.48     60.96      91.44      121.92     254.00     321.87
10         3.05      6.10      15.24     30.48      45.72      60.96      127.00     160.93
15         2.03      4.06      10.16     20.32      30.48      40.64      84.67      107.29
20         1.52      3.05      7.62      15.24      22.86      30.48      63.50      80.47
25         1.22      2.44      6.10      12.19      18.29      24.38      50.80      64.37
30         1.02      2.03      5.08      10.16      15.24      20.32      42.33      53.64
35         .87       1.74      4.35      8.71       13.08      17.42      36.29      45.98
40         .76       1.52      3.81      7.62       11.43      15.24      31.75      40.23
45         .68       1.35      3.39      6.77       10.16      13.55      28.22      35.76
50         .61       1.22      3.05      6.10       9.14       12.19      25.40      32.19
75         .41       .81       2.03      4.06       6.10       8.13       16.93      21.46
100        .30       .61       1.52      3.05       4.57       6.10       12.70      16.09
150        .20       .41       1.02      2.03       3.05       4.06       8.47       10.73
200        .15       .30       .76       1.52       2.29       3.05       6.35       8.05
250        .12       .24       .61       1.22       1.83       2.44       5.08       6.44
300        .10       .20       .51       1.02       1.52       2.03       4.23       5.36
350        .09       .17       .44       .87        1.31       1.74       3.63       4.60
400        .08       .15       .38       .76        1.14       1.52       3.18       4.02
450        .07       .14       .34       .68        1.02       1.35       2.82       3.58
500        .06       .12       .30       .61        .91        1.22       2.54       3.22
600        .05       .10       .25       .51        .76        1.02       2.12       2.69
700        .04       .09       .22       .44        .65        .87        1.81       2.30
800        .04       .08       .19       .38        .57        .76        1.59       2.01
900        .03       .07       .17       .34        .51        .68        1.41       1.79
1000       .03       .06       .15       .30        .46        .61        1.27       1.61

Courtesy of D. Cunningham and D. Way, The Ohio State University
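Every entry in Table 40 (and in Table 41, which follows) can be derived from the pixel size and the scale denominator. A minimal Python sketch, assuming the standard conversions of 0.0254 meters per inch and 2.471044 acres per hectare:

    def pixels_per_inch(scale_denominator, pixel_size_m):
        """One inch on the map covers scale_denominator inches of ground."""
        ground_m_per_map_inch = scale_denominator * 0.0254
        return ground_m_per_map_inch / pixel_size_m

    def area_per_pixel(pixel_size_m):
        """Ground area of one square pixel, returned as (acres, hectares)."""
        hectares = pixel_size_m ** 2 / 10000.0  # 10,000 sq m per hectare
        return hectares * 2.471044, hectares

    print(round(pixels_per_inch(24000, 30), 2))  # 20.32, matching Table 40
    print(area_per_pixel(30))  # about (0.2224, 0.09), matching Table 41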

Table 41 lists the number of acres and hectares per pixel for various pixel sizes.

Table 41: Acres and Hectares per Pixel

Pixel Size (m)   Acres      Hectares
1                0.0002     0.0001
2                0.0010     0.0004
2.5              0.0015     0.0006
5                0.0062     0.0025
10               0.0247     0.0100
15               0.0556     0.0225
20               0.0988     0.0400
25               0.1544     0.0625
30               0.2224     0.0900
35               0.3027     0.1225
40               0.3954     0.1600
45               0.5004     0.2025
50               0.6178     0.2500
75               1.3900     0.5625
100              2.4710     1.0000
150              5.5598     2.2500
200              9.8842     4.0000
250              15.4440    6.2500
300              22.2394    9.0000
350              30.2703    12.2500
400              39.5367    16.0000
450              50.0386    20.2500
500              61.7761    25.0000
600              88.9576    36.0000
700              121.0812   49.0000
800              158.1468   64.0000
900              200.1546   81.0000
1000             247.1044   100.0000

Courtesy of D. Cunningham and D. Way, The Ohio State University

Legends

A legend is a key to the colors, symbols, and line styles that are used in a map. Legends are especially useful for maps of categorical data displayed in pseudo color, where each color represents a different feature or category. A legend can also be created for a single layer of continuous data, displayed in gray scale. Legends are likewise used to describe all unknown or unique symbols utilized. Symbols in legends should appear exactly the same size and color as they appear on the map (Robinson and Sale, 1969).

Figure 61: Sample Legend (classes shown: pasture, forest, swamp, developed)

Use the Legend tool in the Annotation tool palette to automatically create color legends. Symbol legends are not created automatically, but can be created manually.


Neatlines, Tick Marks, and Grid Lines

Neatlines, tick marks, and grid lines serve to provide a georeferencing system for map detail and are based on the map projection of the image shown.

• A neatline is a rectangular border around the image area of a map. It differs from the map border in that the border usually encloses the entire map, not just the image area.

• Tick marks are small lines along the edge of the image area or neatline that indicate regular intervals of distance.

• Grid lines are intersecting lines that indicate regular intervals of distance, based on a coordinate system. Usually, they are an extension of tick marks. It is often helpful to place grid lines over the image area of a map. This is becoming less common on thematic maps, but is really up to the map designer. If the grid lines help readers understand the content of the map, they should be used.

Figure 62: Sample Neatline, Tick Marks, and Grid Lines

Grid lines may also be referred to as a graticule.

Graticules are discussed in more detail in Projections on page 226.

Use the Grid/Tick tool in the Annotation tool palette to create neatlines, tick marks, and grid lines. Tick marks and grid lines can also be created over images displayed in a Viewer. See the On-Line Help for instructions.



Symbols

Since maps are a greatly reduced version of the real world, objects cannot be depicted in their true shape or size. Therefore, a set of symbols is devised to represent real-world objects. There are two major classes of symbols:

• replicative

• abstract

Replicative symbols are designed to look like their real-world counterparts; they represent tangible objects, such as coastlines, trees, railroads, and houses. Abstract symbols usually take the form of geometric shapes, such as circles, squares, and triangles. They are traditionally used to represent amounts that vary from place to place, such as population density, amount of rainfall, etc. (Dent, 1985).

Both replicative and abstract symbols are composed of one or more of the following annotation elements:

• point

• line

• area

Symbol Types

These basic elements can be combined to create three different types of replicative symbols:

• plan—formed after the basic outline of the object it represents. For example, the symbol for a house might be a square, because most houses are rectangular.

• profile—formed like the profile of an object. Profile symbols generally represent vertical objects, such as trees, windmills, oil wells, etc.

• function—formed after the activity that a symbol represents. For example, on a map of a state park, a symbol of a tent would indicate the location of a camping area.

Figure 63: Sample Symbols (Plan, Profile, Function)


Symbols can have different sizes, colors, and patterns to indicate different meanings within a map. The use of size, color, and pattern generally shows qualitative or quantitative differences among areas marked. For example, if a circle is used to show cities and towns, larger circles would be used to show areas with higher population. A specific color could be used to indicate county seats. Since symbols are not drawn to scale, their placement is crucial to effective communication.

Use the Symbol tool in the Annotation tool palette and the symbol library to place symbols in maps.

Labels and Descriptive Text

Place names and other labels convey important information to the reader about the features on the map. Any features that help orient the reader or are important to the content of the map should be labeled. Descriptive text on a map can include the map title and subtitle, copyright information, captions, credits, production notes, or other explanatory material.

Title

The map title usually draws attention by virtue of its size. It focuses the reader’s attention on the primary purpose of the map. The title may be omitted, however, if captions are provided outside of the image area (Dent, 1985).

Credits

Map credits (or source information) can include the data source and acquisition date, accuracy information, and other details that are required or helpful to readers. For example, if you include data that you do not own in a map, you must give credit to the owner.

Use the Text tool in the Annotation tool palette to add labels and descriptive text to maps.

Typography and Lettering

The choice of type fonts and styles and how names are lettered can make the difference between a clear and attractive map and a jumble of imagery and text. As with many other aspects of map design, this is a very subjective area and many organizations already have guidelines to use. This section is intended as an introduction to the concepts involved and to convey traditional guidelines, where available.


If your organization does not have a set of guidelines for the appearance of maps and you plan to produce many in the future, it would be beneficial to develop a style guide specifically for mapping. This ensures that all of the maps produced follow the same conventions, regardless of who actually makes the map.

ERDAS IMAGINE enables you to make map templates to facilitate the development of map standards within your organization.

Type Styles

Type style refers to the appearance of the text and may include font, size, and style (bold, italic, underline, etc.). Although the type styles used in maps are purely a matter of the designer’s taste, the following techniques help to make maps more legible (Robinson and Sale, 1969; Dent, 1985).

• Do not use too many different typefaces in a single map. Generally, one or two typefaces are enough when also using their variations (e.g., bold, italic, underline, etc.). When using two typefaces, use a serif and a sans serif, rather than two different serif fonts or two different sans serif fonts [e.g., Sans (sans serif) and Roman (serif) could be used together in one map].

• Avoid ornate text styles because they can be difficult to read.

• Exercise caution in using very thin letters that may not reproduce well. On the other hand, using letters that are too bold may obscure important information in the image.

• Use different sizes of type for showing varying levels of importance. For example, on a map with city and town labels, city names are usually in a larger type size than the town names. Use no more than four to six different type sizes.

• Set more important labels, titles, and names in all capital letters, and less important text in lowercase with initial capitals. This is a matter of personal preference, although names in which the letters must be spread out across a large area are better in all capital letters. (Studies have found that capital letters are more difficult to read, so lowercase letters might improve the legibility of the map.)

• In the past, hydrology, landform, and other natural features were labeled in italic. However, this is not strictly adhered to by map makers today, although water features are still nearly always labeled in italic.


Figure 64: Sample Sans Serif and Serif Typefaces with Various Styles Applied

Use the Styles dialog to adjust the style of text.

Lettering

Lettering refers to the way in which place names and other labels are added to a map. Letter spacing, orientation, and position are the three most important factors in lettering. Here again, there are no set rules for how lettering is to appear. Much is determined by the purpose of the map and the end user. Many organizations have developed their own rules for lettering. Here is a list of guidelines that have been used by cartographers in the past (Robinson and Sale, 1969; Dent, 1985).

• Names should be either entirely on land or water—not overlapping both.

• Lettering should generally be oriented to match the orientation structure of the map. In large-scale maps this means parallel with the upper and lower edges, and in small-scale maps, this means in line with the parallels of latitude.

• Type should not be curved (that is, contrary to the preceding guideline) unless it is necessary to do so.

• If lettering must be disoriented, it should never be set in a straight line, but should always have a slight curve.

• Names should be letter spaced (i.e., space between individual letters, or kerning) as little as necessary.

• Where the continuity of names and other map data, such as lines and tones, conflicts with the lettering, the data, but not the names, should be interrupted.

• Lettering should never be upside-down in any respect.



• Lettering that refers to point locations should be placed above or below the point, preferably above and to the right.

• The letters identifying linear features (roads, rivers, railroads, etc.) should not be spaced. The word(s) should be repeated along the feature as often as necessary to facilitate identification. These labels should be placed above the feature and river names should slant in the direction of the river flow (if the label is italic).

• For geographical names, use the native language of the intended map user. For an English-speaking audience, the name Germany should be used, rather than Deutschland.

Figure 65: Good Lettering vs. Bad Lettering

Text Color

Many cartographers argue that all lettering on a map should be black. However, the map may be well-served by incorporating color into its design. In fact, studies have shown that coding labels by color can improve a reader’s ability to find information (Dent, 1985).

Projections

This section is adapted from “Map Projections for Use with the Geographic Information System” by Lee and Walsh (Lee and Walsh, 1984).

A map projection is the manner in which the spherical surface of the Earth is represented on a flat (two-dimensional) surface. This can be accomplished by direct geometric projection or by a mathematically derived transformation. There are many kinds of projections, but all involve transfer of the distinctive global patterns of parallels of latitude and meridians of longitude onto an easily flattened surface, or developable surface.



The three most common developable surfaces are the cylinder, cone, and plane (Figure 66 on page 229). A plane is already flat, while a cylinder or cone may be cut and laid out flat, without stretching. Thus, map projections may be classified into three general families: cylindrical, conical, and azimuthal or planar.

Map projections are selected in the Projection Chooser. For more information about the Projection Chooser, see the ERDAS IMAGINE On-Line Help.

Properties of Map Projections

Regardless of what type of projection is used, it is inevitable that some error or distortion occurs in transforming a spherical surface into a flat surface. Ideally, a distortion-free map has four valuable properties:

• conformality

• equivalence

• equidistance

• true direction

Each of these properties is explained below. No map projection can be true in all of these properties. Therefore, each projection is devised to be true in selected properties, or most often, a compromise among selected properties. Projections that compromise in this manner are known as compromise projections.

Conformality is the characteristic of true shape, wherein a projection preserves the shape of any small geographical area. This is accomplished by exact transformation of angles around points. One necessary condition is the perpendicular intersection of grid lines as on the globe. The property of conformality is important in maps which are used for analyzing, guiding, or recording motion, as in navigation. A conformal map or projection is one that has the property of true shape.

Equivalence is the characteristic of equal area, meaning that areas on one portion of a map are in scale with areas in any other portion. Preservation of equivalence involves inexact transformation of angles around points and thus, is mutually exclusive with conformality except along one or two selected lines. The property of equivalence is important in maps that are used for comparing density and distribution data, as in populations.


Equidistance is the characteristic of true distance measuring. The scale of distance is constant over the entire map. This property can be fulfilled on any given map from one, or at most two, points in any direction or along certain lines. Equidistance is important in maps that are used for analyzing measurements (i.e., road distances). Typically, reference lines such as the equator or a meridian are chosen to have equidistance and are termed standard parallels or standard meridians.

True direction is characterized by a direction line between two points that crosses reference lines (e.g., meridians) at a constant angle or azimuth. An azimuth is an angle measured clockwise from a meridian, going north to east. The line of constant or equal direction is termed a rhumb line. The property of constant direction makes it comparatively easy to chart a navigational course. However, on a spherical surface, the shortest surface distance between two points is not a rhumb line, but a great circle, being an arc of a circle whose center is the center of the Earth. Along a great circle, azimuths constantly change (unless the great circle is the equator or a meridian). Thus, a more desirable property than true direction may be where great circles are represented by straight lines. This characteristic is most important in aviation. Note that all meridians are great circles, but the only parallel that is a great circle is the equator.
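The distinction between rhumb lines and great circles is easy to see numerically. A minimal Python sketch of the haversine formula, assuming a perfect sphere of radius 6,371 km (close to the generic Sphere spheroid listed later in Table 44), gives the great-circle (shortest) surface distance between two points:

    from math import radians, sin, cos, asin, sqrt

    def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
        """Haversine formula: shortest surface distance on a sphere."""
        dlat = radians(lat2 - lat1)
        dlon = radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + \
            cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * radius_km * asin(sqrt(a))

    # Atlanta (33.75 N, 84.39 W) to London (51.51 N, 0.13 W)
    print(round(great_circle_km(33.75, -84.39, 51.51, -0.13)))

Following a constant-azimuth rhumb line between the same two points covers a longer distance, which is why navigation charts trade shortest distance for ease of steering.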


Figure 66: Projection Types (Regular Cylindrical, Transverse Cylindrical, Oblique Cylindrical, Regular Conic, Polar Azimuthal (planar), and Oblique Azimuthal (planar))

Projection Types Although a great number of projections have been devised, the majority of them are geometric or mathematical variants of the basic direct geometric projection families described below. Choice of the projection to be used depends upon the true property or combination of properties desired for effective cartographic analysis.

Azimuthal Projections

Azimuthal projections, also called planar projections, are accomplished by drawing lines from a given perspective point through the globe onto a tangent plane. This is conceptually equivalent to tracing a shadow of a figure cast by a light source. A tangent plane intersects the global surface at only one point and is perpendicular to a line passing through the center of the sphere. Thus, these projections are symmetrical around a chosen center or central meridian. Choice of the projection center determines the aspect, or orientation, of the projection surface.



Azimuthal projections may be centered:

• on the poles (polar aspect)

• at a point on the equator (equatorial aspect)

• at any other orientation (oblique aspect)

The origin of the projection lines—that is, the perspective point—may also assume various positions (a short numerical sketch contrasting the three follows this list). For example, it may be:

• the center of the Earth (gnomonic)

• an infinite distance away (orthographic)

• on the Earth’s surface, opposite the projection plane (stereographic)
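For the polar aspect on a sphere of radius R, the three perspective points above place a given parallel at different distances r from the pole. A minimal Python sketch of these standard textbook formulas (not the ERDAS IMAGINE implementation):

    from math import radians, tan, sin

    def polar_radius_km(lat_deg, R=6371.0):
        """Map radius of the parallel at lat_deg for three perspective
        points, polar aspect, plane tangent at the pole."""
        colat = radians(90.0 - lat_deg)  # angular distance from the pole
        return {
            "gnomonic": R * tan(colat),               # center of the Earth
            "stereographic": 2 * R * tan(colat / 2),  # opposite pole
            "orthographic": R * sin(colat),           # infinite distance
        }

    for name, r in polar_radius_km(60.0).items():
        print(name, round(r, 1))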

Conical Projections

Conical projections are accomplished by intersecting, or touching, a cone with the global surface and mathematically projecting lines onto this developable surface. A tangent cone intersects the global surface to form a circle. Along this line of intersection, the map is error-free and possesses equidistance. Usually, this line is a parallel, termed the standard parallel.

Cones may also be secant, and intersect the global surface, forming two circles that possess equidistance. In this case, the cone slices underneath the global surface, between the standard parallels. Note that the use of the word secant, in this instance, is only conceptual and not geometrically accurate. Conceptually, the conical aspect may be polar, equatorial, or oblique. Only polar conical projections are supported in ERDAS IMAGINE.

Figure 67: Tangent and Secant Cones (tangent: one standard parallel; secant: two standard parallels)


Cylindrical Projections

Cylindrical projections are accomplished by intersecting, or touching, a cylinder with the global surface. The surface is mathematically projected onto the cylinder, which is then cut and unrolled.

A tangent cylinder intersects the global surface on only one line to form a circle, as with a tangent cone. This central line of the projection is commonly the equator and possesses equidistance. If the cylinder is rotated 90 degrees from the vertical (i.e., the long axis becomes horizontal), then the aspect becomes transverse, wherein the central line of the projection becomes a chosen standard meridian as opposed to a standard parallel. A secant cylinder, one slightly less in diameter than the globe, has two lines possessing equidistance.

Figure 68: Tangent and Secant Cylinders (tangent: one standard parallel; secant: two standard parallels)

Perhaps the most famous cylindrical projection is the Mercator, which became the standard navigational map. Mercator possesses true direction and conformality.
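The Mercator’s behavior can be written down compactly. A minimal sketch of the forward equations for a sphere of radius R (spheroid-based versions add eccentricity terms): x = R·λ and y = R·ln tan(π/4 + φ/2). In Python:

    from math import radians, log, tan, pi

    def mercator(lat_deg, lon_deg, R=6371000.0, central_meridian_deg=0.0):
        """Forward spherical Mercator. The y spacing of parallels grows
        with latitude, which is what makes rhumb lines plot straight."""
        x = R * radians(lon_deg - central_meridian_deg)
        y = R * log(tan(pi / 4.0 + radians(lat_deg) / 2.0))
        return x, y

    print(mercator(60.0, 10.0))  # y is noticeably more than R * radians(60)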

Other Projections

The projections discussed so far are created by projecting from a sphere (the Earth) onto a plane, cone, or cylinder. Many other projections cannot be created so easily.

Modified projections are modified versions of another projection. For example, the Space Oblique Mercator projection is a modification of the Mercator projection. These modifications are made to reduce distortion, often by including additional standard lines or a different pattern of distortion.

Pseudo projections have only some of the characteristics of another class of projection. For example, the Sinusoidal is called a pseudocylindrical projection because all lines of latitude are straight and parallel, and all meridians are equally spaced. However, it cannot truly be a cylindrical projection, because all meridians except the central meridian are curved. This results in the Earth appearing oval instead of rectangular (Environmental Systems Research Institute, 1991).



Geographical and Planar Coordinates

Map projections require a point of reference on the Earth’s surface. Most often this is the center, or origin, of the projection. This point is defined in two coordinate systems:

• geographical

• planar

Geographical

Geographical, or spherical, coordinates are based on the network of latitude and longitude (Lat/Lon) lines that make up the graticule of the Earth. Within the graticule, lines of longitude are called meridians, which run north/south, with the prime meridian at 0° (Greenwich, England). Meridians are designated as 0° to 180°, east or west of the prime meridian. The 180° meridian (opposite the prime meridian) is the International Dateline. Lines of latitude are called parallels, which run east/west. Parallels are designated as 0° at the equator to 90° at the poles. The equator is the largest parallel.

Latitude and longitude are defined with respect to an origin located at the intersection of the equator and the prime meridian. Lat/Lon coordinates are reported in degrees, minutes, and seconds. Map projections are various arrangements of the Earth’s latitude and longitude lines onto a plane.

Planar

Planar, or Cartesian, coordinates are defined by a column and row position on a planar grid (X,Y). The origin of a planar coordinate system is typically located south and west of the origin of the projection. Coordinates increase from 0,0 going east and north. The origin of the projection, being a false origin, is defined by values of false easting and false northing.

Grid references always contain an even number of digits; the first half refers to the easting and the second half to the northing. In practice, this eliminates negative coordinate values and allows locations on a map projection to be defined by positive coordinate pairs. Values of false easting are read first and may be in meters or feet.
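A minimal sketch of how a false origin works, using the familiar UTM convention of a 500,000 m false easting on the central meridian (false northing 0 in the northern hemisphere):

    def to_grid(x_m, y_m, false_easting=500000.0, false_northing=0.0):
        """Shift coordinates measured from the projection origin by the
        false origin so that all grid values come out positive."""
        return x_m + false_easting, y_m + false_northing

    # A point 120 km west of the central meridian still gets a
    # positive easting:
    print(to_grid(-120000.0, 3765000.0))  # (380000.0, 3765000.0)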

Available Map Projections

In ERDAS IMAGINE, map projection information appears in the Projection Chooser, which is used to georeference images and to convert map coordinates from one type of projection to another. The Projection Chooser provides the following projections:

USGS Projections

• Alaska Conformal

• Albers Conical Equal Area


• Azimuthal Equidistant

• Behrmann

• Bonne

• Cassini

• Eckert I

• Eckert II

• Eckert III

• Eckert IV

• Eckert V

• Eckert VI

• EOSAT SOM

• Equidistant Conic

• Equidistant Cylindrical

• Equirectangular (Plate Carrée)

• Gall Stereographic

• Gauss Kruger

• General Vertical Near-side Perspective

• Geographic (Lat/Lon)

• Gnomonic

• Hammer

• Interrupted Goode Homolosine

• Interrupted Mollweide

• Lambert Azimuthal Equal Area

• Lambert Conformal Conic

• Loximuthal


• Mercator

• Miller Cylindrical

• Modified Transverse Mercator

• Mollweide

• New Zealand Map Grid

• Oblated Equal Area

• Oblique Mercator (Hotine)

• Orthographic

• Plate Carrée

• Polar Stereographic

• Polyconic

• Quartic Authalic

• Robinson

• RSO

• Sinusoidal

• Space Oblique Mercator

• Space Oblique Mercator (Formats A & B)

• State Plane

• Stereographic

• Stereographic (Extended)

• Transverse Mercator

• Two Point Equidistant

• UTM

• Van der Grinten I

• Wagner IV


• Wagner VII

• Winkel I

External Projections

• Albers Equal Area (see Albers Conical Equal Area on page 303)

• Azimuthal Equidistant (see Azimuthal Equidistant on page 306)

• Bipolar Oblique Conic Conformal

• Cassini-Soldner

• Conic Equidistant (see Equidistant Conic on page 333)

• Laborde Oblique Mercator

• Lambert Azimuthal Equal Area (see Lambert Azimuthal Equal Area on page 353)

• Lambert Conformal Conic (see Lambert Conformal Conic on page 356)

• Mercator (see Mercator on page 363)

• Minimum Error Conformal

• Modified Polyconic

• Modified Stereographic

• Mollweide Equal Area (see Mollweide on page 372)

• Oblique Mercator (see Oblique Mercator (Hotine) on page 376)

• Orthographic (see Orthographic on page 379)

• Plate Carrée (see Equirectangular (Plate Carrée) on page 336)

• Rectified Skew Orthomorphic (see RSO on page 392)

• Regular Polyconic (see Polyconic on page 386)

• Robinson Pseudocylindrical (see Robinson on page 390)

• Sinusoidal (see Sinusoidal on page 393)

• Southern Orientated Gauss Conformal


• Stereographic (see Stereographic on page 408)

• Swiss Cylindrical

• Stereographic (Oblique) (see Stereographic on page 408)

• Transverse Mercator (see Transverse Mercator on page 412)

• Universal Transverse Mercator (see UTM on page 416)

• Van der Grinten (see Van der Grinten I on page 419)

• Winkel’s Tripel

Choice of the projection to be used depends upon the desired major property and the region to be mapped (see Table 42). After choosing the desired map projection, several parameters are required for its definition (see Table 43). These parameters fall into three general classes: (1) definition of the spheroid, (2) definition of the surface viewing window, and (3) definition of scale.

For each map projection, a menu of spheroids displays, along with appropriate prompts that enable you to specify these parameters.

Units

Use the units of measure that are appropriate for the map projection type.

• Lat/Lon coordinates are expressed in decimal degrees. When prompted, you can use the DD function to convert coordinates in degrees, minutes, seconds format to decimal (a short sketch of this conversion follows this list). For example, for 30°51’12’’:

dd(30,51,12) = 30.85333
-dd(30,51,12) = -30.85333

or

30:51:12 = 30.85333

You can also enter Lat/Lon coordinates in radians.

• State Plane coordinates are expressed in feet or meters.

• All other coordinates are expressed in meters.

Note also that values for longitude west of Greenwich, England, and values for latitude south of the equator are to be entered as negatives.
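A minimal Python equivalent of the dd() conversion described above (the signs for south latitude and west longitude are applied by the caller, as in the example):

    def dd(degrees, minutes, seconds):
        """Degrees, minutes, seconds to decimal degrees."""
        return degrees + minutes / 60.0 + seconds / 3600.0

    print(round(dd(30, 51, 12), 5))   # 30.85333
    print(round(-dd(30, 51, 12), 5))  # -30.85333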


Table 42: Map Projections

Type  Map Projection                           Construction        Property                   Use
0     Geographic                               N/A                 N/A                        Data entry, spherical coordinates
1     Universal Transverse Mercator            Cylinder (see #9)   Conformal                  Data entry, plane coordinates
2     State Plane                              (see #4, 7, 9, 20)  Conformal                  Data entry, plane coordinates
3     Albers Conical Equal Area                Cone                Equivalent                 Middle latitudes, E-W expanses
4     Lambert Conformal Conic                  Cone                Conformal, True Direction  Middle latitudes, E-W expanses, flight (straight great circles)
5     Mercator                                 Cylinder            Conformal, True Direction  Nonpolar regions, navigation (straight rhumb lines)
6     Polar Stereographic                      Plane               Conformal                  Polar regions
7     Polyconic                                Cone                Compromise                 N-S expanses
8     Equidistant Conic                        Cone                Equidistant                Middle latitudes, E-W expanses
9     Transverse Mercator                      Cylinder            Conformal                  N-S expanses
10    Stereographic                            Plane               Conformal                  Hemispheres, continents
11    Lambert Azimuthal Equal Area             Plane               Equivalent                 Square or round expanses
12    Azimuthal Equidistant                    Plane               Equidistant                Polar regions, radio/seismic work (straight great circles)
13    Gnomonic                                 Plane               Compromise                 Navigation, seismic work (straight great circles)
14    Orthographic                             Plane               Compromise                 Globes, pictorial
15    General Vertical Near-Side Perspective   Plane               Compromise                 Hemispheres or less
16    Sinusoidal                               Pseudo-Cylinder     Equivalent                 N-S expanses or equatorial regions
17    Equirectangular                          Cylinder            Compromise                 City maps, computer plotting (simplistic)
18    Miller Cylindrical                       Cylinder            Compromise                 World maps
19    Van der Grinten I                        N/A                 Compromise                 World maps
20    Oblique Mercator                         Cylinder            Conformal                  Oblique expanses (e.g., Hawaiian islands), satellite tracking
21    Space Oblique Mercator                   Cylinder            Conformal                  Mapping of Landsat imagery
22    Modified Transverse Mercator             Cylinder            Conformal                  Alaska

Table 43: Projection Parameters

The parameters required to define a projection fall into the three classes below; which parameters apply depends on the projection type (types 3 through 22, numbered as in Table 42). Parameters for definition of map projection types 0-2 are not applicable and are described in the text. Additional parameters required for definition of some map projections are described in the text of “Map Projections” on page 297.

Definition of Spheroid

• Spheroid selections

Definition of Surface Viewing Window

• False easting

• False northing

• Longitude of central meridian

• Latitude of origin of projection

• Longitude of center of projection

• Latitude of center of projection

• Latitude of first standard parallel

• Latitude of second standard parallel

• Latitude of true scale

• Longitude below pole

Definition of Scale

• Scale factor at central meridian

• Height of perspective point above sphere

• Scale factor at center of projection


Choosing a Map Projection

Map Projection Uses in a GIS

Selecting a map projection for the GIS database enables you to (Maling, 1992):

• decide how to best display the area of interest or illustrate the results of analysis

• register all imagery to a single coordinate system for easier comparisons

• test the accuracy of the information and perform measurements on the data

Deciding Factors

Depending on your applications and the uses for the maps created, one or several map projections may be used. Many factors must be weighed when selecting a projection, including:

• type of map

• special properties that must be preserved

• types of data to be mapped

• map accuracy

• scale

If you are mapping a relatively small area, virtually any map projection is acceptable. In mapping large areas (entire countries, continents, and the world), the choice of map projection becomes more critical. In small areas, the amount of distortion in a particular projection is barely, if at all, noticeable. In large areas, there may be little or no distortion in the center of the map, but distortion increases outward toward the edges of the map.

Guidelines

Since the sixteenth century, there have been three fundamental rules regarding map projection use (Maling, 1992):

• if the country to be mapped lies in the tropics, use a cylindrical projection

• if the country to be mapped lies in the temperate latitudes, use a conical projection

• if the map is required to show one of the polar regions, use an azimuthal projection


These rules are no longer held so strongly. There are too many factors to consider in map projection selection for broad generalizations to be effective today. The purpose of a particular map and the merits of the individual projections must be examined before an educated choice can be made. However, there are some guidelines that may help you select a projection (Pearson, 1990):

• Statistical data should be displayed using an equal area projection to maintain proper proportions (although shape may be sacrificed).

• Equal area projections are well-suited to thematic data.

• Where shape is important, use a conformal projection.

Spheroids

The previous discussion of direct geometric map projections assumes that the Earth is a sphere, and for many maps this is satisfactory. However, due to rotation of the Earth around its axis, the planet bulges slightly at the equator. This flattening of the sphere makes it an oblate spheroid, which is an ellipse rotated around its shorter axis.

Figure 69: Ellipse

An ellipse is defined by its semi-major (long) and semi-minor (short) axes. The amount of flattening of the Earth is expressed as the ratio:

f = (a - b) / a

Where:

a = the equatorial radius (semi-major axis)
b = the polar radius (semi-minor axis)

Most map projections use eccentricity (e²) rather than flattening. The relationship is:

e² = 2f - f²
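Applying these two formulas to the Clarke 1866 axes listed later in Table 44 reproduces the often-quoted figures for that spheroid. A minimal Python sketch:

    def flattening_and_eccentricity(a, b):
        """f = (a - b) / a and e^2 = 2f - f^2."""
        f = (a - b) / a
        return f, 2 * f - f ** 2

    # Clarke 1866: a = 6378206.4 m, b = 6356583.8 m
    f, e2 = flattening_and_eccentricity(6378206.4, 6356583.8)
    print(round(1 / f, 2))  # about 294.98, i.e., roughly 1 part in 300
    print(round(e2, 6))     # about 0.006769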


The flattening of the Earth is about 1 part in 300, and becomes significant in map accuracy at a scale of 1:100,000 or larger. Calculation of a map projection requires definition of the spheroid (or ellipsoid) in terms of the length of axes and eccentricity squared (or radius of the reference sphere).

Several principal spheroids are in use by one or more countries. Differences are due primarily to calculation of the spheroid for a particular region of the Earth’s surface. Only recently have satellite tracking data provided spheroid determinations for the entire Earth. However, these spheroids may not give the best fit for a particular region. In North America, the spheroid in use is the Clarke 1866 for NAD27 and GRS 1980 for NAD83 (State Plane). If other regions are to be mapped, different spheroids should be used. Upon choosing a desired projection type, you have the option to choose from the following list of spheroids:

• Airy

• Australian National

• Bessel

• Clarke 1866

• Clarke 1880

• Everest

• GRS 1980

• Helmert

• Hough

• International 1909

• Krasovsky

• Mercury 1960

• Modified Airy

• Modified Everest

• Modified Mercury 1968

• New International 1967

• Southeast Asia


• Sphere of Nominal Radius of Earth

• Sphere of Radius 6370997 m

• Walbeck

• WGS 66

• WGS 72

• WGS 84

The spheroids listed above are the most commonly used. There are many other spheroids available, and they are listed in the Projection Chooser. These additional spheroids are not documented in this manual. You can use the IMAGINE Developers’ Toolkit to add your own map projections and spheroids to ERDAS IMAGINE.

The semi-major and semi-minor axes of all supported spheroids are listed in Table 44, as well as the principal uses of these spheroids.

Table 44: Earth Spheroids for use with ERDAS IMAGINE

Spheroid                              Semi-Major Axis   Semi-Minor Axis       Use
165                                   6378165.0         6356783.0             Global
Airy (1940)                           6377563.0         6356256.91            England
Airy Modified (1849)                                                          Ireland
Australian National (1965)            6378160.0         6356774.719           Australia
Bessel (1841)                         6377397.155       6356078.96284         Central Europe, Chile, and Indonesia
Bessel (Namibia)                      6377483.865       6356165.383           Namibia
Clarke 1858                           6378293.0         6356619.0             Global
Clarke 1866                           6378206.4         6356583.8             North America and the Philippines
Clarke 1880                           6378249.145       6356514.86955         France and Africa
Clarke 1880 IGN                       6378249.2         6356515.0             Global
Everest (1830)                        6377276.3452      6356075.4133          India, Burma, and Pakistan
Everest (1956)                        6377301.243       6356100.2284          India, Nepal
Everest (1969)                        6377295.664       6356094.6679          Global
Everest (Malaysia & Singapore)        6377304.063       6356103.038993        Global
Everest (Pakistan)                    6377309.613       6356108.570542        Pakistan
Everest (Sabah & Sarawak)             6377298.556       6356097.5503          Brunei, East Malaysia
Fischer (1960)                        6378166.0         6356784.2836          Global
Fischer (1968)                        6378150.0         6356768.3372          Global
GRS 1980 (Geodetic Reference System)  6378137.0         6356752.31414         Adopted in North America for the 1983 Earth-centered coordinate system (satellite)
Hayford                               6378388.0         6356911.946128        Global
Helmert                               6378200.0         6356818.16962789092   Egypt
Hough                                 6378270.0         6356794.343479        As International 1909, with modification of ellipse axes
IAU 1965                              6378160.0         6356775.0             Global
Indonesian 1974                       6378160.0         6356774.504086        Global
International 1909 (= Hayford)        6378388.0         6356911.94613         Remaining parts of the world not listed here
IUGG 1967                             6378160.0         6356774.516           Hungary
Krasovsky (1940)                      6378245.0         6356863.0188          Former Soviet Union and some East European countries
Mercury 1960                          6378166.0         6356794.283666        Early satellite, rarely used
Modified Airy                         6377341.89        6356036.143           As Airy, more recent version
Modified Everest                      6377304.063       6356103.039           As Everest, more recent version
Modified Mercury 1968                 6378150.0         6356768.337303        As Mercury 1960, more recent calculation
Modified Fischer (1960)               6378155.0         6356773.3205          Singapore
New International 1967                6378157.5         6356772.2             As International 1909, more recent calculation
SGS 85 (Soviet Geodetic System 1985)  6378136.0         6356751.3016          Soviet Union
South American (1969)                 6378160.0         6356774.7192          South America
Southeast Asia                        6378155.0         6356773.3205          As named
Sphere                                6371000.0         6371000.0             Global
Sphere of Nominal Radius of Earth     6370997.0         6370997.0             A perfect sphere
Sphere of Radius 6370997 m            6370997.0         6370997.0             A perfect sphere with the same surface area as the Clarke 1866 spheroid
Walbeck (1819)                        6376896.0         6355834.8467          Soviet Union, up to 1910
WGS 60 (World Geodetic System 1960)   6378165.0         6356783.287           Global
WGS 66 (World Geodetic System 1966)   6378145.0         6356759.769356        As WGS 72, older version
WGS 72 (World Geodetic System 1972)   6378135.0         6356750.519915        NASA (satellite)
WGS 84 (World Geodetic System 1984)   6378137.0         6356752.31424517929   As WGS 72, more recent calculation


Non-Earth Spheroids

Spheroid models can be applied to planetary bodies other than the Earth, such as the Moon, Venus, Mars, various asteroids, and other planets in our Solar System. Spheroids for these planetary bodies have a defined semi-major axis and a semi-minor axis, measured in meters, corresponding to Earth spheroids.

See Figure 69 on page 241 for an illustration of the axes defined in an ellipse.

The semi-major and semi-minor axes of the supported extraterrestrial spheroids are listed in the following table.

Table 45: Non-Earth Spheroids for use with ERDAS IMAGINE

Spheroid   Semi-Major Axis   Semi-Minor Axis
Moon       1738100.0         1736000.0
Mercury    2439700.0         2439700.0
Venus      6051800.0         6051800.0
Mars       3396200.0         3376200.0
Jupiter    71492000.0        66854000.0
Saturn     60268000.0        54364000.0
Uranus     25559000.0        24973000.0
Neptune    24764000.0        24341000.0
Pluto      1195000.0         1195000.0

Map Composition

Learning Map Composition

Cartography and map composition may seem like an entirely new discipline to many GIS and image processing analysts—and that is partly true. But, by learning the basics of map design, the results of your analyses can be communicated much more effectively. Map composition is also much easier than in the past, when maps were hand drawn. Many GIS analysts may already know more about cartography than they realize, simply because they have access to map-making software. Perhaps the first maps you made were imitations of existing maps, but that is how we learn. This chapter is certainly not a textbook on cartography; it is merely an overview of some of the issues involved in creating cartographically-correct products.


Plan the Map

After your analysis is complete, you can begin map composition. The first step in creating a map is to plan its contents and layout. The following questions may aid in the planning process:

• How is this map going to be used?

• Will the map have a single theme or many?

• Is this a single map, or is it part of a series of similar maps?

• Who is the intended audience? What is the level of their knowledge about the subject matter?

• Will it remain in digital form and be viewed on the computer screen or will it be printed?

• If it is going to be printed, how big will it be? Will it be printed in color or black and white?

• Are there map guidelines already set up by your organization?

The answers to these questions can help to determine the type of information that must go into the composition and the layout of that information. For example, suppose you are going to do a series of maps about global deforestation for presentation to Congress, and you are going to print these maps in color on an inkjet printer. This scenario might lead to the following conclusions:

• A format (layout) should be developed for the series, so that all the maps produced have the same style.

• The colors used should be chosen carefully, since the maps are printed in color.

• Political boundaries might need to be included, since they influence the types of actions that can be taken in each deforested area.

• The typeface sizes used for titles, captions, and labels have to be larger than those used for maps printed on 8.5” × 11.0” sheets. The type styles selected should be the same for all maps.

• Select symbols that are widely recognized, and make sure they are all explained in a legend.

• Cultural features (roads, urban centers, etc.) may be added for locational reference.

• Include a statement about the accuracy of each map, since these maps may be used in very high-level decisions.


Once this information is in hand, you can actually begin sketching the look of the map on a sheet of paper. It is helpful for you to know how you want the map to look before starting the ERDAS IMAGINE Map Composer. Doing so ensures that all of the necessary data layers are available, and makes the composition phase go quickly.

See the tour guide about Map Composer in the ERDAS IMAGINE Tour Guides for step-by-step instructions on creating a map. Refer to the On-Line Help for details about how Map Composer works.

Map Accuracy

Maps are often used to influence legislation, promote a cause, or enlighten a particular group before decisions are made. In these cases especially, map accuracy is of the utmost importance. There are many factors that influence map accuracy: the projection used, scale, base data, generalization, etc. The analyst/cartographer must be aware of these factors before map production begins. The accuracy of the map, in large part, determines its usefulness.

It is usually up to individual organizations to perform accuracy assessment and decide how those findings are reflected in the products they produce. However, several agencies have established guidelines for map makers.

US National Map Accuracy Standard

The United States Bureau of the Budget has developed the US National Map Accuracy Standard in an effort to standardize accuracy reporting on maps. These guidelines are summarized below (Fisher, 1991), and a short worked example of the horizontal tolerance follows the list:

• On scales smaller than 1:20,000, not more than 10 percent of points tested should be more than 1/50 inch in horizontal error, where points refer only to points that can be well-defined on the ground.

• On maps with scales larger than 1:20,000, the corresponding error term is 1/30 inch.

• At no more than 10 percent of the elevations tested can contours be in error by more than one half of the contour interval.

• Accuracy should be tested by comparison of actual map data with survey data of higher accuracy (not necessarily with ground truth).

• If maps have been tested and do meet these standards, a statement should be made to that effect in the legend.

• Maps that have been tested but fail to meet the requirements should omit all mention of the standards on the legend.
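The horizontal tolerance above is easy to work out for any scale. A minimal Python sketch of the two error terms (a hypothetical helper, with the 1:20,000 boundary assigned to the 1/50 inch class):

    def nmas_horizontal_tolerance_ft(scale_denominator):
        """Allowable ground error for well-defined points: 1/30 inch on
        the map at scales larger than 1:20,000, 1/50 inch otherwise."""
        map_error_in = 1.0 / 30 if scale_denominator < 20000 else 1.0 / 50
        return map_error_in * scale_denominator / 12.0  # inches to feet

    print(nmas_horizontal_tolerance_ft(24000))            # 40.0 ft
    print(round(nmas_horizontal_tolerance_ft(12000), 1))  # 33.3 ft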


USGS Land Use and Land Cover Map Guidelines

The USGS has set standards of their own for land use and land cover maps (Fisher, 1991):

• The minimum level of accuracy in identifying land use and land cover categories is 85%.

• The several categories shown should have about the same accuracy.

• Accuracy should be maintained between interpreters and times of sensing.

USDA SCS Soils Maps Guidelines

The United States Department of Agriculture (USDA) has set standards for Soil Conservation Service (SCS) soils maps (Fisher, 1991):

• Up to 25% of the pedons may be of other soil types than those named if they do not present a major hindrance to land management.

• Up to only 10% of pedons may be of other soil types than those named if they do present a major hindrance to land management.

• No single included soil type may occupy more than 10% of the area of the map unit.

Digitized Hardcopy Maps

Another method of expanding the database is by digitizing existing hardcopy maps. Although this may seem like an easy way to gather more information, care must be taken in pursuing this avenue if it is necessary to maintain a particular level of accuracy. If the hardcopy maps that are digitized are outdated, or were not produced using the same accuracy standards that are currently in use, the digitized map may negatively influence the overall accuracy of the database.


Rectification

Introduction

Raw, remotely sensed image data gathered by a satellite or aircraft are representations of the irregular surface of the Earth. Even images of seemingly flat areas are distorted by both the curvature of the Earth and the sensor being used. This chapter covers the processes of geometrically correcting an image so that it can be represented on a planar surface, conform to other images, and have the integrity of a map.

A map projection system is any system designed to represent the surface of a sphere or spheroid (such as the Earth) on a plane. There are a number of different map projection methods. Since flattening a sphere to a plane causes distortions to the surface, each map projection system compromises accuracy between certain properties, such as conservation of distance, angle, or area. For example, in equal area map projections, a circle of a specified diameter drawn at any location on the map represents the same total area. This is useful for comparing land use area, density, and many other applications. However, to maintain equal area, the shapes, angles, and scale in parts of the map may be distorted (Jensen, 1996).

There are a number of map coordinate systems for determining location on an image. These coordinate systems conform to a grid, and are expressed as X,Y (column, row) pairs of numbers. Each map projection system is associated with a map coordinate system.

Rectification is the process of transforming the data from one grid system into another grid system using a geometric transformation. While polynomial transformation and triangle-based methods are described in this chapter, discussion about various rectification techniques can be found in Yang (Yang, 1997). Since the pixels of the new grid may not align with the pixels of the original grid, the pixels must be resampled. Resampling is the process of interpolating data values for the pixels on the new grid from the values of the source pixels.
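As a concrete illustration of the grid-to-grid idea, the sketch below applies a first-order (affine) polynomial transformation from file coordinates to map coordinates, plus a nearest neighbor lookup, the simplest of the resampling methods. The coefficients are invented for illustration; real ones are computed from ground control points.

    def affine(col, row, coeffs):
        """First-order polynomial transform from file (col, row) to map
        (x, y): x = a0 + a1*col + a2*row, y = b0 + b1*col + b2*row."""
        a0, a1, a2, b0, b1, b2 = coeffs
        return a0 + a1 * col + a2 * row, b0 + b1 * col + b2 * row

    def nearest_neighbor(image, col_f, row_f, fill=0):
        """Resample by taking the value of the closest source pixel."""
        col, row = int(round(col_f)), int(round(row_f))
        if 0 <= row < len(image) and 0 <= col < len(image[0]):
            return image[row][col]
        return fill  # areas outside the source grid get a fill value

    # Hypothetical coefficients: 30 m pixels, north-up, origin at upper left
    coeffs = (440000.0, 30.0, 0.0, 3751000.0, 0.0, -30.0)
    print(affine(10, 10, coeffs))  # (440300.0, 3750700.0)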

Registration

In many cases, images of one area that are collected from different sources must be used together. To be able to compare separate images pixel by pixel, the pixel grids of each image must conform to the other images in the data base. The tools for rectifying image data are used to transform disparate images to the same coordinate system.

Registration is the process of making an image conform to another image. A map coordinate system is not necessarily involved. For example, if image A is not rectified and it is being used with image B, then image B must be registered to image A so that they conform to each other. In this example, image A is not rectified to a particular map projection, so there is no need to rectify image B to a map projection.


Georeferencing

Georeferencing refers to the process of assigning map coordinates to image data. The image data may already be projected onto the desired plane, but not yet referenced to the proper coordinate system. Rectification, by definition, involves georeferencing, since all map projection systems are associated with map coordinates. Image-to-image registration involves georeferencing only if the reference image is already georeferenced. Georeferencing, by itself, involves changing only the map coordinate information in the image file. The grid of the image does not change.

Geocoded data are images that have been rectified to a particular map projection and pixel size, and usually have had radiometric corrections applied. It is possible to purchase image data that is already geocoded. Geocoded data should be rectified only if they must conform to a different projection system or be registered to other rectified data.

Latitude/Longitude

Lat/Lon is a spherical coordinate system that is not associated with a map projection. Lat/Lon expresses locations in terms of a spheroid, not a plane. Therefore, an image is not usually rectified to Lat/Lon, although it is possible to convert images to Lat/Lon, and some tips for doing so are included in this chapter.

You can view map projection information for a particular file using the Image Information utility. Image Information allows you to modify map information that is incorrect. However, you cannot rectify data using Image Information. You must use the Rectification tools described in this chapter.

The properties of map projections and of particular map projection systems are discussed in "Cartography" on page 211 and "Map Projections" on page 297.

Orthorectification

Orthorectification is a form of rectification that corrects for terrain displacement and can be used if there is a DEM of the study area. It is based on collinearity equations, which can be derived by using 3D GCPs. In relatively flat areas, orthorectification is not necessary, but in mountainous areas (or on aerial photographs of buildings), where a high degree of accuracy is required, orthorectification is recommended.

See "Photogrammetric Concepts" on page 595 for more information on orthocorrection.


When to Rectify

Rectification is necessary in cases where the pixel grid of the image must be changed to fit a map projection system or a reference image. There are several reasons for rectifying image data:

• comparing pixels scene to scene in applications, such as change detection or thermal inertia mapping (day and night comparison)

• developing GIS data bases for GIS modeling

• identifying training samples according to map coordinates prior to classification

• creating accurate scaled photomaps

• overlaying an image with vector data, such as ArcInfo

• comparing images that are originally at different scales

• extracting accurate distance and area measurements

• mosaicking images

• performing any other analyses requiring precise geographic locations

Before rectifying the data, you must determine the appropriate coordinate system for the data base. To select the optimum map projection and coordinate system, the primary use for the data base must be considered.

If you are doing a government project, the projection may be predetermined. A commonly used projection in the United States government is State Plane. Use an equal area projection for thematic or distribution maps, and conformal or equal area projections for presentation maps. Before selecting a map projection, consider the following:

• How large or small an area is to be mapped? Different projections are intended for different-sized areas.

• Where on the globe is the study area? Polar regions and equatorial regions require different projections for maximum accuracy.

• What is the extent of the study area? Circular, north-south, east-west, and oblique areas may all require different projection systems (Environmental Systems Research Institute, 1992).


When to Georeference Only

Rectification is not necessary if there is no distortion in the image. For example, if an image file is produced by scanning or digitizing a paper map that is in the desired projection system, then that image is already planar and does not require rectification unless there is some skew or rotation of the image. Scanning and digitizing produce images that are planar, but do not contain any map coordinate information. These images need only to be georeferenced, which is a much simpler process than rectification. In many cases, the image header can simply be updated with new map coordinate information. This involves redefining:

• the map coordinate of the upper left corner of the image

• the cell size (the area represented by each pixel)

This information is usually the same for each layer of an image file, although it could be different. For example, the cell size of band 6 of Landsat TM data is different than the cell size of the other bands.

Use the Image Information utility to modify image file header information that is incorrect.

Disadvantages of Rectification

During rectification, the data file values of rectified pixels must be resampled to fit into a new grid of pixel rows and columns. Although some of the algorithms for calculating these values are highly reliable, some spectral integrity of the data can be lost during rectification. If map coordinates or map units are not needed in the application, then it may be wiser not to rectify the image. An unrectified image is more spectrally correct than a rectified image.

Classification

Some analysts recommend classification before rectification, since the classification is then based on the original data values. Another benefit is that a thematic file has only one band to rectify instead of the multiple bands of a continuous file. On the other hand, it may be beneficial to rectify the data first, especially when using GPS data for the GCPs. Since these data are very accurate, the classification may be more accurate if the new coordinates help to locate better training samples.

Thematic Files

Nearest neighbor is the only appropriate resampling method for thematic files, which may be a drawback in some applications. The available resampling methods are discussed in detail later in this chapter.


Rectification Steps

NOTE: Registration and rectification involve similar sets of procedures. Throughout this documentation, many references to rectification also apply to image-to-image registration.

Usually, rectification is the conversion of data file coordinates to some other grid and coordinate system, called a reference system. Rectifying or registering image data on disk involves the following general steps, regardless of the application:

1. Locate GCPs.

2. Compute and test a transformation.

3. Create an output image file with the new coordinate information in the header. The pixels must be resampled to conform to the new grid.

Images can be rectified on the display (in a Viewer) or on the disk. Display rectification is temporary, but disk rectification is permanent, because a new file is created. Disk rectification involves:

• rearranging the pixels of the image onto a new grid, which conforms to a plane in the new map projection and coordinate system

• inserting new information to the header of the file, such as the upper left corner map coordinates and the area represented by each pixel

Ground Control Points

GCPs are specific pixels in an image for which the output map coordinates (or other output coordinates) are known. GCPs consist of two X,Y pairs of coordinates:

• source coordinates—usually data file coordinates in the image being rectified

• reference coordinates—the coordinates of the map or reference image to which the source image is being registered

The term map coordinates is sometimes used loosely to apply to reference coordinates and rectified coordinates. These coordinates are not limited to map coordinates. For example, in image-to-image registration, map coordinates are not necessary.

GCPs in ERDAS IMAGINE

Any ERDAS IMAGINE image can have one GCP set associated with it. The GCP set is stored in the image file along with the raster layers. If a GCP set exists for the top layer that is displayed in the Viewer, then those GCPs can be displayed when the Multipoint Geometric Correction tool (IMAGINE ribbon Workspace) or GCP Tool (Classic) is opened.


In the CellArray of GCP data that displays in the Multipoint Geometric Correction tool or GCP Tool, one column shows the point ID of each GCP. The point ID is a name given to GCPs in separate files that represent the same geographic location. Such GCPs are called corresponding GCPs.

A default point ID string is provided (such as GCP #1), but you can enter your own unique ID strings to set up corresponding GCPs as needed. Even though only one set of GCPs is associated with an image file, one GCP set can include GCPs for a number of rectifications by changing the point IDs for different groups of corresponding GCPs.

Entering GCPs

Accurate GCPs are essential for an accurate rectification. From the GCPs, the rectified coordinates for all other points in the image are extrapolated. Select many GCPs throughout the scene. The more dispersed the GCPs are, the more reliable the rectification is. GCPs for large-scale imagery might include the intersection of two roads, airport runways, utility corridors, towers, or buildings. For small-scale imagery, larger features such as urban areas or geologic features may be used. Landmarks that can vary (for example, the edges of lakes or other water bodies, vegetation, and so forth) should not be used.

The source and reference coordinates of the GCPs can be entered in the following ways:

• They may be known a priori, and entered at the keyboard.

• Use the mouse to select a pixel from an image in the Viewer. With both the source and reference Viewers open, enter source coordinates and reference coordinates for image-to-image registration. The Multipoint Geometric Correction tool contains both the source and reference Viewers within the tool.

• Use an existing Ground Control Coordinates file (.gcc file extension). This file contains the X and Y coordinates along with the GCP point ID, saved as an external file.

• Use a digitizing tablet to register an image to a hardcopy map.

Information on the use and setup of a digitizing tablet is discussed in "Vector Data" on page 41.


Digitizing Tablet Option

If GCPs are digitized from a hardcopy map and a digitizing tablet, accurate base maps must be collected. You should try to match the resolution of the imagery with the scale and projection of the source map. For example, 1:24,000 scale USGS quadrangles make good base maps for rectifying Landsat TM and SPOT imagery. Avoid using maps over 1:250,000, if possible. Coarser maps (that is, 1:250,000) are more suitable for imagery of lower resolution (that is, AVHRR) and finer base maps (that is, 1:24,000) are more suitable for imagery of finer resolution (that is, Landsat and SPOT).

Mouse Option

When entering GCPs with the mouse, you should try to match coarser resolution imagery to finer resolution imagery (that is, Landsat TM to SPOT), and avoid stretching resolution spans greater than a cubic convolution radius (a 4 × 4 area). In other words, you should not try to match Landsat MSS to SPOT or Landsat TM to an aerial photograph.

How GCPs are Stored

GCPs entered with the mouse are stored in the image file, and those entered at the keyboard or digitized using a digitizing tablet are stored in a separate file with the extension .gcc. GCPs entered with the mouse can also be saved as a separate *.gcc file.

GCP Prediction and Matching

Automated GCP prediction enables you to pick a GCP in either coordinate system and automatically locate that point in the other coordinate system based on the current transformation parameters.

Automated GCP matching is a step beyond GCP prediction. For image-to-image rectification, a GCP selected in one image is precisely matched to its counterpart in the other image using the spectral characteristics of the data and the geometric transformation. GCP matching enables you to fine-tune a rectification for highly accurate results.

Both of these methods require an existing transformation, which consists of a set of coefficients used to convert the coordinates from one system to another.


GCP Prediction

GCP prediction is a useful technique to help determine if enough GCPs have been gathered. After selecting several GCPs, select a point in either the source or the destination image, then use GCP prediction to locate the corresponding GCP on the other image (map). This point is determined based on the current transformation derived from existing GCPs. Examine the automatically generated point and see how accurate it is. If it is within an acceptable range of accuracy, then there may be enough GCPs to perform an accurate rectification (depending upon how evenly dispersed the GCPs are). If the automatically generated point is not accurate, then more GCPs should be gathered before rectifying the image.

GCP prediction can also be used when applying an existing transformation to another image in a data set. This saves time in selecting another set of GCPs by hand. Once the GCPs are automatically selected, those that do not meet an acceptable level of error can be edited.

GCP Matching

In GCP matching, you can select which layers from the source and destination images to use. Since the matching process is based on the reflectance values, select layers that have similar spectral wavelengths, such as two visible bands or two infrared bands. You can perform histogram matching to ensure that there is no offset between the images. You can also select the radius from the predicted GCP within which the matching operation searches for a spectrally similar pixel. The search window can be any odd size between 5 × 5 and 21 × 21.

Histogram matching is discussed in "Enhancement" on page 455.

A correlation threshold is used to accept or discard points. The correlation ranges from -1.000 to +1.000. The threshold is an absolute value threshold ranging from 0.000 to 1.000. A value of 0.000 indicates a bad match and a value of 1.000 indicates an exact match. Values above 0.8000 or 0.9000 are recommended. If a match cannot be made because the absolute value of the correlation is less than the threshold, you have the option to discard points.
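The exact matcher is internal to the software; as a rough sketch of the correlation test described here, a normalized cross-correlation yields values in the -1.000 to +1.000 range, and its absolute value can be compared to the threshold:

```python
import numpy as np

def correlation(window_a, window_b):
    """Normalized cross-correlation of two equal-sized windows (-1 to +1)."""
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    denom = np.sqrt((a**2).sum() * (b**2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def match_accepted(window_a, window_b, threshold=0.8):
    """Accept the match only if |correlation| meets the threshold."""
    return abs(correlation(window_a, window_b)) >= threshold
```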

Polynomial Transformation

Polynomial equations are used to convert source file coordinates to rectified map coordinates. Depending upon the distortion in the imagery, the number of GCPs used, and their locations relative to one another, complex polynomial equations may be required to express the needed transformation. The degree of complexity of the polynomial is expressed as the order of the polynomial. The order is simply the highest exponent used in the polynomial.


The order of transformation is the order of the polynomial used in the transformation. ERDAS IMAGINE allows 1st- through nth-order transformations. Usually, 1st-order or 2nd-order transformations are used.

You can specify the order of the transformation you want to use in the Transform Editor.

A discussion of polynomials and order is included in "Math Topics" on page 697.

Transformation Matrix

A transformation matrix is computed from the GCPs. The matrix consists of coefficients that are used in polynomial equations to convert the coordinates. The size of the matrix depends upon the order of transformation. The goal in calculating the coefficients of the transformation matrix is to derive the polynomial equations for which there is the least possible amount of error when they are used to transform the reference coordinates of the GCPs into the source coordinates. It is not always possible to derive coefficients that produce no error. For example, in Figure 70, GCPs are plotted on a graph and compared to the curve that is expressed by a polynomial.

Figure 70: Polynomial Curve vs. GCPs (reference X coordinate plotted against source X coordinate)

Every GCP influences the coefficients, even if there is not a perfect fit of each GCP to the polynomial that the coefficients represent. The distance between the GCP reference coordinate and the curve is called RMS error, which is discussed later in this chapter. The least squares regression method is used to calculate the transformation matrix from the GCPs. This common method is discussed in statistics textbooks.

Linear Transformations

A 1st-order transformation is a linear transformation. It can change:

• location in X and/or Y


• scale in X and/or Y

• skew in X and/or Y

• rotation

First-order transformations can be used to project raw imagery to a planar map projection, convert a planar map projection to another planar map projection, and when rectifying relatively small image areas. You can perform simple linear transformations to an image displayed in a Viewer or to the transformation matrix itself. Linear transformations may be required before collecting GCPs on the displayed image. You can reorient skewed Landsat TM data, rotate scanned quad sheets according to the angle of declination stated in the legend, and rotate descending data so that north is up.

A 1st-order transformation can also be used for data that are already projected onto a plane. For example, SPOT and Landsat Level 1B data are already transformed to a plane, but may not be rectified to the desired map projection. When doing this type of rectification, it is not advisable to increase the order of transformation if at first a high RMS error occurs. Examine other factors first, such as the GCP source and distribution, and look for systematic errors.

ERDAS IMAGINE provides the following options for 1st-order transformations:

• scale

• offset

• rotate

• reflect

Scale

Scale is the same as the zoom option in the Viewer, except that you can specify different scaling factors for X and Y.

If you are scaling an image in the Viewer, the zoom option undoes any changes to the scale that you make, and vice versa.

Offset

Offset moves the image by a user-specified number of pixels in the X and Y directions.

Rotate

For rotation, you can specify any positive or negative number of degrees for clockwise and counterclockwise rotation. Rotation occurs around the center pixel of the image.


Reflection

Reflection options enable you to perform the following operations:

• left to right reflection

• top to bottom reflection

• left to right and top to bottom reflection (equal to a 180° rotation)

Linear adjustments are available from the Viewer or from the Transform Editor. You can perform linear transformations in the Viewer and then load that transformation to the Transform Editor, or you can perform the linear transformations directly on the transformation matrix.

Figure 71 illustrates how the data are changed in linear transformations.

Figure 71: Linear Transformations (original image; change of scale in X; change of scale in Y; change of skew in X; change of skew in Y; rotation)

The transformation matrix for a 1st-order transformation consists of six coefficients—three for each coordinate (X and Y).

a0 a1 a2

b0 b1 b2

Coefficients are used in a 1st-order polynomial as follows:

$x_o = a_0 + a_1x + a_2y$

$y_o = b_0 + b_1x + b_2y$


Where:
x and y are source coordinates (input)
xo and yo are rectified coordinates (output)
The coefficients of the transformation matrix are as above.

The position of the coefficients in the matrix and the assignment of the coefficients in the polynomial is an ERDAS IMAGINE convention. Other representations of a 1st-order transformation matrix may take a different form.
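Outside ERDAS IMAGINE, the least squares fit named above is easy to sketch. The following Python example (NumPy assumed; all GCP coordinates are hypothetical) fits the six coefficients of a 1st-order transformation and applies them in the polynomial form just described:

```python
import numpy as np

def fit_first_order(src, ref):
    """Fit xo = a0 + a1*x + a2*y and yo = b0 + b1*x + b2*y by least squares."""
    x, y = src[:, 0], src[:, 1]
    A = np.column_stack([np.ones_like(x), x, y])       # design matrix [1, x, y]
    a = np.linalg.lstsq(A, ref[:, 0], rcond=None)[0]   # a0, a1, a2
    b = np.linalg.lstsq(A, ref[:, 1], rcond=None)[0]   # b0, b1, b2
    return np.vstack([a, b])                           # 2 x 3 coefficient matrix

def apply_first_order(coeffs, src):
    """Apply the fitted 1st-order polynomial to source coordinates."""
    x, y = src[:, 0], src[:, 1]
    A = np.column_stack([np.ones_like(x), x, y])
    return A @ coeffs.T

# Hypothetical GCPs: source (file) coordinates and reference (map) coordinates.
src = np.array([[10.0, 10.0], [500.0, 40.0], [30.0, 480.0], [450.0, 470.0]])
ref = np.array([[1000.0, 2000.0], [1980.0, 2060.0], [1040.0, 2940.0], [1880.0, 2920.0]])
coeffs = fit_first_order(src, ref)
print(apply_first_order(coeffs, src) - ref)   # residuals, in reference units
```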

Nonlinear Transformations

Transformations of the 2nd-order or higher are nonlinear transformations. These transformations can correct nonlinear distortions. The process of correcting nonlinear distortions is also known as rubber sheeting. Figure 72 illustrates the effects of some nonlinear transformations.

Figure 72: Nonlinear Transformations (original image and some possible outputs)

Second-order transformations can be used to convert Lat/Lon data to a planar projection, for data covering a large area (to account for the Earth's curvature), and with distorted data (for example, due to camera lens distortion). Third-order transformations are used with distorted aerial photographs, on scans of warped maps, and with radar imagery. Fourth-order transformations can be used on very distorted aerial photographs.

The transformation matrix for a transformation of order t contains this number of coefficients:

$2 \sum_{i=1}^{t+1} i$

It is multiplied by two for the two sets of coefficients—one set for X, one for Y. An easier way to arrive at the same number is:

$(t+1) \times (t+2)$

Clearly, the size of the transformation matrix increases with the order of the transformation.

Higher Order Polynomials

The polynomial equations for a t-order transformation take this form:

$x_o = \sum_{i=0}^{t} \sum_{j=0}^{i} a_k \cdot x^{i-j} \cdot y^{j}$

$y_o = \sum_{i=0}^{t} \sum_{j=0}^{i} b_k \cdot x^{i-j} \cdot y^{j}$

Where:
t is the order of the polynomial
ak and bk are coefficients
the subscript k is determined by:

$k = \frac{i \cdot (i+1)}{2} + j$

An example of 3rd-order transformation equations for X and Y, using numbers, is:

$x_o = 5 + 4x - 6y + 10x^2 - 5xy + y^2 + 3x^3 + 7x^2y - 11xy^2 + 4y^3$

$y_o = 13 + 12x + 4y + x^2 - 21xy + 11y^2 - x^3 + 2x^2y + 5xy^2 + 12y^3$

These equations use a total of 20 coefficients, or $(3+1) \times (3+2)$.


Effects of Order

The computation and output of a higher-order polynomial equation are more complex than that of a lower-order polynomial equation. Therefore, higher-order polynomials are used to perform more complicated image rectifications. To understand the effects of different orders of transformation in image rectification, it is helpful to see the output of various orders of polynomials.

The following example uses only one coordinate (X), instead of two (X,Y), which are used in the polynomials for rectification. This enables you to draw two-dimensional graphs that illustrate the way that higher orders of transformation affect the output image.

NOTE: Because only the X coordinate is used in these examples, the number of GCPs used is less than the number required to actually perform the different orders of transformation.

Coefficients like those presented in this example would generally be calculated by the least squares regression method. Suppose GCPs are entered with these X coordinates:

Source X Coordinate (input)    Reference X Coordinate (output)
1                              17
2                              9
3                              1

These GCPs allow a 1st-order transformation of the X coordinates, which is satisfied by this equation (the coefficients are in parentheses):

$x_r = (25) + (-8)x_i$

Where:
xr = the reference X coordinate
xi = the source X coordinate

This equation takes on the same format as the equation of a line (y = mx + b). In mathematical terms, a 1st-order polynomial is linear. Therefore, a 1st-order transformation is also known as a linear transformation. This equation is graphed in Figure 73.


Figure 73: Transformation Example—1st-Order

However, what if the second GCP were changed as follows?

Source X Coordinate (input)    Reference X Coordinate (output)
1                              17
2                              7
3                              1

These points are plotted against each other in Figure 74.

Figure 74: Transformation Example—2nd GCP Changed

A line cannot connect these points, which illustrates that they cannot be expressed by a 1st-order polynomial, like the one above. In this case, a 2nd-order polynomial equation expresses these points:

$x_r = (31) + (-16)x_i + (2)x_i^2$

Polynomials of the 2nd-order or higher are nonlinear. The graph of this curve is drawn in Figure 75.


Figure 75: Transformation Example—2nd-Order

What if one more GCP were added to the list?

Source X Coordinate (input)    Reference X Coordinate (output)
1                              17
2                              7
3                              1
4                              5

Figure 76: Transformation Example—4th GCP Added

As illustrated in Figure 76, this fourth GCP does not fit on the curve of the 2nd-order polynomial equation. To ensure that all of the GCPs fit, the order of the transformation could be increased to 3rd-order. The equation and graph in Figure 77 could then result.


Figure 77: Transformation Example—3rd-Order

Figure 77 illustrates a 3rd-order transformation, fitted by the equation:

$x_r = (25) + (-5)x_i + (-4)x_i^2 + (1)x_i^3$

However, this equation may be unnecessarily complex. Performing a coordinate transformation with this equation may cause unwanted distortions in the output image for the sake of a perfect fit for all the GCPs. In this example, a 3rd-order transformation probably would be too high, because the output pixels would be arranged in a different order than the input pixels, in the X direction:

$x_o(1) > x_o(2) > x_o(4) > x_o(3)$

$17 > 7 > 5 > 1$

Source X Coordinate (input)    Reference X Coordinate (output)
1                              xo(1) = 17
2                              xo(2) = 7
3                              xo(3) = 1
4                              xo(4) = 5

Figure 78: Transformation Example—Effect of a 3rd-Order Transformation (input image X coordinates 1 through 4 are mapped to reordered output image X coordinates)

In this case, a higher order of transformation would probably not produce the desired results.


Minimum Number of GCPs

Higher orders of transformation can be used to correct more complicated types of distortion. However, to use a higher order of transformation, more GCPs are needed. For instance, three points define a plane. Therefore, to perform a 1st-order transformation, which is expressed by the equation of a plane, at least three GCPs are needed. Similarly, the equation used in a 2nd-order transformation is the equation of a paraboloid. Six points are required to define a paraboloid. Therefore, at least six GCPs are required to perform a 2nd-order transformation. The minimum number of points required to perform a transformation of order t equals:

$\frac{(t+1)(t+2)}{2}$

Use more than the minimum number of GCPs whenever possible. Although it is possible to get a perfect fit, it is rare, no matter how many GCPs are used. For 1st- through 10th-order transformations, the minimum number of GCPs required to perform a transformation is listed in the following table:

Order of Transformation    Minimum GCPs Required
1                          3
2                          6
3                          10
4                          15
5                          21
6                          28
7                          36
8                          45
9                          55
10                         66

For the best rectification results, you should always use more than the minimum number of GCPs, and they should be well-distributed.
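The rule above is easy to verify; a small Python check reproduces the table:

```python
def min_gcps(t):
    """Minimum number of GCPs for a transformation of order t: (t+1)(t+2)/2."""
    return (t + 1) * (t + 2) // 2

print([min_gcps(t) for t in range(1, 11)])
# [3, 6, 10, 15, 21, 28, 36, 45, 55, 66]
```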


Rubber Sheeting

Triangle-Based Finite Element Analysis

Finite element analysis is a powerful tool for solving complicated computational problems by breaking them into small, simpler pieces. It has been widely used as a local interpolation technique in geographic applications. For image rectification, the known control points can be triangulated into many triangles. Each triangle has three control points as its vertices. Then, the polynomial transformation can be used to establish mathematical relationships between source and destination systems for each triangle. Because the transformation passes exactly through each control point and is not uniform across the image, finite element analysis is also called rubber sheeting. It can also be called triangle-based rectification, because the transformation and resampling for image rectification are performed on a triangle-by-triangle basis.

This triangle-based technique should be used when other rectification methods, such as polynomial transformation and photogrammetric modeling, cannot produce acceptable results.

Triangulation

To perform the triangle-based rectification, it is necessary to triangulate the control points into a mesh of triangles. Watson (Watson, 1992) lists four kinds of triangulation: arbitrary, optimal, Greedy, and Delaunay. Of the four, the Delaunay triangulation is the most widely used, and it is adopted here because of the smaller angle variations of the resulting triangles.

The Delaunay triangulation can be constructed by the empty circumcircle criterion: the circumcircle formed from the three points of any triangle does not contain any other control point. The triangles defined this way are the most equiangular possible. Figure 79 shows an example of the triangle network formed by 13 control points.
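For illustration only (this is not ERDAS IMAGINE's implementation), SciPy's Delaunay triangulation is built on the same empty circumcircle criterion; the control points below are hypothetical:

```python
import numpy as np
from scipy.spatial import Delaunay  # assumes SciPy is installed

# Hypothetical control points in source (file) coordinates.
points = np.array([[0, 0], [4, 1], [8, 0], [1, 4], [5, 5],
                   [9, 4], [0, 8], [4, 9], [8, 8]], dtype=float)

tri = Delaunay(points)                 # Delaunay (empty circumcircle) triangulation
print(tri.simplices)                   # each row: vertex indices of one triangle
print(tri.find_simplex([3.0, 3.0]))    # index of the triangle containing a point
```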

Figure 79: Triangle Network (13 control points, p0 through p12)


Triangle-based rectification

Once the triangle mesh has been generated and the spatial order of the control points is available, the geometric rectification can be done on a triangle-by-triangle basis. This triangle-based method is appealing because it breaks the entire region into smaller subsets. If the geometric problem of the entire region is very complicated, the geometry of each subset can be much simpler and modeled through simple transformation.

For each triangle, the polynomials can be used as the general transformation form between source and destination systems.

Linear transformation

The easiest and fastest is the linear transformation with the 1st-order polynomials:

$x_o = a_0 + a_1x + a_2y$

$y_o = b_0 + b_1x + b_2y$

There is no need for extra information because there are three known conditions in each triangle and three unknown coefficients for each polynomial.

Nonlinear transformation

Even though the linear transformation is easy and fast, it has one disadvantage: the transitions between triangles are not always smooth. This phenomenon is obvious when shaded relief or contour lines are derived from a DEM that was generated by linear rubber sheeting. It is caused by the slope change of the control data at the triangle edges and vertices. In order to distribute the slope change smoothly across triangles, a nonlinear transformation with polynomial order larger than one is used, taking the gradient information into account.

The 5th-order, or quintic, polynomial transformation is chosen here as the nonlinear rubber sheeting technique. It is a smooth function; the transformation function and its first-order partial derivatives are continuous, and it is not difficult to construct (Akima, 1978). The formula is as follows:

$x_o = \sum_{i=0}^{5} \sum_{j=0}^{i} a_k \cdot x^{i-j} \cdot y^{j}$

$y_o = \sum_{i=0}^{5} \sum_{j=0}^{i} b_k \cdot x^{i-j} \cdot y^{j}$

It has 21 coefficients for each polynomial to be determined. To solve these unknowns, 21 conditions should be available. For each vertex of the triangle, one point value is given, and two first-order and three second-order partial derivatives can easily be derived by establishing a 2nd-order polynomial using vertices in the neighborhood of the vertex. That yields 18 conditions in total. Three more conditions can be obtained by assuming that the normal partial derivative on each edge of the triangle is a cubic polynomial, which means that the sum of the polynomial terms beyond the third order in the normal partial derivative is zero.
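For the linear case above, each triangle's three vertices supply exactly the three conditions needed per polynomial, so the coefficients come from a direct solve rather than a least squares fit. A NumPy sketch, with hypothetical vertex coordinates:

```python
import numpy as np

def triangle_affine(src_tri, dst_tri):
    """Solve the 1st-order polynomials for one triangle exactly.

    src_tri, dst_tri: (3, 2) arrays of source and destination vertices.
    Returns (a0, a1, a2) and (b0, b1, b2).
    """
    x, y = src_tri[:, 0], src_tri[:, 1]
    A = np.column_stack([np.ones(3), x, y])   # three conditions, three unknowns
    a = np.linalg.solve(A, dst_tri[:, 0])
    b = np.linalg.solve(A, dst_tri[:, 1])
    return a, b

# Hypothetical triangle: source vertices and their destination positions.
src_tri = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
dst_tri = np.array([[100.0, 200.0], [111.0, 201.0], [99.0, 212.0]])
a, b = triangle_affine(src_tri, dst_tri)
```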


Check Point Analysis

It should be emphasized that independent check point analysis is critical for determining the accuracy of rubber sheeting modeling. For an exact modeling method like rubber sheeting, the ground control points, which are used in the modeling process, retain almost no geometric residuals. To evaluate the geometric transformation between source and destination coordinate systems, accuracy assessment using independent check points is recommended.

RMS Error

RMS error is the distance between the input (source) location of a GCP and the retransformed location for the same GCP. In other words, it is the difference between the desired output coordinate for a GCP and the actual output coordinate for the same point, when the point is transformed with the geometric transformation. RMS error is calculated with a distance equation:

$\mathrm{RMS\;error} = \sqrt{(x_r - x_i)^2 + (y_r - y_i)^2}$

Where:
xi and yi are the input source coordinates
xr and yr are the retransformed coordinates

RMS error is expressed as a distance in the source coordinate system. If data file coordinates are the source coordinates, then the RMS error is a distance in pixel widths. For example, an RMS error of 2 means that the reference pixel is 2 pixels away from the retransformed pixel.

Residuals and RMS Error Per GCP

The GCP Tool contains columns for the X and Y residuals. Residuals are the distances between the source and retransformed coordinates in one direction. They are shown for each GCP. The X residual is the distance between the source X coordinate and the retransformed X coordinate. The Y residual is the distance between the source Y coordinate and the retransformed Y coordinate.


If the GCPs are consistently off in either the X or the Y direction, more points should be added in that direction. This is a common problem in off-nadir data.

RMS Error Per GCP

The RMS error of each point is reported to help you evaluate the GCPs. This is calculated with a distance formula:

$R_i = \sqrt{XR_i^2 + YR_i^2}$

Where:
Ri = the RMS error for GCPi
XRi = the X residual for GCPi
YRi = the Y residual for GCPi

Figure 80 illustrates the relationship between the residuals and the RMS error per point.

Figure 80: Residuals and RMS Error Per Point (the X and Y residuals between the source GCP and the retransformed GCP combine to form the RMS error)

Total RMS Error

From the residuals, the following calculations are made to determine the total RMS error, the X RMS error, and the Y RMS error:


$R_x = \sqrt{\frac{1}{n} \sum_{i=1}^{n} XR_i^2}$

$R_y = \sqrt{\frac{1}{n} \sum_{i=1}^{n} YR_i^2}$

$T = \sqrt{R_x^2 + R_y^2} \quad \text{or} \quad T = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( XR_i^2 + YR_i^2 \right)}$


Where:
Rx = X RMS error
Ry = Y RMS error
T = total RMS error
n = the number of GCPs
i = GCP number
XRi = the X residual for GCPi
YRi = the Y residual for GCPi

Error Contribution by Point

A normalized value representing each point's RMS error in relation to the total RMS error is also reported. This value is listed in the Contribution column of the GCP Tool:

$E_i = \frac{R_i}{T}$

Where:
Ei = error contribution of GCPi
Ri = the RMS error for GCPi
T = total RMS error
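All of the error measures above translate directly to code. A minimal NumPy sketch (the array layouts are assumptions), given source coordinates and their retransformed counterparts:

```python
import numpy as np

def rms_report(src, retransformed):
    """Residuals, per-GCP RMS error, X/Y/total RMS error, and contributions.

    src, retransformed: (n, 2) arrays in source (pixel) units.
    """
    xres = retransformed[:, 0] - src[:, 0]      # X residuals
    yres = retransformed[:, 1] - src[:, 1]      # Y residuals
    per_gcp = np.sqrt(xres**2 + yres**2)        # Ri for each GCP
    rx = np.sqrt(np.mean(xres**2))              # X RMS error
    ry = np.sqrt(np.mean(yres**2))              # Y RMS error
    total = np.sqrt(rx**2 + ry**2)              # total RMS error T
    contribution = per_gcp / total              # Ei = Ri / T
    return per_gcp, rx, ry, total, contribution
```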

Tolerance of RMS Error

In most cases, it is advantageous to tolerate a certain amount of error rather than take a more complex transformation. The amount of RMS error that is tolerated can be thought of as a window around each source coordinate, inside which a retransformed coordinate is considered to be correct (that is, close enough to use). For example, if the RMS error tolerance is 2, then the retransformed pixel can be 2 pixels away from the source pixel and still be considered accurate.

Figure 81: RMS Error Tolerance (a 2-pixel RMS error tolerance forms a radius around the source pixel; retransformed coordinates within this range are considered correct)


Acceptable RMS error is determined by the end use of the data base, the type of data being used, and the accuracy of the GCPs and ancillary data being used. For example, GCPs acquired from GPS should have an accuracy of about 10 m, but GCPs from 1:24,000-scale maps should have an accuracy of about 20 m. It is important to remember that RMS error is reported in pixels. Therefore, if you are rectifying Landsat TM data and want the rectification to be accurate to within 30 meters, the RMS error should not exceed 1.00. Acceptable accuracy depends on the image area and the particular project.

Evaluating RMS Error

To determine the order of polynomial transformation, you can assess the relative distortion in going from image to map or map to map. One should start with a 1st-order transformation unless it is known that it does not work. It is possible to repeatedly compute transformation matrices until an acceptable RMS error is reached.

Most rectifications are either 1st-order or 2nd-order. The danger of using higher order rectifications is that the more complicated the equation for the transformation, the less regular and predictable the results are. To fit all of the GCPs, there may be very high distortion in the image.

After each computation of a transformation and RMS error, there are four options:

• Throw out the GCP with the highest RMS error, assuming that this GCP is the least accurate. Another transformation can then be computed from the remaining GCPs. A closer fit should be possible. However, if this is the only GCP in a particular region of the image, it may cause greater error to remove it.

• Tolerate a higher amount of RMS error.

• Increase the complexity of transformation, creating more complex geometric alterations in the image. A transformation can then be computed that can accommodate the GCPs with less error.

• Select only the points for which you have the most confidence.

Resampling Methods

The next step in the rectification/registration process is to create the output file. Since the grid of pixels in the source image rarely matches the grid for the reference image, the pixels are resampled so that new data file values for the output file can be calculated.


Figure 82: Resampling—1. The input image with source GCPs. 2. The output grid, with reference GCPs shown. 3. To compare the two grids, the input image is laid over the output grid, so that the GCPs of the two grids fit together. 4. Using a resampling method, the pixel values of the input image are assigned to pixels in the output grid.

The following resampling methods are supported in ERDAS IMAGINE:

• Nearest Neighbor on page 276—uses the value of the closest pixel to assign to the output pixel value.

• Bilinear Interpolation on page 277—uses the data file values of four pixels in a 2 × 2 window to calculate an output value with a bilinear function.

• Cubic Convolution on page 281—uses the data file values of sixteen pixels in a 4 × 4 window to calculate an output value with a cubic function.

• Bicubic Spline Interpolation on page 284—fits a cubic spline surface through the current block of points.

In all methods, the number of rows and columns of pixels in the output is calculated from the dimensions of the output map, which is determined by the geometric transformation and the cell size. The output corners (upper left and lower right) of the output file can be specified. The default values are calculated so that the entire source file is resampled to the destination file. If an image-to-image rectification is being performed, it may be beneficial to specify the output corners relative to the reference file system, so that the images are coregistered. In this case, the upper left X and upper left Y coordinates are 0,0 and not the defaults.


If the output units are pixels, then the origin of the image is the upper left corner. Otherwise, the origin is the lower left corner.

Rectifying to Lat/Lon

You can specify the nominal cell size if the output coordinate system is Lat/Lon. The output cell size for a geographic projection (that is, Lat/Lon) is always in angular units of decimal degrees. However, if you want the cell to be a specific size in meters, you can enter meters and calculate the equivalent size in decimal degrees. For example, if you want the output file cell size to be 30 × 30 meters, then the program would calculate what this size would be in decimal degrees and automatically update the output cell size. Since the transformation between angular (decimal degrees) and nominal (meters) measurements varies across the image, the transformation is based on the center of the output file.

Enter the nominal cell size in the Nominal Cell Size dialog.
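The software bases the conversion on the center of the output file. As a rough stand-in, the following Python sketch uses a spherical-Earth approximation (a simplification, not the exact formula the software applies) to turn a metric cell size into decimal degrees at a chosen center latitude:

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean spherical radius, an approximation

def nominal_degrees(cell_meters, center_lat_deg):
    """Approximate a metric cell size as (dlon, dlat) in decimal degrees."""
    meters_per_deg_lat = math.pi * EARTH_RADIUS_M / 180.0   # about 111,195 m
    dlat = cell_meters / meters_per_deg_lat
    # A degree of longitude shrinks with the cosine of latitude.
    dlon = cell_meters / (meters_per_deg_lat * math.cos(math.radians(center_lat_deg)))
    return dlon, dlat

print(nominal_degrees(30.0, 34.0))   # 30 m cells near latitude 34 degrees N
```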

Nearest Neighbor

To determine an output pixel's nearest neighbor, the rectified coordinates (xo, yo) of the pixel are retransformed back to the source coordinate system using the inverse of the transformation. The retransformed coordinates (xr, yr) are used in bilinear interpolation and cubic convolution as well. The pixel that is closest to the retransformed coordinates (xr, yr) is the nearest neighbor. The data file value(s) for that pixel become the data file value(s) of the pixel in the output image.

Figure 83: Nearest Neighbor (the source pixel nearest to the retransformed coordinates (xr, yr) supplies the output value)
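A minimal Python sketch of this lookup, assuming (xr, yr) have already been retransformed into data file coordinates (the zero fill value for out-of-image locations is an assumption, not documented ERDAS IMAGINE behavior):

```python
def nearest_neighbor(image, xr, yr, fill=0):
    """Return the value of the pixel closest to (xr, yr), in pixel units."""
    col, row = int(round(xr)), int(round(yr))
    if 0 <= row < image.shape[0] and 0 <= col < image.shape[1]:
        return image[row, col]
    return fill   # coordinate falls outside the source image
```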


Nearest neighbor has the following advantages and disadvantages:

Advantages:

• Transfers original data values without averaging them as the other methods do; therefore, the extremes and subtleties of the data values are not lost. This is an important consideration when discriminating between vegetation types, locating an edge associated with a lineament, or determining different levels of turbidity or temperatures in a lake (Jensen, 1996).

• Suitable for use before classification.

• The easiest of the methods to compute and the fastest to use.

• Appropriate for thematic files, which can have data file values based on a qualitative (nominal or ordinal) system or a quantitative (interval or ratio) system. The averaging that is performed with bilinear interpolation and cubic convolution is not suited to a qualitative class value system.

Disadvantages:

• When this method is used to resample from a larger to a smaller grid size, there is usually a stair-stepped effect around diagonal lines and curves.

• Data values may be dropped, while other values may be duplicated.

• Using it on linear thematic data (for example, roads, streams) may result in breaks or gaps in a network of linear data.

Bilinear Interpolation

In bilinear interpolation, the data file value of the rectified pixel is based upon the distances between the retransformed coordinate location (xr, yr) and the four closest pixels in the input (source) image (see Figure 84). In this example, the neighbor pixels are numbered 1, 2, 3, and 4. Given the data file values of these four pixels on a grid, the task is to calculate a data file value for r (Vr).


Figure 84: Bilinear Interpolation (pixels 1, 2, 3, and 4 surround the retransformed coordinate location r = (xr, yr); dx and dy are the offsets from pixel 1, and D is the pixel spacing)

To calculate Vr, first Vm and Vn are considered. By interpolating Vm and Vn, you can perform linear interpolation, which is a simple process to illustrate. If the data file values are plotted in a graph relative to their distances from one another, then a visual linear interpolation is apparent. The data file value of m (Vm) is a function of the change in the data file value between pixels 3 and 1 (that is, V3 - V1).

Figure 85: Linear Interpolation—calculating a data file value as a function of spatial distance between two pixels

The equation for calculating Vm from V1 and V3 is:

$V_m = \frac{V_3 - V_1}{D} \times dy + V_1$


Where:
Yi = the Y coordinate for pixel i
Vi = the data file value for pixel i
dy = the distance between Y1 and Ym in the source coordinate system
D = the distance between Y1 and Y3 in the source coordinate system

If one considers that (V3 - V1)/D is the slope of the line in the graph above, then this equation translates to the equation of a line in y = mx + b form. Similarly, the equation for calculating the data file value for n (Vn) in the pixel grid is:

$V_n = \frac{V_4 - V_2}{D} \times dy + V_2$

From Vn and Vm, the data file value for r, which is at the retransformed coordinate location (xr, yr), can be calculated in the same manner:

$V_r = \frac{V_n - V_m}{D} \times dx + V_m$

The following is attained by plugging the equations for Vm and Vn into this final equation for Vr:

$V_r = \frac{\left(\frac{V_4 - V_2}{D} \times dy + V_2\right) - \left(\frac{V_3 - V_1}{D} \times dy + V_1\right)}{D} \times dx + \frac{V_3 - V_1}{D} \times dy + V_1$

which reduces to:

$V_r = \frac{V_1(D - dx)(D - dy) + V_2(dx)(D - dy) + V_3(D - dx)(dy) + V_4(dx)(dy)}{D^2}$


In most cases D = 1, since data file coordinates are used as the source coordinates and data file coordinates increment by 1. Some equations for bilinear interpolation express the output data file value as:

$V_r = \sum w_i V_i$

Where:
wi is a weighting factor

The equation above could be expressed in a similar format, in which the calculation of wi is apparent:

$V_r = \sum_{i=1}^{4} \frac{(D - \Delta x_i)(D - \Delta y_i)}{D^2} \times V_i$

Where:
Δxi = the change in the X direction between (xr, yr) and the data file coordinate of pixel i
Δyi = the change in the Y direction between (xr, yr) and the data file coordinate of pixel i
Vi = the data file value for pixel i
D = the distance between pixels (in X or Y) in the source coordinate system

For each of the four pixels, the data file value is weighted more if the pixel is closer to (xr, yr).
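A Python sketch of the bilinear weighting above with D = 1 (boundary handling is omitted, and the row/column conventions are assumptions):

```python
import numpy as np

def bilinear(image, xr, yr):
    """Weighted average of the 2 x 2 neighborhood around (xr, yr), with D = 1."""
    col, row = int(np.floor(xr)), int(np.floor(yr))          # pixel 1 (upper left)
    dx, dy = xr - col, yr - row                              # offsets in the window
    v1, v2 = image[row, col], image[row, col + 1]            # pixels 1 and 2
    v3, v4 = image[row + 1, col], image[row + 1, col + 1]    # pixels 3 and 4
    return (v1 * (1 - dx) * (1 - dy) + v2 * dx * (1 - dy)
            + v3 * (1 - dx) * dy + v4 * dx * dy)
```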


See "Enhancement" on page 455 for more about convolution filtering.

Cubic Convolution

Cubic convolution is similar to bilinear interpolation, except that:

• a set of 16 pixels, in a 4 × 4 array, are averaged to determine the output data file value, and

• an approximation of a cubic function, rather than a linear function, is applied to those 16 input values.

To identify the 16 pixels in relation to the retransformed coordinate (xr,yr), the pixel (i,j) is used, such that:

i = int(xr)
j = int(yr)

This assumes that (xr,yr) is expressed in data file coordinates (pixels). The pixels around (i,j) make up a 4 × 4 grid of input pixels, as illustrated in Figure 86.

Bilinear interpolation has the following advantages and disadvantages:

Advantages:

• Results in output images that are smoother, without the stair-stepped effect that is possible with nearest neighbor.

• More spatially accurate than nearest neighbor.

• This method is often used when changing the cell size of the data, such as in SPOT/TM merges within the 2 × 2 resampling matrix limit.

Disadvantages:

• Since pixels are averaged, bilinear interpolation has the effect of a low-frequency convolution. Edges are smoothed, and some extremes of the data file values are lost.


Figure 86: Cubic Convolution

Since a cubic, rather than a linear, function is used to weight the 16 input pixels, the pixels farther from (xr, yr) have exponentially less weight than those closer to (xr, yr).

Several versions of the cubic convolution equation are used in the field. Different equations have different effects upon the output data file values. Some convolutions may have more of the effect of a low-frequency filter (like bilinear interpolation), serving to average and smooth the values. Others may tend to sharpen the image, like a high-frequency filter. The cubic convolution used in ERDAS IMAGINE is a compromise between low-frequency and high-frequency. The general effect of the cubic convolution depends upon the data. The formula used in ERDAS IMAGINE is:

$V_r = \sum_{n=1}^{4} \Big[\, V_{(i-1,\,j+n-2)} \times f\big(d_{(i-1,\,j+n-2)} + 1\big) + V_{(i,\,j+n-2)} \times f\big(d_{(i,\,j+n-2)}\big) + V_{(i+1,\,j+n-2)} \times f\big(d_{(i+1,\,j+n-2)} - 1\big) + V_{(i+2,\,j+n-2)} \times f\big(d_{(i+2,\,j+n-2)} - 2\big) \Big]$


Where:
i = int(xr)
j = int(yr)
d(i,j) = the distance between a pixel with coordinates (i,j) and (xr, yr)
V(i,j) = the data file value of pixel (i,j)
Vr = the output data file value
a = -1 (a constant)
f(x) = the following function:

$f(x) = \begin{cases} (a+2)|x|^3 - (a+3)|x|^2 + 1 & \text{if } |x| < 1 \\ a|x|^3 - 5a|x|^2 + 8a|x| - 4a & \text{if } 1 < |x| < 2 \\ 0 & \text{otherwise} \end{cases}$

Source: Atkinson, 1985

Cubic convolution has the following advantages and disadvantages:

Advantages:

• Uses 4 × 4 resampling. In most cases, the mean and standard deviation of the output pixels match the mean and standard deviation of the input pixels more closely than any other resampling method.

• The effect of the cubic curve weighting can both sharpen the image and smooth out noise (Atkinson, 1985). The actual effects depend upon the data being used.

• This method is recommended when you are dramatically changing the cell size of the data, such as in TM/aerial photo merges (that is, it matches the 4 × 4 window more closely than the 2 × 2 window).

Disadvantages:

• Data values may be altered.

• This method is extremely slow.
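The piecewise weighting function f(x) above is easy to express directly; a Python sketch with the constant a = -1:

```python
def cubic_kernel(x, a=-1.0):
    """The cubic convolution weighting function f(x), with a = -1."""
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

# The kernel is 1 at distance 0, 0 at distances 1 and 2, and dips
# slightly negative in between, which is what sharpens edges:
print([round(cubic_kernel(d), 3) for d in (0.0, 0.5, 1.0, 1.5, 2.0)])
```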


Bicubic Spline Interpolation

Bicubic Spline Interpolation is based on fitting a cubic spline surface through the current block of points. The output value is derived from the fitting surface, which retains the values of the known points. This algorithm is much slower than other methods of interpolation, but it has the advantage of giving a more exact fit to the curve without the oscillations that other interpolation methods can create. Bicubic Spline Interpolation produces results so similar to Bilinear Interpolation that unless you need to maximize surface smoothness, you should use Bilinear Interpolation.

Data Points

The known data points are an m × n raster array, with cell values V1,1 through Vm,n at locations (x1, y1) through (xm, yn),

Where:
1 < i < m
1 < j < n
d is the cell size of the raster
Vi,j is the cell value at (xi, yj)

and the grid spacing is regular:

$x_{i+1} = x_i + d, \qquad y_{j+1} = y_j + d$

Equations

A bicubic polynomial function V(x, y) is constructed in each cell:

$V(x, y) = \sum_{p=0}^{3} \sum_{q=0}^{3} a_{p,q}^{(i,j)} \, (x - x_i)^p \, (y - y_j)^q$

$R_{i,j}(x, y) = \{\, x_i \le x \le x_{i+1},\; y_j \le y \le y_{j+1} \,\}, \quad i = 1, 2, \ldots, m;\; j = 1, 2, \ldots, n$

subject to the following conditions:


• The functions and their first and second derivatives must be continuous across the interval and equal at the endpoints, and the fourth derivatives of the equations should be zero.

• The function satisfies the conditions

$V(x_i, y_j) = V_{i,j}, \quad i = 1, 2, \ldots, m;\; j = 1, 2, \ldots, n$

that is, the spline must interpolate all data points.

• Coefficients can be obtained by solving the system at the known points together with the selected type of boundary condition. Please refer to Shikin and Plis (Shikin and Plis, 1995) for the boundary conditions and the mathematical details for solving the equations. IMAGINE uses the first type of boundary condition. Because in IMAGINE the input raster grid has been expanded by two cells around the boundary, the boundary condition has no significant effect on the resampling.

Calculate Value for an Unknown Point

The value for point (xr, yr) can be calculated by the following formula:

$V(x_r, y_r) = \sum_{p=0}^{3} \sum_{q=0}^{3} a_{p,q}^{(i_r, j_r)} \, (x_r - x_{i_r})^p \, (y_r - y_{j_r})^q$

where (ir, jr) indexes the cell that contains (xr, yr).


The value is determined by 16 coefficients. Because the coefficients are resolved by using all other known points, all other points contribute to the value. The nearer points contribute more, whereas the farther points contribute less.

Source: Shikin and Plis, 1995

Bicubic spline interpolation has the following advantages and disadvantages:

Advantages:

• Results in the smoothest output images.

• More spatially accurate than nearest neighbor.

• This method is often used when upsampling.

Disadvantages:

• The most computationally intensive resampling method, and is therefore the slowest.
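ERDAS IMAGINE's solver (with the Shikin and Plis boundary conditions) is not reproduced here; as a rough stand-in, SciPy's RectBivariateSpline fits an analogous bicubic spline surface through a raster and can be evaluated at retransformed locations:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline  # assumes SciPy is installed

# Hypothetical 6 x 6 raster with cell size d = 1.
values = np.arange(36, dtype=float).reshape(6, 6)
rows = np.arange(6, dtype=float)
cols = np.arange(6, dtype=float)

# kx = ky = 3 gives a cubic spline in both directions; s = 0 forces the
# surface to pass exactly through the known points.
spline = RectBivariateSpline(rows, cols, values, kx=3, ky=3, s=0)
print(spline(2.4, 3.7))   # value at a retransformed (row, col) location
```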

Map-to-Map Coordinate Conversions

There are many instances when you may need to change a map that is already registered to a planar projection to another projection. Some examples of when this is required are as follows (Environmental Systems Research Institute, 1992):

• When combining two maps with different projection characteristics.

• When the projection used for the files in the data base does not produce the desired properties of a map.

• When it is necessary to combine data from more than one zone of a projection, such as UTM or State Plane.

A change in the projection is a geometric change—distances, areas, and scale are represented differently. Therefore, the conversion process requires that pixels be resampled.



Resampling causes some of the spectral integrity of the data to be lost (see the disadvantages of the resampling methods explained previously). So, it is not usually wise to resample data that have already been resampled if the accuracy of data file values is important to the application. If the original unrectified data are available, it is usually wiser to rectify that data to a second map projection system than to lose a generation by converting rectified data and resampling it a second time.

Conversion Process

To convert the map coordinate system of any georeferenced image, ERDAS IMAGINE provides a shortcut to the rectification process. In this procedure, GCPs are generated automatically along the intersections of a grid that you specify. The program calculates the reference coordinates for the GCPs with the appropriate conversion formula and a transformation that can be used in the regular rectification process.
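Outside ERDAS IMAGINE, the same grid-of-generated-GCPs idea can be sketched with the pyproj library (assumed to be available; the extent and projections below are hypothetical):

```python
import numpy as np
from pyproj import Transformer  # assumes pyproj is installed

# Generate GCPs along a regular grid in the source projection
# (hypothetical extent in UTM zone 16N), then convert to Lat/Lon.
xs = np.linspace(200_000, 300_000, 5)
ys = np.linspace(3_700_000, 3_800_000, 5)
grid_x, grid_y = np.meshgrid(xs, ys)

to_latlon = Transformer.from_crs("EPSG:32616", "EPSG:4326", always_xy=True)
lon, lat = to_latlon.transform(grid_x, grid_y)
# The corresponding coordinate pairs can then drive the usual
# transformation-fitting step of the regular rectification process.
```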

Vector Data

Converting the map coordinates of vector data is much easier than converting raster data. Since vector data are stored by the coordinates of nodes, each coordinate is simply converted using the appropriate conversion formula. There are no coordinates between nodes to extrapolate.


Hardcopy Output

Introduction

Hardcopy output refers to any output of image data to paper. These topics are covered in this chapter:

• printing maps

• the mechanics of printing

For additional information, see the chapter about "Windows Printing" on page 65 in the ERDAS IMAGINE Configuration Guide.

Printing Maps

ERDAS IMAGINE enables you to create and output a variety of types of hardcopy maps, with several referencing features.

Scaled Maps

A scaled map is a georeferenced map that has been projected to a map projection, and is accurately laid out and referenced to represent distances and locations. A scaled map usually has a legend that includes a scale, such as 1 inch = 1000 feet. The scale is often expressed as a ratio, like 1:12,000, where 1 inch on the map represents 12,000 inches on the ground.

See "Rectification" on page 251 for information on rectifying and georeferencing images and "Cartography" on page 211 for information on creating maps.

Printing Large Maps

Some scaled maps do not fit on the paper that is used by the printer. These methods are used to print and store large maps:

• A book map is laid out like the pages of a book. Each page fits on the paper used by the printer. There is a border, but no tick marks on every page.

• A paneled map is designed to be spliced together into a large paper map; therefore, borders and tick marks appear on the outer edges of the large map.


Figure 87: Layout for a Book Map and a Paneled Map

Scale and Resolution

The following scales and resolutions come into play during the process of creating a map composition and sending the composition to a hardcopy device:

• spatial resolution of the image

• display scale of the map composition

• map scale of the image(s)

• map composition to paper scale

• device resolution

Spatial Resolution Spatial resolution is the area on the ground represented by each raw image data pixel.

Display Scale Display scale is the distance on the screen as related to one unit on paper. For example, if the map composition is 24 inches by 36 inches, it would not be possible to view the entire composition on the screen. Therefore, the scale could be set to 1:0.25 so that the entire map composition would be in view.



Map Scale The map scale is the distance on a map as related to the true distance on the ground, or the area that one pixel represents, measured in map units. The map scale is defined when you create an image area in the map composition. One map composition can have multiple image areas set at different scales. These areas may need to be shown at different scales for different applications.

Map Composition to Paper Scale This scale is the original size of the map composition as related to the desired output size on paper.

Device Resolution The number of dots that are printed per unit—for example, 300 dots per inch (DPI).

Use the ERDAS IMAGINE Map Composer to define the above scales and resolutions.

Map Scaling Examples The ERDAS IMAGINE Map Composer enables you to define a map size, as well as the size and scale for the image area within the map composition. The examples in this section focus on the relationship between these factors and the output file created by Map Composer for the specific hardcopy device or file format. Figure 88 is the map composition that is used in the examples. This composition was originally created using the ERDAS IMAGINE Map Composer at a size of 22” × 34”, and the hardcopy output must be in two different formats.

• It must be output to a PostScript printer on an 8.5” × 11” piece of paper.

• A TIFF file must be created and sent to a film recorder with a resolution of 1,000 DPI.


Figure 88: Sample Map Composition

Output to PostScript Printer Since the map was created at 22” × 34”, the map composition to paper scale needs to be calculated so that the composition fits on an 8.5” × 11” piece of paper. If this scale is set to a one-to-one ratio, the composition is paneled. To determine the map composition to paper scale factor, it is necessary to calculate the most limiting direction. Since the printable area for the printer is approximately 8.1” × 8.6”, these numbers are used in the calculation.

• 8.1” / 22” ≈ 0.37 (horizontal direction)

• 8.6” / 34” ≈ 0.25 (vertical direction)

The vertical direction is the most limiting; therefore, the map composition to paper scale would be set to 0.25.
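The limiting-direction arithmetic can be restated as a small function. This is only a sketch of the calculation above, not an ERDAS IMAGINE API:

def composition_to_paper_scale(comp_width, comp_height, printable_width, printable_height):
    """Map composition to paper scale, set by the most limiting direction."""
    return min(printable_width / comp_width, printable_height / comp_height)

# 8.1 / 22 ~ 0.37 (horizontal), 8.6 / 34 ~ 0.25 (vertical); vertical limits
print(round(composition_to_paper_scale(22, 34, 8.1, 8.6), 2))  # prints 0.25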

If the specified size of the map (width and height) is greater than the printable area for the printer, the output hardcopy map is paneled. See the hardware manual of the hardcopy device for information about the printable area of the device.


Use the Print Map Composition dialog to output a map composition to a PostScript printer.

Output to TIFF The limiting factor in this example is not page size, but disk space (600 MB total). A three-band image file must be created in order to convert the map composition to a .tif file. Due to the three bands and the high resolution, the image file could be very large. The .tif file is output to a film recorder with a 1,000 DPI device resolution. To determine the number of megabytes for the map composition, the X and Y dimensions need to be calculated:

• X = 22 inches × 1,000 dots/inch = 22,000

• Y = 34 × 1,000 = 34,000

• 22,000 × 34,000 × 3 = 2,244,000,000 bytes ≈ 2,244 MB (multiplied by 3 since there are 3 bands, at one byte per band)

Although this appears to be an unmanageable file size, it is possible to reduce the file size with little image degradation. Because the total disk space is only 600 megabytes, the image file created from the map composition must take up less than half of that space to leave room for the image-to-TIFF conversion. Dividing the map composition by three in both the X and Y directions (2,244 MB / 3 / 3) results in approximately a 250 megabyte file. This file size is small enough to process and leaves enough room for the conversion. This division is accomplished by specifying a 1/3 (0.333) map composition to paper scale when outputting the map composition to an image file. Once the image file is created and exported to TIFF format, it can be sent to a film recorder that accepts .tif files. Remember, the file must be enlarged by a factor of three at output to compensate for the reduction during image file creation.
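The file size arithmetic can be checked with a short function. This sketch assumes one byte per band and defines 1 MB as 10**6 bytes; it is illustrative only:

def raw_size_mb(width_inches, height_inches, dpi, bands, bytes_per_band=1):
    """Uncompressed raster size in megabytes for a printed composition."""
    pixels = (width_inches * dpi) * (height_inches * dpi)
    return pixels * bands * bytes_per_band / 1e6

print(raw_size_mb(22, 34, 1000, 3))          # 2244.0 MB at full size
print(raw_size_mb(22 / 3, 34 / 3, 1000, 3))  # about 249 MB at 1/3 scale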

See the hardware manual of the hardcopy device for information about the DPI device resolution.

Use the ERDAS IMAGINE Print Map Composition dialog to output a map composition to an image file.


Mechanics of Printing

This section describes the mechanics of transferring an image or map composition from a data file to a hardcopy map.

Halftone Printing Halftoning is the process of converting a continuous tone image into a pattern of dots. A newspaper photograph is a common example of halftoning.

To make a color illustration, halftones in the primary colors (cyan, magenta, and yellow), plus black, are overlaid. The halftone dots of different colors, in close proximity, create the effect of blended colors in much the same way that phosphorescent dots on a color computer monitor combine red, green, and blue to create other colors. By using different patterns of dots, colors can have different intensities. The dots used for halftoning have a fixed density—either a dot is there or it is not.

For scaled maps, each output pixel may contain one or more dot patterns. If a very large image file is being printed onto a small piece of paper, data file pixels are skipped to accommodate the reduction.
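As a toy illustration of fixed-density dots, the classical ordered (Bayer) dither below converts a grayscale image into on/off ink dots by comparing each pixel against a repeating threshold pattern. Real printer screening is more sophisticated; treat this only as a sketch of the halftoning idea.

# 4 x 4 Bayer matrix; entries 0-15 define a repeating threshold pattern
BAYER_4X4 = [[ 0,  8,  2, 10],
             [12,  4, 14,  6],
             [ 3, 11,  1,  9],
             [15,  7, 13,  5]]

def halftone(gray):
    """Convert a grayscale image (values 0-255) to a pattern of ink dots.

    Returns 1 where an ink dot is printed; darker input yields more dots.
    """
    dots = []
    for y, row in enumerate(gray):
        thresholds = BAYER_4X4[y % 4]
        dots.append([1 if value < (thresholds[x % 4] + 0.5) * 16 else 0
                     for x, value in enumerate(row)])
    return dots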

Hardcopy Devices The following hardcopy devices use halftoning to output an image or map composition:

• Tektronix Inkjet Printer

• Tektronix Phaser Printer

See the user’s manual for the hardcopy device for more information about halftone printing.

Continuous Tone Printing

Continuous tone printing enables you to output color imagery using the four process colors (cyan, magenta, yellow, and black). By using varying percentages of these colors, it is possible to create a wide range of colors. The printer converts digital data from the host computer into a continuous tone image. The quality of the output picture is similar to a photograph. The output is smoother than halftoning because the dots for continuous tone printing can vary in density.


Example There are different processes by which continuous tone printers generate a map. One example is a process called thermal dye transfer. The entire image or map composition is loaded into the printer’s memory. While the paper moves through the printer, heat is used to transfer the dye from a ribbon, which has the dyes for all of the four process colors, to the paper. The density of the dot depends on the amount of heat applied by the printer to transfer the dye. The amount of heat applied is determined by the brightness values of the input image. This allows the printer to control the amount of dye that is transferred to the paper to create a continuous tone image.

Hardcopy Devices The following hardcopy device uses continuous toning to output an image or map composition:

• Tektronix Phaser II SD

NOTE: The above printer does not necessarily use the thermal dye transfer process to generate a map.

See the user’s manual for the hardcopy device for more information about continuous tone printing.

Contrast and Color Tables

ERDAS IMAGINE contrast and color tables are used for some printing processes, just as they are used in displaying an image. For continuous raster layers, they are loaded from the ERDAS IMAGINE contrast table. For thematic layers, they are loaded from the color table. The translation of data file values to brightness values is performed entirely by the software program.

RGB to CMY Conversion

Colors Since a printer uses ink instead of light to create a visual image, the primary colors of pigment (cyan, magenta, and yellow) are used in printing, instead of the primary colors of light (red, green, and blue). Cyan, magenta, and yellow can be combined to make black through a subtractive process, whereas the primary colors of light are additive—red, green, and blue combine to make white (Gonzalez and Wintz, 1977). The data file values that are sent to the printer and the contrast and color tables that accompany the data file are all in the RGB color scheme. The RGB brightness values in the contrast and color tables must be converted to cyan, magenta, and yellow (CMY) values.


The RGB primary colors are the opposites of the CMY colors—meaning, for example, that the presence of cyan in a color means an equal lack of red. To convert the values, each RGB brightness value is subtracted from the maximum brightness value to produce the brightness value for the opposite color. The following equation shows this relationship:

C = MAX - R
M = MAX - G
Y = MAX - B

Where:

MAX = the maximum brightness value
R = red value from lookup table
G = green value from lookup table
B = blue value from lookup table
C = calculated cyan value
M = calculated magenta value
Y = calculated yellow value
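This subtraction is straightforward to express in code. The function below is a minimal sketch of the conversion formula above; the names are illustrative, not an ERDAS IMAGINE API:

def rgb_to_cmy(r, g, b, max_value=255):
    """Convert RGB brightness values to CMY by subtracting from the maximum."""
    return max_value - r, max_value - g, max_value - b

c, m, y = rgb_to_cmy(200, 120, 40)  # yields (55, 135, 215)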

Black Ink Although, theoretically, cyan, magenta, and yellow combine to create black, the color that results is often a dark, muddy brown. Many printers also use black ink for a truer black.

NOTE: Black ink may not be available on all printers. Consult the user’s manual for your printer.

Images often appear darker when printed than they do when displayed on the display device. Therefore, it may be beneficial to improve the contrast and brightness of an image before it is printed.

Use the programs discussed in "Enhancement" on page 455 to brighten or enhance an image before it is printed.


Map Projections

Introduction This appendix is an alphabetical listing of the map projections supported in ERDAS IMAGINE. It is divided into two sections:

• USGS Projections, and

• External Projections

The external projections were implemented outside of ERDAS IMAGINE so that you can add to them using the IMAGINE Developers’ Toolkit. The projections in each section are presented in alphabetical order.

The information in this appendix is adapted from:

• Map Projections for Use with the Geographic Information System (Lee and Walsh, 1984)

• Map Projections—A Working Manual (Snyder, 1987)

• ArcInfo HELP (Environmental Systems Research Institute, 1997)

Other sources are noted in the text.

For general information about map projection types, refer to "Cartography" on page 211.

Rectify an image to a particular map projection using the ERDAS IMAGINE Rectification tools. View, add, or change projection information using the Image Information option.

NOTE: You cannot rectify to a new map projection using the Image Information option. You should change map projection information using Image Information only if you know the information to be incorrect. Use the rectification tools to actually georeference an image to a new map projection system.


USGS Projections The following USGS map projections are supported in ERDAS IMAGINE and are described in this section:

• Alaska Conformal

• Albers Conical Equal Area

• Azimuthal Equidistant

• Behrmann

• Bonne

• Cassini

• Cylindrical Equal Area

• Double Stereographic

• Eckert I

• Eckert II

• Eckert III

• Eckert IV

• Eckert V

• Eckert VI

• EOSAT SOM

• EPSG Coordinate Systems

• Equidistant Conic

• Equidistant Cylindrical

• Equirectangular (Plate Carrée)

• Gall Stereographic

• Gauss Kruger

• General Vertical Near-side Perspective

• Geographic (Lat/Lon)


• Gnomonic

• Hammer

• Interrupted Goode Homolosine

• Interrupted Mollweide

• Krovak

• Lambert Azimuthal Equal Area

• Lambert Conformal Conic

• Lambert Conic Conformal (1 Standard Parallel)

• Loximuthal

• Mercator

• Miller Cylindrical

• Military Grid Reference System (MGRS)

• Modified Transverse Mercator

• Mollweide

• New Zealand Map Grid

• Oblated Equal Area

• Oblique Mercator (Hotine)

• Orthographic

• Plate Carrée

• Polar Stereographic

• Polyconic

• Quartic Authalic

• Robinson

• RSO

• Sinusoidal


• Space Oblique Mercator

• Space Oblique Mercator (Formats A & B)

• State Plane

• Stereographic

• Stereographic (Extended)

• Transverse Mercator

• Two Point Equidistant

• UTM

• Van der Grinten I

• Wagner IV

• Wagner VII

• Winkel I


Alaska Conformal Use of this projection results in a conformal map of Alaska. It has little scale distortion as compared to other conformal projections. The method of projection is “modified planar. [It is] a sixth-order-equation modification of an oblique Stereographic conformal projection on the Clarke 1866 spheroid. The origin is at 64° N, 152° W” (Environmental Systems Research Institute, 1997).

Source: Environmental Systems Research Institute, 1997

Prompts The following prompts display in the Projection Chooser once Alaska Conformal is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

False easting

False northing

Construction Modified planar

Property Conformal

Meridians N/A

Parallels N/A

Graticule spacing

N/A

Linear scale

The minimum scale factor is 0.997 at roughly 62.5° N, 156° W. Scale increases outward from these coordinates. Most of Alaska and the Aleutian Islands (with the exception of the panhandle) is bounded by a line of true scale. The scale factor for Alaska is from 0.997 to 1.003. That is one quarter the range for a corresponding conic projection (Snyder, 1987).

Uses

This projection is useful for mapping the complete state of Alaska on the Clarke 1866 spheroid or NAD27, but not with other datums and spheroids. Distortion increases as distance from Alaska increases.


Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.


Albers Conical Equal Area

The Albers Conical Equal Area projection is mathematically based on a cone that is conceptually secant on two parallels. There is no areal deformation. The North or South Pole is represented by an arc. It retains its properties at various scales, and individual sheets can be joined along their edges.

This projection produces very accurate area and distance measurements in the middle latitudes (Figure 89). Thus, Albers Conical Equal Area is well-suited to countries or continents where north-south depth is about 3/5 the breadth of east-west. When this projection is used for the continental US, the two standard parallels are 29.5° and 45.5° North.

Construction Cone

Property Equal-area

Meridians Meridians are straight lines converging on the polar axis, but not at the pole.

Parallels Parallels are arcs of concentric circles concave toward a pole.

Graticule spacing

Meridian spacing is equal on the standard parallels and decreases toward the poles. Parallel spacing decreases away from the standard parallels and increases between them. Meridians and parallels intersect each other at right angles. The graticule spacing preserves the property of equivalence of area. The graticule is symmetrical.

Linear scale Linear scale is true on the standard parallels. Maximum scale error is 1.25% on a map of the United States (48 states) with standard parallels of 29.5°N and 45.5°N.

Uses

Used for thematic maps. Used for large countries with an east-west orientation. Maps based on the Albers Conical Equal Area for Alaska use standard parallels 55°N and 65°N; for Hawaii, the standard parallels are 8°N and 18°N. The National Atlas of the United States, United States Base Map (48 states), and the Geologic map of the United States are based on the standard parallels of 29.5°N and 45.5°N.


This projection possesses the property of equal-area, and the standard parallels are correct in scale and in every direction. Thus, there is no angular distortion (i.e., meridians intersect parallels at right angles), and conformality exists along the standard parallels. Like other conics, Albers Conical Equal Area has concentric arcs for parallels and equally spaced radii for meridians. Parallels are not equally spaced, but are farthest apart between the standard parallels and closer together on the north and south edges. Albers Conical Equal Area is the projection exclusively used by the USGS for sectional maps of all 50 states of the US in the National Atlas of 1970.

Prompts The following prompts display in the Projection Chooser once Albers Conical Equal Area is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Latitude of 1st standard parallel

Latitude of 2nd standard parallel

Enter two values for the desired control lines of the projection (i.e., the standard parallels). Note that the first standard parallel is the southernmost. Then, define the origin of the map projection in both spherical and rectangular coordinates.

Longitude of central meridian

Latitude of origin of projection

Enter values for longitude of the desired central meridian and latitude of the origin of projection.

False easting at central meridian

False northing at origin

Enter values of false easting and false northing, corresponding to the intersection of the central meridian and the latitude of the origin of projection. These values must be in meters. It is often convenient to make them large enough to prevent negative coordinates from occurring within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.


Figure 89: Albers Conical Equal Area Projection

In Figure 89, the standard parallels are 20°N and 60°N. Note the change in spacing of the parallels.


Azimuthal Equidistant

The Azimuthal Equidistant projection is mathematically based on a plane tangent to the Earth. The entire Earth can be represented, but generally less than one hemisphere is portrayed; the other hemisphere can be shown, but it is much distorted. The projection has true direction and true distance scaling from the point of tangency.

Construction Plane

Property Equidistant

Meridians

Polar aspect: the meridians are straight lines radiating from the point of tangency.

Oblique aspect: the meridians are complex curves concave toward the point of tangency.

Equatorial aspect: the meridians are complex curves concave toward a straight central meridian, except the outer meridian of a hemisphere, which is a circle.

Parallels

Polar aspect: the parallels are concentric circles.

Oblique aspect: the parallels are complex curves.

Equatorial aspect: the parallels are complex curves concave toward the nearest pole; the Equator is straight.

Graticule spacing

Polar aspect: the meridian spacing is equal and increases away from the point of tangency. Parallel spacing is equidistant. Angular and area deformation increase away from the point of tangency.

Linear scale

Polar aspect: linear scale is true from the point of tangency along the meridians only.

Oblique and equatorial aspects: linear scale is true from the point of tangency. In all aspects, the projection shows distances true to scale when measured between the point of tangency and any other point on the map.

Uses

The Azimuthal Equidistant projection is used for radio and seismic work, as every place in the world is shown at its true distance and direction from the point of tangency. The USGS uses the oblique aspect in the National Atlas and for large-scale mapping of Micronesia. The polar aspect is used as the emblem of the United Nations.


This projection is used mostly for polar projections because latitude rings divide meridians at equal intervals with a polar aspect (Figure 90). Linear scale distortion is moderate and increases toward the periphery. Meridians are equally spaced, and all distances and directions are shown accurately from the central point. This projection can also be used to center on any point on the Earth (e.g., a city) and distance measurements are true from that central point. Distances are not correct or true along parallels, and the projection is neither equal-area nor conformal. Also, straight lines radiating from the center of this projection represent great circles.

Prompts The following prompts display in the Projection Chooser if Azimuthal Equidistant is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Define the center of the map projection in both spherical and rectangular coordinates.

Longitude of center of projection

Latitude of center of projection

Enter values for the longitude and latitude of the desired center of the projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.


Figure 90: Polar Aspect of the Azimuthal Equidistant Projection

This projection is commonly used in atlases for polar maps.


Behrmann With the exception of compression in the horizontal direction and expansion in the vertical direction, the Behrmann projection is the same as the Lambert Cylindrical Equal-area projection. These changes place the lines of no distortion at latitudes 30° N and S instead of at the Equator.

Source: Snyder and Voxland, 1989

Prompts The following prompts display in the Projection Chooser once Behrmann is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian

Enter a value of the longitude of the desired central meridian.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Construction Cylindrical

Property Equal-area

Meridians Straight parallel lines that are equally spaced and 0.42 times the length of the Equator.

Parallels Straight lines that are unequally spaced and farthest apart near the Equator, perpendicular to meridians.

Graticule spacing See Meridians and Parallels. Poles are straight lines the same length as the Equator. Symmetry is present about any meridian or the Equator.

Linear scale Scale is true along latitudes 30° N and S.

Uses Used for creating world maps.


Figure 91: Behrmann Cylindrical Equal-Area Projection

Source: Snyder and Voxland, 1989


Bonne The Bonne projection is an equal-area projection. True scale is achievable along the central meridian and all parallels. Although it was used in the 1800s and early 1900s, Bonne was replaced by the Lambert Azimuthal Equal Area projection by the mapping companies Rand McNally & Co. and Hammond, Inc. (see Lambert Azimuthal Equal Area on page 353).

Source: Environmental Systems Research Institute, 1997

Prompts The following prompts display in the Projection Chooser once Bonne is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Latitude of standard parallel

Longitude of central meridian

Enter values of the latitude of standard parallel and the longitude of central meridian.

False easting

False northing

Construction Pseudocone

Property Equal-area

Meridians N/A

Parallels Parallels are concentric arcs that are equally spaced.

Graticule spacing

The central meridian is a linear graticule.

Linear scale Scale is true along the central meridian and all parallels.

Uses This projection is best used on maps of continents and small areas. There is some distortion.


Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 92: Bonne Projection

Source: Snyder and Voxland, 1989


Cassini The Cassini projection is a transverse cylindrical projection and is neither equal-area nor conformal. It is best used for areas that extend mostly in the north-south direction.

Source: Environmental Systems Research Institute, 1997

Prompts The following prompts display in the Projection Chooser once Cassini is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Scale Factor

Enter the scale factor.

Longitude of central meridian

Latitude of origin of projection

Enter the values for longitude of central meridian and latitude of origin of projection.

False easting

False northing

Construction Cylinder

Property Compromise

Meridians N/A

Parallels N/A

Graticule spacing

Linear graticules are located at the Equator, the central meridian, and the meridians 90° from the central meridian.

Linear scale With increasing distance from the central meridian, scale distortion increases. Scale is true along the central meridian and lines perpendicular to the central meridian.

Uses Cassini is used for large maps of areas near the central meridian. The extent is 5 degrees to either side of the central meridian.


Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 93: Cassini Projection

Source: Snyder and Voxland, 1989


Cylindrical Equal Area

The Cylindrical Equal Area projection is suited for equal-area mapping of regions that predominantly border the Equator. There is shape distortion but no area distortion in this projection, and shape distortion in the polar regions is extreme. This projection was presented by Johann Heinrich Lambert in 1772; thus, it is also known as the Lambert Cylindrical Equal Area projection.

Source: Snyder and Voxland, 1989 and Snyder, 1987

Prompts The following prompts display in the Projection Chooser once Cylindrical Equal Area is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

The list of available spheroids is located in Table 44 on page 243.

Latitude of standard parallel

Longitude of central meridian

Enter the value for the latitude of the standard parallel and the value for the longitude of the central meridian.

False easting

False northing

Construction Cylindrical

Property Equal-area

Meridians Straight parallel lines that are equally spaced and 0.32 times the length of the Equator.

Parallels Straight lines that are unequally spaced and farthest apart near the Equator, perpendicular to meridians.

Graticule spacing See Meridians and Parallels. Poles are straight lines the same length as the Equator. Symmetry is present about any meridian or the Equator.

Linear scale Scale is true along the Equator. Equal area is maintained by scale increasing with distance from the Equator in the direction of the parallels, and decreasing in the direction of the meridians. The scale is the same at the parallel of opposite sign.

Uses Equal-area mapping of regions predominantly bordering the Equator.


Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 94: Cylindrical Equal-Area Projection

Source: Snyder and Voxland, 1989


Double Stereographic

The Double Stereographic projection is a variation of the Stereographic projection and is also used to represent polar areas. Points are projected from a position on the opposite side of the globe onto a plane tangent to the Earth. Double Stereographic is the term used in ESRI software for the Oblique Stereographic case (EPSG code 9809), which uses the EPSG equations; in contrast, the Stereographic case uses the USGS equations of Snyder. The Double Stereographic projection is used for large-scale coordinate systems in the Netherlands and New Brunswick.

Prompts The following prompts display in the Projection Chooser if Double Stereographic is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Scale Factor

Construction Plane

Property Conformal

Meridians All meridians are straight lines or circular arcs.

Parallels

All parallels are straight lines or circular arcs.

Oblique aspect: the parallels are concave toward the poles, with one parallel being a straight line.

Equatorial aspect: parallels curve in opposite directions on either side of the Equator.

Graticule spacing

Graticule intersections are 90 degrees.

Linear scale Scale increases with distance from the center.


Designate the desired scale factor. This parameter is used to modify scale distortion. A value of one indicates true scale only along the central meridian. It may be desirable to have true scale along two lines equidistant from and parallel to the central meridian, or to lessen scale distortion away from the central meridian. A factor of less than, but close to one is often used.

Longitude of center of projection

Latitude of center of projection

Enter the values for the longitude of the center of the projection and the latitude of the center of the projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Source: Environmental Systems Research Institute, 2000 and Geotoolkit.org, 2009a


Eckert I The break in the meridians at the Equator causes a great amount of distortion there.

Source: Snyder and Voxland, 1989

Prompts The following prompts display in the Projection Chooser once Eckert I is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian

Enter a value of the longitude of the desired central meridian.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Construction Pseudocylinder

Property Neither conformal nor equal-area

Meridians Meridians are converging straight lines that are equally spaced and broken at the Equator.

Parallels Parallels are perpendicular to the central meridian, equally spaced straight parallel lines.

Graticule spacing See Meridians and Parallels. Poles are lines one half the length of the Equator. Symmetry exists about the central meridian or the Equator.

Linear scale Scale is true along latitudes 47° 10’ N and S. Scale is constant at any latitude (and latitude of opposite sign) and any meridian.

Uses This projection is used as a novelty to show a straight-line graticule.


Figure 95: Eckert I Projection

Source: Snyder and Voxland, 1989


Eckert II The break at the Equator creates a great amount of distortion there. Eckert II is similar to the Eckert I projection. The Eckert I projection has meridians positioned identically to Eckert II, but the Eckert I projection has equidistant parallels.

Source: Snyder and Voxland, 1989

Prompts The following prompts display in the Projection Chooser once Eckert II is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian

Enter a value of the longitude of the desired central meridian.

False easting

False northing

Construction Pseudocylinder

Property Equal-area

Meridians Meridians are straight lines that are equally spaced and broken at the Equator. Central meridian is one half as long as the Equator.

Parallels Parallels are straight parallel lines that are unequally spaced. The greatest separation is close to the Equator. Parallels are perpendicular to the central meridian.

Graticule spacing See Meridians and Parallels. Pole lines are half the length of the Equator. Symmetry exists about the central meridian or the Equator.

Linear scale Scale is true along latitudes 55° 10’ N and S. Scale is constant along any latitude.

Uses This projection is used as a novelty to show straight-line equal-area graticule.


Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 96: Eckert II Projection

Source: Snyder and Voxland, 1989


Eckert III In the Eckert III projection, “no point is free of all scale distortion, but the Equator is free of angular distortion” (Snyder and Voxland, 1989).

Source: Environmental Systems Research Institute, 1997

Prompts The following prompts display in the Projection Chooser once Eckert III is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian

Enter a value of the longitude of the desired central meridian.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Construction Pseudocylinder

Property Area is not preserved.

Meridians Meridians are equally spaced elliptical curves. The meridians +/- 180° from the central meridian are semicircles. The poles and the central meridian are straight lines one half the length of the Equator.

Parallels Parallels are equally spaced straight lines.

Graticule spacing See Meridians and Parallels. Pole lines are half the length of the Equator. Symmetry exists about the central meridian or the Equator.

Linear scale Scale is correct only along 37° 55’ N and S. Features close to the poles are compressed in the north-south direction.

Uses Used for mapping the world.


Figure 97: Eckert III Projection

Source: Snyder and Voxland, 1989


Eckert IV The Eckert IV projection is best used for thematic maps of the globe. An example of a thematic map is one depicting land cover.

Source: Environmental Systems Research Institute, 1997

Prompts The following prompts display in the Projection Chooser once Eckert IV is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian

Enter a value of the longitude of the desired central meridian.

False easting

False northing

Construction Pseudocylinder

Property Equal-area

Meridians Meridians are elliptical arcs that are equally spaced.

Parallels Parallels are straight lines that are unequally spaced and closer together at the poles.

Graticule spacing

See Meridians and Parallels. The poles and the central meridian are straight lines one half the length of the Equator.

Linear scale

“Scale is distorted north-south 40 percent along the Equator relative to the east-west dimension. This distortion decreases to zero at 40° 30’ N and S and at the central meridian. Scale is correct only along these parallels. Nearer the poles, features are compressed in the north-south direction” (Environmental Systems Research Institute, 1997).

Uses Use for world maps only.


Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 98: Eckert IV Projection

Source: Snyder and Voxland, 1989


Eckert V The Eckert V projection is only supported on a sphere. Like Eckert III, “no point is free of all scale distortion, but the Equator is free of angular distortion” (Snyder and Voxland, 1989).

Source: Environmental Systems Research Institute, 1997

Prompts The following prompts display in the Projection Chooser once Eckert V is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian

Enter a value of the longitude of the desired central meridian.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Construction Pseudocylinder

Property Area is not preserved.

Meridians Meridians are sinusoidal curves that are equally spaced. The poles and the central meridian are straight lines one half as long as the Equator.

Parallels Parallels are straight lines that are equally spaced.

Graticule spacing See Meridians and Parallels.

Linear scale Scale is correct only along 37° 55’ N and S. Features near the poles are compressed in the north-south direction.

Uses This projection is best used for thematic world maps.


Figure 99: Eckert V Projection

Source: Snyder and Voxland, 1989


Eckert VI The Eckert VI projection is best used for thematic maps. An example of a thematic map is one depicting land cover.

Source: Environmental Systems Research Institute, 1997

Prompts The following prompts display in the Projection Chooser once Eckert VI is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian

Enter a value of the longitude of the desired central meridian.

False easting

False northing

Construction Pseudocylinder

Property Equal-area

Meridians Meridians are sinusoidal curves that are equally spaced.

Parallels Parallels are unequally spaced straight lines, closer together at the poles.

Graticule spacing

See Meridians and Parallels. The poles and the central meridian are straight lines one half the length of the Equator.

Linear scale

“Scale is distorted north-south 29 percent along the Equator relative to the east-west dimension. This distortion decreases to zero at 49° 16’ N and S at the central meridian. Scale is correct only along these parallels. Nearer the poles, features are compressed in the north-south direction” (Environmental Systems Research Institute, 1997).

Uses Use for world maps only.


Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 100: Eckert VI Projection

Source: Snyder and Voxland, 1989


EOSAT SOM The EOSAT SOM projection is similar to the Space Oblique Mercator projection. The main difference is that the EOSAT SOM projection’s X and Y coordinates are switched.

For information, see Space Oblique Mercator on page 395.

Prompts The following prompts display in the Projection Chooser once EOSAT SOM is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Landsat vehicle ID (1-5)

Specify whether the data are from Landsat 1, 2, 3, 4, or 5.

Orbital path number (1-251 or 1-233)

For Landsats 1, 2, and 3, the path range is from 1 to 251. For Landsats 4 and 5, the path range is from 1 to 233.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.


EPSG Coordinate Systems

EPSG Coordinate Systems is a dataset of parameters for describing coordinate reference systems and coordinate transformations. This dataset is known as the EPSG Geodetic Parameter Dataset, published by the OGP Surveying and Positioning Committee. This committee was formed in 2005 by the absorption into OGP of the now-defunct European Petroleum Survey Group (EPSG).

The EPSG geodetic parameter dataset is a repository of the parameters required to identify coordinates such that position is described unambiguously through a coordinate reference system (CRS) definition, and to define the transformations and conversions that allow coordinates to be changed from one CRS to another.

The EPSG dataset uses numeric codes for the map projections. For example, the map projection NAD83 / UTM zone 17N is defined as code EPSG::26917.

The EPSG geodetic parameter dataset and its documentation are maintained at www.epsg-registry.org.

References: www.epsg.org
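As a hedged illustration, an EPSG code can be resolved to a full CRS definition with the third-party pyproj library (an assumption here, not part of ERDAS IMAGINE):

from pyproj import CRS  # third-party library; assumed installed

# Resolve the numeric code from the example above to a CRS definition
crs = CRS.from_epsg(26917)
print(crs.name)  # NAD83 / UTM zone 17N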

Prompts The following prompt displays in the Projection Chooser once EPSG Coordinate Systems is selected. Respond to the prompt as described.

Projection

Select the projection to use.


Equidistant Conic With Equidistant Conic (Simple Conic) projections, correct distance is achieved along the line(s) of contact with the cone, and parallels are equidistantly spaced. It can be used with either one (A) or two (B) standard parallels.

This projection is neither conformal nor equal-area, but the north-south scale along meridians is correct. The North or South Pole is represented by an arc. Because scale distortion increases with increasing distance from the line(s) of contact, the Equidistant Conic is used mostly for mapping regions predominantly east-west in extent. The USGS uses the Equidistant Conic in an approximate form for a map of Alaska.

Prompts The following prompts display in the Projection Chooser if Equidistant Conic is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Construction Cone

Property Equidistant

Meridians Meridians are straight lines converging on a polar axis but not at the pole.

Parallels Parallels are arcs of concentric circles concave toward a pole.

Graticule spacing

Meridian spacing is true on the standard parallels and decreases toward the pole. Parallels are placed at true scale along the meridians. Meridians and parallels intersect each other at right angles. The graticule is symmetrical.

Linear scale Linear scale is true along all meridians and along the standard parallel or parallels.

Uses

The Equidistant Conic projection is used in atlases for portraying mid-latitude areas. It is good for representing regions with a few degrees of latitude lying on one side of the Equator. It was used in the former Soviet Union for mapping the entire country (Environmental Systems Research Institute, 1992).


Define the origin of the projection in both spherical and rectangular coordinates.

Longitude of central meridian

Latitude of origin of projection

Enter values for the longitude of the desired central meridian and the latitude of the origin of projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the intersection of the central meridian and the latitude of the origin of projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

One or two standard parallels?

Latitude of standard parallel

Enter one or two values for the desired control line(s) of the projection, i.e., the standard parallel(s). Note that if two standard parallels are used, the first is the southernmost.

Figure 101: Equidistant Conic Projection

Source: Snyder and Voxland, 1989


Equidistant Cylindrical

The Equidistant Cylindrical projection is similar to the Equirectangular projection.

For information, see Equirectangular (Plate Carrée) on page 336.

Prompts The following prompts display in the Projection Chooser if Equidistant Cylindrical is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Longitude of standard parallel

Latitude of true scale

Enter a value for longitude of the standard parallel and the latitude of true scale.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.


Equirectangular (Plate Carrée)

Also called Simple Cylindrical, Equirectangular is composed of equally spaced, parallel meridians and latitude lines that cross at right angles on a rectangular map. Each rectangle formed by the grid is equal in area, shape, and size.

Equirectangular is neither conformal nor equal-area, but it contains less distortion than the Mercator in polar regions. Scale is true on all meridians and on the central parallel. Directions due north, south, east, and west are true, but all other directions are distorted. The Equator is the standard parallel, true to scale and free of distortion. However, this projection may be centered anywhere.

This projection is valuable for its ease of computer plotting. It is useful for mapping small areas, such as city maps, because of its simplicity. The USGS uses Equirectangular for index maps of the conterminous US with insets of Alaska, Hawaii, and various islands. However, neither scale nor projection is marked on these maps, to avoid implying that they are suitable for normal geographic information.
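Its ease of plotting follows from the simplicity of the forward equations for the spherical case (after Snyder, 1987): x = R(lon - lon0)cos(lat_ts) and y = R(lat). The sketch below is an illustration of these equations, not ERDAS IMAGINE code:

import math

def equirectangular(lon_deg, lat_deg, lon0_deg=0.0, lat_ts_deg=0.0, radius=6371000.0):
    """Forward Equirectangular mapping on a sphere.

    lat_ts_deg is the latitude of true scale; 0 gives the Plate Carree case.
    """
    x = radius * math.radians(lon_deg - lon0_deg) * math.cos(math.radians(lat_ts_deg))
    y = radius * math.radians(lat_deg)
    return x, y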

Prompts The following prompts display in the Projection Chooser if Equirectangular is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

Construction Cylinder

Property Compromise

Meridians All meridians are straight lines.

Parallels All parallels are straight lines.

Graticule spacing

Equally spaced parallel meridians and latitude lines cross at right angles.

Linear scale The scale is correct along all meridians and along the standard parallels (Environmental Systems Research Institute, 1992).

Uses

Best used for city maps, or other small areas with map scales small enough to reduce the obvious distortion. Used for simple portrayals of the world or regions with minimal geographic data, such as index maps (Environmental Systems Research Institute, 1992).


The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian

Latitude of true scale

Enter a value for longitude of the desired central meridian to center the projection and the latitude of true scale.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 102: Equirectangular Projection

Source: Snyder and Voxland, 1989


Gall Stereographic The Gall Stereographic projection was created in 1855. The two standard parallels are located at 45° N and 45° S. This projection is used for world maps.

Source: Environmental Systems Research Institute, 1997

Prompts The following prompts display in the Projection Chooser once Gall Stereographic is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian

Enter a value for longitude of the desired central meridian.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Construction Cylinder

Property Compromise

Meridians Meridians are straight lines that are equally spaced.

Parallels Parallels are straight lines whose spacing increases with distance from the Equator.

Graticule spacing

All meridians and parallels are linear.

Linear scale

“Scale is true in all directions along latitudes 45° N and S. Scale is constant along parallels and is symmetrical around the Equator. Distances are compressed between latitudes 45° N and S, and expanded beyond them” (Environmental Systems Research Institute, 1997).

Uses Use for world maps only.


Gauss Kruger The Gauss Kruger projection is the same as the Transverse Mercator projection, with the exception that Gauss Kruger uses a fixed scale factor of 1. Gauss Kruger is available only in ellipsoidal form.

Many countries, such as China and Germany, use Gauss Kruger in 3-degree zones instead of 6-degree zones for UTM.

For more information, see Transverse Mercator on page 412.

Prompts The following prompts display in the Projection Chooser once Gauss Kruger is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

Scale factor

Designate the desired scale factor. This parameter is used to modify scale distortion. A value of one indicates true scale only along the central meridian. It may be desirable to have true scale along two lines equidistant from and parallel to the central meridian, or to lessen scale distortion away from the central meridian. A factor of less than, but close to one is often used.

Longitude of central meridian

Enter a value for the longitude of the desired central meridian to center the projection.

Latitude of origin of projection

Enter the value for the latitude of origin of projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.


General Vertical Near-side Perspective

General Vertical Near-side Perspective presents a picture of the Earth as if a photograph were taken at some distance less than infinity. The map user simply identifies area of coverage, distance of view, and angle of view. It is a variation of the General Perspective projection in which the “camera” precisely faces the center of the Earth.

Central meridian and a particular parallel (if shown) are straight lines. Other meridians and parallels are usually arcs of circles or ellipses, but some may be parabolas or hyperbolas. Like all perspective projections, General Vertical Near-side Perspective cannot illustrate the entire globe on one map—it can represent only part of one hemisphere.

Construction Plane

Property Compromise

Meridians

The central meridian is a straight line in all aspects. In the polar aspect all meridians are straight. In the equatorial aspect the Equator is straight (Environmental Systems Research Institute, 1992).

Parallels Parallels on vertical polar aspects are concentric circles. Nearly all other parallels are elliptical arcs, except that certain angles of tilt may cause some parallels to be shown as parabolas or hyperbolas.

Graticule spacing

Polar aspect: parallels are concentric circles that are not evenly spaced. Meridians are evenly spaced and spacing increases from the center of the projection.

Equatorial and oblique aspects: parallels are elliptical arcs that are not evenly spaced. Meridians are elliptical arcs that are not evenly spaced, except for the central meridian, which is a straight line.

Linear scale

Radial scale decreases from true scale at the center to zero on the projection edge. The scale perpendicular to the radii decreases, but not as rapidly (Environmental Systems Research Institute, 1992).

Uses

Often used to show the Earth or other planets and satellites as seen from space. Used as an aesthetic presentation, rather than for technical applications (Environmental Systems Research Institute, 1992).


Prompts The following prompts display in the Projection Chooser if General Vertical Near-side Perspective is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Height of perspective point

Enter a value for the desired height of the perspective point above the sphere in the same units as the radius. Then, define the center of the map projection in both spherical and rectangular coordinates.

Longitude of center of projection

Latitude of center of projection

Enter values for the longitude and latitude of the desired center of the projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.
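For readers who want to experiment with this projection outside of ERDAS IMAGINE, PROJ implements it as nsper, with the height of the perspective point supplied through the +h parameter. A minimal sketch, assuming the pyproj/PROJ library; the center and height values are illustrative:

```python
# Minimal sketch, assuming pyproj/PROJ: General Vertical Near-side
# Perspective is PROJ's "nsper"; +h is the height of the perspective
# point above the sphere, in meters. All values are illustrative.
from pyproj import Proj

nsper = Proj("+proj=nsper +h=35785831 +lat_0=0 +lon_0=20 +x_0=0 +y_0=0")
x, y = nsper(25.0, 5.0)   # forward projection of (lon, lat)
```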


Geographic (Lat/Lon)

The Geographic is a spherical coordinate system composed of parallels of latitude (Lat) and meridians of longitude (Lon) (Figure 103). Both divide the circumference of the Earth into 360 degrees. Degrees are further subdivided into minutes and seconds (60 sec = 1 minute, 60 min = 1 degree).

Because the Earth spins on an axis between the North and South Poles, concentric, parallel circles can be constructed, with a reference line exactly at the north-south center, termed the Equator. The series of circles north of the Equator is termed north latitudes and runs from 0° latitude (the Equator) to 90° North latitude (the North Pole), and similarly southward. Position in an east-west direction is determined from lines of longitude. These lines are not parallel; they converge at the poles. However, they intersect lines of latitude perpendicularly.

Unlike the Equator in the latitude system, there is no natural zero meridian. In 1884, it was agreed that the meridian of the Royal Observatory in Greenwich, England, would be the prime meridian. Thus, the origin of the geographic coordinate system is the intersection of the Equator and the prime meridian. Note that the 180° meridian is the international date line.

If you choose Geographic from the Projection Chooser, the following prompts display:

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Note that in responding to prompts for other projections, values for longitude are negative west of Greenwich and values for latitude are negative south of the Equator.
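The subdivision of degrees into minutes and seconds, together with the sign convention above, can be captured in a small conversion routine. A minimal sketch in Python; the helper name dms_to_degrees is hypothetical:

```python
# Minimal sketch: convert degrees/minutes/seconds to decimal degrees,
# using the convention that values are negative west of Greenwich and
# negative south of the Equator. The helper name is hypothetical.
def dms_to_degrees(degrees, minutes, seconds, direction):
    dd = degrees + minutes / 60.0 + seconds / 3600.0   # 60 min = 1 deg, 60 sec = 1 min
    return -dd if direction in ("W", "S") else dd

print(dms_to_degrees(33, 30, 0, "S"))   # 33 deg 30 min South -> -33.5
```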


Figure 103: Geographic Projection

Figure 103 shows the graticule of meridians and parallels on the global surface.



Gnomonic Gnomonic is a perspective projection that projects onto a tangent plane from a position in the center of the Earth. Because of the close perspective, this projection is limited to less than a hemisphere. However, it is the only projection which shows all great circles as straight lines. With a polar aspect, the latitude intervals increase rapidly from the center outwards.

With an equatorial or oblique aspect, the Equator is straight. Meridians are straight and parallel, while intervals between parallels increase rapidly from the center and parallels are convex to the Equator. Because great circles are straight, this projection is useful for air and sea navigation. Rhumb lines are curved, the opposite of the Mercator projection, on which rhumb lines are straight.

Construction Plane

Property Compromise

Meridians

Polar aspect: the meridians are straight lines radiating from the point of tangency.

Oblique and equatorial aspects: the meridians are straight lines.

Parallels

Polar aspect: the parallels are concentric circles.

Oblique and equatorial aspects: parallels are ellipses, parabolas, or hyperbolas concave toward the poles (except for the Equator, which is straight).

Graticule spacing

Polar aspect: the meridian spacing is equal and increases away from the pole. The parallel spacing increases rapidly from the pole.

Oblique and equatorial aspects: the graticule spacing increases very rapidly away from the center of the projection.

Linear scale Linear scale and angular and areal deformation are extreme, rapidly increasing away from the center of the projection.

Uses The Gnomonic projection is used in seismic work because seismic waves travel approximately along great circles. It is used with the Mercator projection for navigation.


Prompts The following prompts display in the Projection Chooser if Gnomonic is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Define the center of the map projection in both spherical and rectangular coordinates.

Longitude of center of projection

Latitude of center of projection

Enter values for the longitude and latitude of the desired center of the projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.
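Because every great circle maps to a straight line, a straight segment drawn between two projected points traces the great-circle route between them. A minimal sketch, assuming the pyproj/PROJ library (projection name gnom); all coordinates are illustrative:

```python
# Minimal sketch, assuming pyproj/PROJ: on a Gnomonic ("gnom") chart,
# the straight segment between two projected points follows the great
# circle. The center and endpoint coordinates are illustrative.
from pyproj import Proj

gnom = Proj("+proj=gnom +lat_0=50 +lon_0=-40 +x_0=0 +y_0=0")
x1, y1 = gnom(-73.8, 40.6)   # near New York (lon, lat)
x2, y2 = gnom(-0.5, 51.5)    # near London

# Points sampled along the straight chart line, inverted back to
# lon/lat, lie on the great-circle route.
for t in (0.0, 0.5, 1.0):
    lon, lat = gnom(x1 + t * (x2 - x1), y1 + t * (y2 - y1), inverse=True)
    print(round(lon, 2), round(lat, 2))
```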


Hammer

The Hammer projection is useful for mapping the world. In particular, it is suited to thematic maps of the world, such as land cover.

Source: Environmental Systems Research Institute, 1997

Prompts The following prompts display in the Projection Chooser once Hammer is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian

Enter a value for the longitude of the desired central meridian.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Construction Modified Azimuth

Property Equal-area

Meridians The central meridian is a straight line half as long as the Equator. Other meridians are unequally spaced curves, concave toward the central meridian.

Parallels With the exception of the Equator, all parallels are complex curves that are concave toward the nearest pole.

Graticule spacing

Only the Equator and central meridian are straight lines.

Linear scale Scale decreases along the Equator and the central meridian as distance from the origin increases.

Uses Use for world maps only.


Figure 104: Hammer Projection

Source: Snyder and Voxland, 1989


Interrupted Goode Homolosine

The Interrupted Goode Homolosine projection is an equal-area pseudocylindrical projection developed by J. P. Goode in 1923. The projection is interrupted to reduce distortion of major land areas. It is a combination of the Mollweide projection (also called Homolographic), used for higher latitudes, and the Sinusoidal projection (Goode 1925), used for lower latitudes. The two projections join at 40° 44’ 11.8” North and South, where the linear scales of the projections match. This projection is suitable for thematic or distribution maps of the entire world, and has been chosen for two USGS projects: the Global Advanced Very High Resolution Radiometer (AVHRR) 1-km data set project and the AVHRR Pathfinder project.

Source: United States Geological Survey (USGS), 2009.

Source: Snyder and Voxland, 1989

Prompts The following prompts display in the Projection Chooser once Interrupted Goode Homolosine is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

Construction Pseudocylindrical

Property Equal-area

Meridians “In the interrupted form, there are six central meridians, each a straight line 0.22 as long as the Equator but not crossing the Equator. Other meridians are equally spaced sinusoidal curves between latitudes 40° 44’ N and S, and elliptical arcs elsewhere, all concave toward the central meridian. There is a slight bend in meridians at the 40° 44’ latitudes” (Snyder and Voxland, 1989).

Parallels Parallels are straight parallel lines, perpendicular to the central meridians. Between latitudes 40° 44’ N and S, they are equally spaced; they gradually get closer together toward the poles.

Graticule spacing See Meridians and Parallels. Poles are points. Symmetry is nonexistent in the interrupted form.

Linear scale Scale is true along each latitude between 40° 44’ N and S, and along the central meridian within the same latitude range. Scale varies at higher latitudes.

Uses This projection is useful for world maps.
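PROJ implements this projection under the name igh, which is convenient when matching global data sets such as the USGS AVHRR products mentioned above. A minimal sketch, assuming the pyproj/PROJ library; the sphere radius is an illustrative assumption:

```python
# Minimal sketch, assuming pyproj/PROJ: the Interrupted Goode Homolosine
# is PROJ's "igh". The sphere radius shown is an illustrative assumption.
from pyproj import Proj

igh = Proj("+proj=igh +lon_0=0 +x_0=0 +y_0=0 +R=6370997")
x, y = igh(10.0, 45.0)   # forward projection of (lon, lat)
```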


Figure 105: Interrupted Goode Homolosine Projection

Source: Snyder and Voxland, 1989


Interrupted Mollweide

The Interrupted Mollweide projection reduces the distortion of the Mollweide projection. It is interrupted into six regions with fixed parameters for each region.

Source: Snyder and Voxland, 1989

For more information, see Mollweide on page 372.

Prompts The following prompts display in the Projection Chooser once Interrupted Mollweide is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

Figure 106: Interrupted Mollweide Projection

Source: Snyder and Voxland, 1989


Krovak The Krovak Oblique Conformal Conic projection (EPSG code 9819) is an oblique aspect of the Lambert Conformal Conic projection. This projection is used in the Czech Republic and Slovakia under the name Krovak projection. The projection method is a conic projection based on one standard parallel, and the lines of contact are two pseudo-standard parallels.

Source: Environmental Systems Research Institute, 2000, and Geotoolkit.org, 2009b

Prompts The following prompts display in the Projection Chooser once Krovak is selected. Enter the values for the following prompts:

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Scale factor

Designate the desired scale factor. This parameter is used to modify scale distortion. A value of one indicates true scale only along the central meridian. It may be desirable to have true scale along two lines equidistant from and parallel to the central meridian, or to lessen scale distortion away from the central meridian. A factor of slightly less than one is often used.

Azimuth

Enter the value for the azimuth of the center line passing through the center of the projection.

For a description of azimuth, see Projections.

Longitude of center of projection

Latitude of center of projection

Enter the values for the longitude of the center of the projection and the latitude of the center of the projection.

False easting

False northing

Page 382: ERDAS Field Guide

352 Map Projections

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

XY plane rotation

Enter the value to define the orientation of the projection along with the X scale and Y scale prompts.

Pseudo standard parallel 1

Enter the value for the latitude of the pseudo standard parallel.

X scale

Y scale

Enter the values to orient the X axis and the Y axis.
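In practice, Krovak coordinates are usually obtained from a published EPSG definition rather than by entering each parameter by hand. A minimal sketch, assuming the pyproj/PROJ library and the EPSG:5514 (S-JTSK / Krovak East North) definition used in the Czech Republic and Slovakia:

```python
# Minimal sketch, assuming pyproj/PROJ: transform WGS84 longitude/latitude
# to the published Krovak definition EPSG:5514 (S-JTSK / Krovak East North).
from pyproj import Transformer

to_krovak = Transformer.from_crs("EPSG:4326", "EPSG:5514", always_xy=True)
x, y = to_krovak.transform(14.42, 50.09)   # near Prague (lon, lat)
```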


Lambert Azimuthal Equal Area

The Lambert Azimuthal Equal Area projection is mathematically based on a plane tangent to the Earth. It is the only projection that can accurately represent both area and true direction from the center of the projection (Figure 107 on page 355). This central point can be located anywhere. Concentric circles are closer together toward the edge of the map, and the scale distorts accordingly. This projection is well-suited to square or round land masses. This projection generally represents only one hemisphere.

In the polar aspect, latitude rings decrease their intervals from the center outwards. In the equatorial aspect, parallels are curves flattened in the middle. Meridians are also curved, except for the central meridian, and spacing decreases toward the edges.

Construction Plane

Property Equal-area

Meridians

Polar aspect: the meridians are straight lines radiating from the point of tangency. Oblique and equatorial aspects: meridians are complex curves concave toward a straight central meridian, except the outer meridian of a hemisphere, which is a circle.

Parallels Polar aspect: parallels are concentric circles. Oblique and equatorial aspects: the parallels are complex curves. The Equator on the equatorial aspect is a straight line.

Graticule spacing

Polar aspect: the meridian spacing is equal and increases away from the pole; the parallel spacing is unequal and decreases toward the periphery of the projection. The graticule spacing, in all aspects, retains the property of equivalence of area.

Linear scale

Linear scale is better than in most azimuthal projections, but not as good as in the equidistant projection. Angular deformation increases toward the periphery of the projection. Scale decreases radially toward the periphery of the map projection. Scale increases perpendicular to the radii toward the periphery.

Uses The polar aspect is used by the USGS in the National Atlas. The polar, oblique, and equatorial aspects are used by the USGS for the Circum-Pacific Map.


Prompts The following prompts display in the Projection Chooser if Lambert Azimuthal Equal Area is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Define the center of the map projection in both spherical and rectangular coordinates.

Longitude of center of projection

Latitude of center of projection

Enter values for the longitude and latitude of the desired center of the projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Calculation Method:

Unspecified

Sphere Formulas

Ellipsoid Formulas

Select a method by which the transformation is calculated:

• Sphere Formulas - Calculation is based on the model of the Earth as a sphere. This option is less accurate since the Earth is an ellipsoid, not a sphere; however, it may be satisfactory for some maps.

• Ellipsoid Formulas - Calculation is based on the model of the Earth as an ellipsoid. This option provides the more accurate reprojection within IMAGINE.

• Unspecified - Model used is unknown or not specified.

Refer to Spheroids for more information.
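In PROJ terms, the Sphere Formulas and Ellipsoid Formulas options correspond to defining the projection on a sphere radius versus a reference ellipsoid. A minimal sketch, assuming the pyproj/PROJ library; the center and radius values are illustrative:

```python
# Minimal sketch, assuming pyproj/PROJ: the sphere-versus-ellipsoid choice
# for Lambert Azimuthal Equal Area ("laea"). Values are illustrative.
from pyproj import Proj

laea_sphere = Proj("+proj=laea +lat_0=45 +lon_0=-100 +R=6370997")    # sphere formulas
laea_ellips = Proj("+proj=laea +lat_0=45 +lon_0=-100 +ellps=GRS80")  # ellipsoid formulas

print(laea_sphere(-95.0, 47.0))   # the two models give slightly
print(laea_ellips(-95.0, 47.0))   # different projected coordinates
```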


In Figure 107 on page 355, three views of the Lambert Azimuthal Equal Area projection are shown: A) Polar aspect, showing one hemisphere; B) Equatorial aspect, frequently used in old atlases for maps of the eastern and western hemispheres; C) Oblique aspect, centered on 40°N.

Figure 107: Lambert Azimuthal Equal Area Projection



Lambert Conformal Conic

This projection is very similar to Albers Conical Equal Area, described previously. It is mathematically based on a cone that is tangent at one parallel or, more often, that is conceptually secant on two parallels (Figure 108 on page 358). Areal distortion is minimal, but increases away from the standard parallels. North or South Pole is represented by a point—the other pole cannot be shown. Great circle lines are approximately straight. It retains its properties at various scales, and sheets can be joined along their edges. This projection, like Albers, is most valuable in middle latitudes, especially in a country sprawling east to west like the US. The standard parallels for the US are 33° and 45°N.

The major property of this projection is its conformality. At all coordinates, meridians and parallels cross at right angles. The correct angles produce correct shapes. Also, great circles are approximately straight. The conformal property of Lambert Conformal Conic and the straightness of great circles make it valuable for landmark flying.

Construction Cone

Property Conformal

Meridians Meridians are straight lines converging at a pole.

Parallels Parallels are arcs of concentric circles concave toward a pole and centered at a pole.

Graticule spacing

Meridian spacing is true on the standard parallels and decreases toward the pole. Parallel spacing increases away from the standard parallels and decreases between them. Meridians and parallels intersect each other at right angles. The graticule spacing retains the property of conformality. The graticule is symmetrical.

Linear scale Linear scale is true on standard parallels. Maximum scale error is 2.5% on a map of the United States (48 states) with standard parallels at 33°N and 45°N.

Uses

Used for large countries in the mid-latitudes having an east-west orientation. The United States (50 states) Base Map uses standard parallels at 37°N and 65°N. Some of the National Topographic Map Series 7.5-minute and 15-minute quadrangles, and the State Base Map series are constructed on this projection. The latter series uses standard parallels of 33°N and 45°N. Aeronautical charts for Alaska use standard parallels at 55°N and 65°N. The National Atlas of Canada uses standard parallels at 49°N and 77°N.


Lambert Conformal Conic is the State Plane coordinate system projection for states of predominant east-west expanse. Since 1962, Lambert Conformal Conic has been used for the International Map of the World between 84°N and 80°S. In comparison with Albers Conical Equal Area, Lambert Conformal Conic possesses true shape of small areas, whereas Albers possesses equal-area. Unlike Albers, parallels of Lambert Conformal Conic are spaced at increasing intervals the farther north or south they are from the standard parallels.

Prompts The following prompts display in the Projection Chooser if Lambert Conformal Conic is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Latitude of 1st standard parallel

Latitude of 2nd standard parallel

Enter two values for the desired control lines of the projection, that is, the standard parallels. Note that the first standard parallel is the southernmost. Then, define the origin of the map projection in both spherical and rectangular coordinates.

Longitude of central meridian

Latitude of origin of projection

If you have only one standard parallel, enter that same value in all three latitude fields.

Enter values for longitude of the desired central meridian and latitude of the origin of projection.

False easting at central meridian

False northing at origin

Enter values of false easting and false northing corresponding to the intersection of the central meridian, and the latitude of the origin of projection. These values must be in meters. It is often convenient to make them large enough to ensure that there are no negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.
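As an illustration of these prompts, the following minimal sketch, assuming the pyproj/PROJ library, builds a Lambert Conformal Conic with the United States standard parallels of 33°N and 45°N cited above; the origin latitude, central meridian, and false easting/northing are illustrative assumptions:

```python
# Minimal sketch, assuming pyproj/PROJ: Lambert Conformal Conic ("lcc")
# with the US standard parallels 33 N and 45 N. The origin latitude,
# central meridian, and false easting/northing are illustrative.
from pyproj import Proj

lcc = Proj(
    "+proj=lcc +lat_1=33 +lat_2=45"   # 1st (southernmost) and 2nd standard parallels
    " +lat_0=23 +lon_0=-96"           # latitude of origin, central meridian
    " +x_0=0 +y_0=0 +ellps=GRS80"
)
x, y = lcc(-90.0, 35.0)   # forward: (lon, lat) -> meters
```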


Figure 108: Lambert Conformal Conic Projection

In the figure above, the standard parallels are 20°N and 60°N. Note the change in spacing of the parallels.


Lambert Conic Conformal (1 Standard Parallel)

This projection is very similar to Lambert Conformal Conic, described previously. However, rather than specifying two standard parallels, one standard parallel and a scale factor are specified.

Conical projections with one standard parallel are normally considered to maintain the nominal map scale along the parallel of latitude that is the line of contact between the imagined cone and the ellipsoid. For a one standard parallel Lambert, the natural origin of the projected coordinate system is the intersection of the standard parallel with the longitude of origin (central meridian). To maintain the conformal property, the spacing of the parallels is variable and increases with increasing distance from the standard parallel, while the meridians are all straight lines radiating from a point on the prolongation of the ellipsoid’s minor axis.

Sometimes it is desirable to limit the maximum positive scale distortion by distributing it more evenly over the map area extent. This may be achieved by introducing a scale factor on the standard parallel of slightly less than unity, thus making it unity on two parallels on either side of it. Some former French territories were mapped using this method. This has the same effect as choosing two specific standard parallels in the first place; the projection is then a Lambert Conformal Conic with two standard parallels.

For the one standard parallel Lambert, the latitude of natural origin is the standard parallel, and the longitude of natural origin is the central meridian. The natural origin of the projected coordinate system is where the central meridian cuts the standard parallel. Any number of Lambert projection zones may be formed according to which standard parallel or parallels are chosen, as exemplified by the United States State Plane coordinate zones. In the one standard parallel case, the standard parallel is normally chosen to approximately bisect the latitudinal extent of the country or area.

Construction Cone

Property Conformal

Meridians Meridians are straight lines converging at a pole.

Parallels Parallels are arcs of concentric circles concave toward a pole and centered at a pole.

Graticule spacing

Meridian spacing is true on the standard parallels and decreases toward the pole. Parallel spacing increases away from the standard parallels and decreases between them. Meridians and parallels intersect each other at right angles. The graticule spacing retains the property of conformality. The graticule is symmetrical.


Prompts The following prompts display in the Projection Chooser if Lambert Conic Conformal (1SP) is selected. Respond to the prompts as described. Select the spheroid and datum to use:

Spheroid Name

Datum Name

The list of available spheroids is located in Table 44 on page 243.

Enter the value for the desired control line of the projection, that is, the standard parallel:

Latitude of natural origin

Define the longitude of the natural origin, that is, the central meridian:

Longitude of natural origin

Enter the scale factor at the natural origin on the standard parallel:

Scale factor at natural origin

Enter constants of false easting and false northing corresponding to the intersection of the central meridian and the standard parallel. These values must be in meters. It is often convenient to make them large enough to ensure that there are no negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

False easting

False northing

Sources: Petrotechnical Open Software Corporation, Epicentre v2.2 Usage Guide (see www.posc.org/Epicentre.2_2), and http://www.remotesensing.org/geotiff/geotiff.html

Linear scale Linear scale is true on standard parallels.

Uses Used for large countries in the mid-latitudes having an east-west orientation.


Loximuthal The distortion of the Loximuthal projection is average to pronounced. Distortion is not present at the central latitude on the central meridian. What is most noteworthy about the Loximuthal projection is its loxodromes, which are “straight, true to scale, and correct in azimuth from the center” (Snyder and Voxland, 1989).

Source: Snyder and Voxland, 1989

Prompts The following prompts display in the Projection Chooser if Loximuthal is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian

Latitude of central parallel

Construction Pseudocylindrical

Property Neither conformal nor equal-area

Meridians The “central meridian is a straight line generally over half as long as the Equator, depending on the central latitude. If the central latitude is the Equator, the ratio is 0.5; if it is 40° N or S, the ratio is 0.65. Other meridians are equally spaced complex curves intersecting at the poles and concave toward the central meridian” (Snyder and Voxland, 1989).

Parallels Parallels are straight parallel lines that are equally spaced. They are perpendicular to the central meridian.

Graticule spacing See Meridians and Parallels. Poles are points. Symmetry exists about the central meridian. Symmetry also exists at the Equator if it is designated as the central latitude.

Linear scale Scale is true along the central meridian. Scale is also constant along any given latitude, but different for the latitude of opposite sign.

Uses Used for world maps where loxodromes (rhumb lines) are emphasized.


Enter values for the longitude of the desired central meridian, which centers the projection, and for the latitude of the central parallel.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 109: Loximuthal Projection

Source: Snyder and Voxland, 1989


Mercator This famous cylindrical projection was originally designed by Flemish map maker Gerhardus Mercator in 1569 to aid navigation (Figure 110 on page 365). Meridians and parallels are straight lines and cross at 90° angles. Angular relationships are preserved. However, to preserve conformality, parallels are placed increasingly farther apart with increasing distance from the Equator. Due to extreme scale distortion in high latitudes, the projection is rarely extended beyond 80°N or S unless the latitude of true scale is other than the Equator. Distance scales are usually furnished for several latitudes.

This projection can be thought of as being mathematically based on a cylinder tangent at the Equator. Any straight line is a constant-azimuth (rhumb) line. Areal enlargement is extreme away from the Equator; poles cannot be represented. Shape is true only within any small area. It is a reasonably accurate projection within a 15° band along the line of tangency.

Rhumb lines, which show constant direction, are straight. For this reason, a Mercator map was very valuable to sea navigators. However, rhumb lines are not the shortest path—great circles are the shortest path. Most great circles appear as long arcs when drawn on a Mercator map.

Construction Cylinder

Property Conformal

Meridians Meridians are straight and parallel.

Parallels Parallels are straight and parallel.

Graticule spacing

Meridian spacing is equal and the parallel spacing increases away from the Equator. The graticule spacing retains the property of conformality. The graticule is symmetrical. Meridians intersect parallels at right angles.

Linear scale

Linear scale is true along the Equator only (line of tangency), or along two parallels equidistant from the Equator (the secant form). Scale can be determined by measuring one degree of latitude, which equals 60 nautical miles, 69 statute miles, or 111 kilometers.

Uses

An excellent projection for equatorial regions. Otherwise, the Mercator is a special-purpose map projection best suited for navigation. Secant constructions are used for large-scale coastal charts. The use of the Mercator map projection as the base for nautical charts is universal. Examples are the charts published by the National Ocean Survey, US Dept. of Commerce.


Prompts The following prompts display in the Projection Chooser if Mercator is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Define the origin of the map projection in both spherical and rectangular coordinates.

Longitude of central meridian

Latitude of true scale

Enter values for longitude of the desired central meridian and latitude at which true scale is desired. Selection of a parameter other than the Equator can be useful for making maps in extreme north or south latitudes.

False easting at central meridian

False northing at origin

Enter values of false easting and false northing corresponding to the intersection of the central meridian and the latitude of true scale. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.
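These prompts map directly onto a PROJ Mercator definition, where the latitude of true scale is the +lat_ts parameter. A minimal sketch, assuming the pyproj/PROJ library; all parameter values are illustrative:

```python
# Minimal sketch, assuming pyproj/PROJ: Mercator ("merc") with a
# latitude of true scale (+lat_ts). All values are illustrative.
from pyproj import Proj

merc = Proj("+proj=merc +lat_ts=20 +lon_0=0 +x_0=0 +y_0=0 +ellps=WGS84")
x, y = merc(10.0, 55.0)              # forward: (lon, lat) -> meters
lon, lat = merc(x, y, inverse=True)  # inverse
```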


Figure 110: Mercator Projection

In Figure 110 on page 365, all angles are shown correctly; therefore, small shapes are true (i.e., the map is conformal). Rhumb lines are straight, which makes the projection useful for navigation.


Miller Cylindrical Miller Cylindrical is a modification of the Mercator projection (Figure 111 on page 367). It is similar to the Mercator from the Equator to 45°, but latitude line intervals are modified so that the distance between them increases less rapidly. Thus, beyond 45°, Miller Cylindrical lessens the extreme exaggeration of the Mercator. Miller Cylindrical also includes the poles as straight lines whereas the Mercator does not.

Meridians and parallels are straight lines intersecting at right angles. Meridians are equidistant, while parallels are spaced farther apart the farther they are from the Equator. Miller Cylindrical is not equal-area, equidistant, or conformal. Miller Cylindrical is used for world maps and in several atlases.

Prompts The following prompts display in the Projection Chooser if Miller Cylindrical is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian

Construction Cylinder

Property Compromise

Meridians All meridians are straight lines.

Parallels All parallels are straight lines.

Graticule spacing

Meridians are parallel and equally spaced, the lines of latitude are parallel, and the distance between them increases toward the poles. Both poles are represented as straight lines. Meridians and parallels intersect at right angles (Environmental Systems Research Institute, 1992).

Linear scale While the standard parallels, or lines that are true to scale and free of distortion, are at latitudes 45°N and S, only the Equator is standard.

Uses Useful for world maps.


Enter a value for the longitude of the desired central meridian to center the projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 111: Miller Cylindrical Projection

This projection resembles the Mercator, but has less distortion in polar regions. Miller Cylindrical is neither conformal nor equal-area.


MGRS The United States Military Grid Reference System (MGRS) is designed for use with the UTM (Universal Transverse Mercator) and UPS (Universal Polar Stereographic) grid coordinate systems. The MGRS is an alphanumeric version of a numerical UTM grid coordinate.

The world is generally divided into 6° by 8° geographic areas, each of which is given a unique identification, called the Grid Zone Designation. These areas are covered by a pattern of 100,000-meter squares. Each square is identified by two letters called the 100,000-meter square identification. A reference keyed to a gridded map of any scale is made by giving the 100,000-meter square identification together with the numerical location.

The grid zones are divided into a pattern of 100,000-meter grid squares forming a matrix of rows and columns. Each row and each column is sequentially lettered such that two letters provide a unique identification within approximately 9° for each 100,000-meter grid square. For the portion of the world where the UTM grid is specified (80° South to 84° North), the UTM grid zone number is the first element of a Military Grid reference. There are additional designations that express additional refinements.

Source: National Geospatial-Intelligence Agency (NGA), 2010a.

Figure 112: MGRS Grid Zones

An example of an MGRS designation is:

15SWC8081751205

The components of the MGRS values are:


Grid Zone Designation

• First two characters represent the 6° wide UTM zone.

• The third character is a letter designating a band of latitude. The 20 bands begin at 80° South and proceed northward. The letters are C through X, omitting I and O.

100,000-Meter Grid Squares

• The fourth and fifth characters designate one of the 100,000-meter grid squares within the grid zone.

Easting and Northing Values

The remaining characters represent the easting and northing values within the 100,000-meter grid square. The number of characters determines the amount of precision.

• 1 character = 10 km precision

• 2 characters = 1 km precision

• 3 characters = 100 meters precision

• 4 characters = 10 meters precision

• 5 characters = 1 meter precision

Source: National Geospatial-Intelligence Agency (NGA), 2010b.

For a complete description, see the National Geospatial Intelligence Agency website http://earth-info.nga.mil/GandG/publications. The publication is named DMA Technical Manual 8358.1.
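The component layout described above can be decoded with simple string slicing, as the following minimal sketch shows. The helper name parse_mgrs is hypothetical, and production work should rely on a vetted MGRS library rather than this sketch:

```python
# Minimal sketch: decode an MGRS reference by string slicing, following
# the component layout described above. parse_mgrs is a hypothetical name.
def parse_mgrs(ref):
    zone = ref[0:2]                  # 6-degree-wide UTM zone
    band = ref[2]                    # latitude band letter (C-X, no I or O)
    square = ref[3:5]                # 100,000-meter grid square letters
    digits = ref[5:]                 # equal counts of easting/northing digits
    half = len(digits) // 2
    precision_m = 10 ** (5 - half)   # 5 digits = 1 m ... 1 digit = 10 km
    return zone, band, square, digits[:half], digits[half:], precision_m

print(parse_mgrs("15SWC8081751205"))
# ('15', 'S', 'WC', '80817', '51205', 1)
```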


Modified Transverse Mercator

In 1972, the USGS devised a projection specifically for the revision of a 1954 map of Alaska which, like its predecessors, was based on the Polyconic projection. This projection was drawn to a scale of 1:2,000,000 and published at 1:2,500,000 (map “E”) and 1:1,584,000 (map “B”). Graphically prepared by adapting coordinates for the UTM projection, it is identified as the Modified Transverse Mercator projection. It resembles the Transverse Mercator in a very limited manner and cannot be considered a cylindrical projection. It resembles the Equidistant Conic projection for the ellipsoid in actual construction. The projection was also used in 1974 for a base map of the Aleutian-Bering Sea Region published at 1:2,500,000 scale.

It is found to be most closely equivalent to the Equidistant Conic for the Clarke 1866 ellipsoid, with the scale along the meridians reduced to 0.9992 of true scale, and the standard parallels at latitudes 66.09°N and 53.50°N.

Prompts The following prompts display in the Projection Chooser if Modified Transverse Mercator is selected. Respond to the prompts as described.

False easting

False northing

Construction Cone

Property Equidistant

Meridians On pre-1973 editions of the Alaska Map E, meridians are curved concave toward the center of the projection. On post-1973 editions, the meridians are straight.

Parallels Parallels are arcs concave to the pole.

Graticule spacing

Meridian spacing is approximately equal and decreases toward the pole. Parallels are approximately equally spaced. The graticule is symmetrical on post-1973 editions of the Alaska Map E.

Linear scale Linear scale is more nearly correct along the meridians than along the parallels.

Uses

USGS’s Alaska Map E at the scale of 1:2,500,000. The Bathymetric Maps Eastern Continental Margin U.S.A., published by the American Association of Petroleum Geologists, uses the straight meridians on its Modified Transverse Mercator and is more equivalent to the Equidistant Conic map projection.


Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.


Mollweide Carl B. Mollweide designed the projection in 1805. It is an equal-area pseudocylindrical projection. The Mollweide projection is used primarily for thematic maps of the world.

Source: Environmental Systems Research Institute, 1997

Prompts The following prompts display in the Projection Chooser once Mollweide is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian

Enter a value for the longitude of the desired central meridian.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Construction Pseudocylindrical

Property Equal-area

Meridians Meridians are elliptical arcs that are equally spaced. The exception is the central meridian, which is a straight line.

Parallels All parallels are straight lines.

Graticule spacing

The Equator and the central meridian are linear graticules.

Linear scale Scale is accurate along latitudes 40° 44’ N and S at the central meridian. Distortion becomes more pronounced farther from these lines and is severe at the extremes of the projection.

Uses Use for world maps only.


Figure 113: Mollweide Projection

Source: Snyder and Voxland, 1989


New Zealand Map Grid

This projection is used only for mapping New Zealand.

Source: Environmental Systems Research Institute, 1997

Prompts The following prompts display in the Projection Chooser once New Zealand Map Grid is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

The Spheroid Name defaults to International 1924. The Datum Name defaults to Geodetic Datum 1949. These fields are not editable.

You can define this projection either by specifying absolute easting and northing values or by specifying shifts relative to the easting and northing normally used for this projection.

If using the relative method, the following prompts are displayed (under most circumstances, no shift would be specified; i.e., the values would remain 0.0):

Specify absolute or relative easting/northing: Relative

Easting Shift (from 2510000.0): 0.000000 meters

Northing Shift (from 6023150.0): 0.000000 meters

If using the absolute method, the following prompts are displayed (the values shown below reflect the definition of EPSG Code 27200):

Specify absolute or relative easting/northing: Absolute

Longitude of origin of projection: 173:00:00.000000E

Latitude of origin of projection: 41:00:00.000000S

Easting: 2510000.000000 meters

Northing: 6023150.000000 meters

Construction Modified cylindrical

Property Conformal

Meridians N/A

Parallels N/A

Graticule spacing None

Linear scale Scale is within 0.02 percent of actual scale for the country of New Zealand.

Uses This projection is useful only for maps of New Zealand.
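Because the parameters are fixed, the projection is normally invoked through its published definition, EPSG:27200, as mentioned above. A minimal sketch, assuming the pyproj/PROJ library:

```python
# Minimal sketch, assuming pyproj/PROJ: transform WGS84 longitude/latitude
# into New Zealand Map Grid via its published definition, EPSG:27200.
from pyproj import Transformer

to_nzmg = Transformer.from_crs("EPSG:4326", "EPSG:27200", always_xy=True)
x, y = to_nzmg.transform(174.78, -41.29)   # near Wellington (lon, lat)
```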


Oblated Equal Area

Prompts The following prompts display in the Projection Chooser once Oblated Equal Area is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

Parameter M

Parameter N

Enter the oval shape parameters M and N of the Oblated Equal Area projection.

Longitude of center of projection

Latitude of center of projection

Enter values for the longitude and latitude of the center of the projection.

Rotation angle

Enter the rotation angle of the Oblated Equal Area oval.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.


Oblique Mercator (Hotine)

Oblique Mercator is a cylindrical, conformal projection that intersects the global surface along a great circle. It is equivalent to a Mercator projection that has been altered by rotating the cylinder so that the central line of the projection is a great circle path instead of the Equator. Shape is true only within any small area. Areal enlargement increases away from the line of tangency. Projection is reasonably accurate within a 15° band along the line of tangency.

The USGS uses the Hotine version of Oblique Mercator. The Hotine version is based on a study of conformal projections published by British geodesist Martin Hotine in 1946-47. Prior to the implementation of the Space Oblique Mercator, the Hotine version was used for mapping Landsat satellite imagery.

Prompts The following prompts display in the Projection Chooser if Oblique Mercator (Hotine) is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

Construction Cylinder

Property Conformal

Meridians Meridians are complex curves concave toward the line of tangency, except each 180th meridian is straight.

Parallels Parallels are complex curves concave toward the nearest pole.

Graticule spacing

Graticule spacing increases away from the line of tangency and retains the property of conformality.

Linear scale Linear scale is true along the line of tangency, or along two lines equidistant from and parallel to the line of tangency.

Uses

Useful for plotting linear configurations that are situated along a line oblique to the Earth’s Equator. Examples are: NASA Surveyor Satellite tracking charts, ERTS flight indexes, strip charts for navigation, and the National Geographic Society’s maps “West Indies,” “Countries of the Caribbean,” “Hawaii,” and “New Zealand.”


The list of available spheroids is located in Table 44 on page 243.

Scale factor at center

Designate the desired scale factor along the central line of the projection. This parameter may be used to modify scale distortion away from this central line. A value of 1.0 indicates true scale only along the central line. A value of slightly less than one is often used to lessen scale distortion away from the central line.

Latitude of point of origin

False easting

False northing

The center of the projection is defined by rectangular coordinates of false easting and false northing. The origin of rectangular coordinates on this projection occurs at the nearest intersection of the central line with the Earth’s Equator. To shift the origin to the intersection of the latitude of the origin entered above and the central line of the projection, compute coordinates of the latter point with zero false eastings and northings, reverse the signs of the coordinates obtained, and use these for false eastings and northings. These values must be in meters. It is often convenient to add additional values so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Do you want to enter either:

A) Azimuth East of North for central line and the longitude of the point of origin

B) The latitude and longitude of the first and second points defining the central line

These formats differ slightly in definition of the central line of the projection.

Format A For format A, the additional prompts are:

Azimuth east of north for central line

Longitude of point of origin

Format A defines the central line of the projection by the angle east of north to the desired great circle path and by the latitude and longitude of the point along the great circle path from which the angle is measured. Appropriate values should be entered.


Format B For format B, the additional prompts are:

Longitude of 1st point

Latitude of 1st point

Longitude of 2nd point

Latitude of 2nd point

Format B defines the central line of the projection by the latitude of a point on the central line which has the desired scale factor entered previously and by the longitude and latitude of two points along the desired great circle path. Appropriate values should be entered.
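PROJ's omerc projection accepts both styles of central-line definition, loosely corresponding to Formats A and B above: an azimuth at a point, or two points on the great circle. A minimal sketch, assuming the pyproj/PROJ library; every value shown is illustrative:

```python
# Minimal sketch, assuming pyproj/PROJ: the two central-line definitions
# of Oblique Mercator ("omerc"). All parameter values are illustrative.
from pyproj import Proj

# Loosely Format A: azimuth east of north (+alpha) at a point (+lat_0, +lonc)
fmt_a = Proj("+proj=omerc +lat_0=40 +lonc=-74 +alpha=30"
             " +k_0=0.9996 +x_0=0 +y_0=0")

# Loosely Format B: two points (+lat_1/+lon_1, +lat_2/+lon_2) on the central line
fmt_b = Proj("+proj=omerc +lat_0=40 +lat_1=35 +lon_1=-80 +lat_2=45 +lon_2=-65"
             " +k_0=0.9996 +x_0=0 +y_0=0")

print(fmt_a(-74.0, 40.5))
print(fmt_b(-74.0, 40.5))
```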

Figure 114: Oblique Mercator Projection

Source: Snyder and Voxland, 1989


Orthographic The Orthographic projection is geometrically based on a plane tangent to the Earth, and the point of projection is at infinity (Figure 115 on page 381). The Earth appears as it would from outer space. Light rays that cast the projection are parallel and intersect the tangent plane at right angles. This projection is a truly graphic representation of the Earth, and is a projection in which distortion becomes a visual aid. It is the most familiar of the azimuthal map projections. Directions from the center of the projection are true.

This projection is limited to one hemisphere and compresses areas toward the periphery. In the polar aspect, latitude ring intervals decrease from the center outwards at a much greater rate than with Lambert Azimuthal. In the equatorial aspect, the central meridian and parallels are straight, with spaces closing up toward the outer edge.

Construction Plane

Property Compromise

Meridians

Polar aspect: the meridians are straight lines radiating from the point of tangency.

Oblique aspect: the meridians are ellipses, concave toward the center of the projection.

Equatorial aspect: the meridians are ellipses concave toward the straight central meridian.

Parallels

Polar aspect: the parallels are concentric circles.

Oblique aspect: the parallels are ellipses concave toward the poles.

Equatorial aspect: the parallels are straight and parallel.

Graticule spacing

Polar aspect: meridian spacing is equal and increases, and the parallel spacing decreases from the point of tangency.

Oblique and equatorial aspects: the graticule spacing decreases away from the center of the projection.

Linear scale Scale is true on the parallels in the polar aspect and on all circles centered at the pole of the projection in all aspects. Scale decreases along lines radiating from the center of the projection.

Uses USGS uses the Orthographic map projection in the National Atlas.


The Orthographic projection seldom appears in atlases. Its utility is more pictorial than technical. Orthographic has been used as a basis for maps by Rand McNally and the USGS.

Prompts The following prompts display in the Projection Chooser if Orthographic is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Define the center of the map projection in both spherical and rectangular coordinates.

Longitude of center of projection

Latitude of center of projection

Enter values for the longitude and latitude of the desired center of the projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Three views of the Orthographic projection are shown in Figure 115 on page 381: A) Polar aspect; B) Equatorial aspect; C) Oblique aspect, centered at 40°N and showing the classic globe-like view.


Figure 115: Orthographic Projection



Plate Carrée The parameters for the Plate Carrée projection are identical to those of the Equirectangular projection.

For more information, see Equirectangular (Plate Carrée) on page 336.

Prompts The following prompts display in the Projection Chooser if Plate Carrée is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian

Latitude of true scale

Enter a value for longitude of the desired central meridian to center the projection and the latitude of true scale.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.
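In PROJ, Plate Carrée and the Equirectangular projection are both served by eqc, with the latitude of true scale supplied as +lat_ts. A minimal sketch, assuming the pyproj/PROJ library; the values are illustrative:

```python
# Minimal sketch, assuming pyproj/PROJ: Plate Carree / Equirectangular is
# PROJ's "eqc"; the latitude of true scale is +lat_ts. Values illustrative.
from pyproj import Proj

eqc = Proj("+proj=eqc +lat_ts=0 +lon_0=0 +x_0=0 +y_0=0 +R=6371000")
x, y = eqc(30.0, 10.0)   # forward projection of (lon, lat)
```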

Figure 116: Plate Carrée Projection

Source: Snyder and Voxland, 1989


Polar Stereographic

The Polar Stereographic may be used to accommodate all regions not included in the UTM coordinate system, that is, regions north of 84°N and south of 80°S. This form is called Universal Polar Stereographic (UPS). The projection is equivalent to the polar aspect of the Stereographic projection on a spheroid. The central point is either the North Pole or the South Pole. Of all the polar aspect planar projections, this is the only one that is conformal.

The point of tangency is a single point—either the North Pole or the South Pole. If the plane is secant instead of tangent, the point of global contact is a line of latitude (Environmental Systems Research Institute, 1992).

Polar Stereographic is an azimuthal projection obtained by projecting from the opposite pole (Figure 117 on page 385). All of either the northern or the southern hemisphere can be shown, but not both. This projection produces a circular map with one of the poles at the center. Polar Stereographic stretches areas toward the periphery, and scale increases for areas farther from the central pole. Meridians are straight and radiating; parallels are concentric circles. Even though scale and area are not constant with Polar Stereographic, this projection, like all stereographic projections, possesses the property of conformality.

The Astrogeology Center of the Geological Survey at Flagstaff, Arizona, has been using the Polar Stereographic projection for the mapping of polar areas of every planet and satellite for which there is sufficient information.

Construction Plane

Property Conformal

Meridians Meridians are straight.

Parallels Parallels are concentric circles.

Graticule spacing

The distance between parallels increases with distance from the central pole.

Linear scale The scale increases with distance from the center. If a standard parallel is chosen rather than one of the poles, this latitude represents the true scale, and the scale nearer the pole is reduced.

Uses Polar regions (conformal). In the Universal Polar Stereographic (UPS) system, the scale factor at the pole is made 0.994, thus making the standard parallel (latitude of true scale) approximately 81°07’N or S.


Prompts The following prompts display in the Projection Chooser if Polar Stereographic is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Define the origin of the map projection in both spherical and rectangular coordinates. Ellipsoid projections of the polar regions normally use the International 1909 spheroid (Environmental Systems Research Institute, 1992).

Longitude below pole of map

Enter a value for longitude directed straight down below the pole for a north polar aspect, or straight up from the pole for a south polar aspect. This is equivalent to centering the map with a desired meridian.

Latitude of true scale

Enter a value for latitude at which true scale is desired. For secant projections, specify the latitude of true scale as any line of latitude other than 90°N or S. For tangential projections, specify the latitude of true scale as the North Pole, 90 00 00, or the South Pole, -90 00 00 (Environmental Systems Research Institute, 1992).

False easting

False northing

Enter values of false easting and false northing corresponding to the pole. These values must be in meters. It is often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

This projection is conformal and is regarded as the most suitable projection for polar regions.
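As an illustration of the projection geometry, the following Python sketch implements the spherical north polar aspect with a standard parallel and false easting/northing. It follows the standard spherical formulas (after Snyder) rather than the ellipsoidal formulation used for UPS; the radius and parameter names are illustrative assumptions.

```python
import math

R = 6371000.0  # assumed spherical radius, meters

def polar_stereographic_north(lon_deg, lat_deg, lon0_deg,
                              lat_ts_deg=90.0, fe=0.0, fn=0.0):
    # k0 gives true scale at the chosen latitude lat_ts (90 = tangent at pole)
    k0 = (1.0 + math.sin(math.radians(lat_ts_deg))) / 2.0
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    lon0 = math.radians(lon0_deg)
    rho = 2.0 * R * k0 * math.tan(math.pi / 4.0 - lat / 2.0)
    x = rho * math.sin(lon - lon0) + fe
    y = -rho * math.cos(lon - lon0) + fn
    return x, y

# Example: a UPS-style setup with latitude of true scale near 81.1 deg N.
print(polar_stereographic_north(45.0, 85.0, 0.0, lat_ts_deg=81.1,
                                fe=2_000_000.0, fn=2_000_000.0))
```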


Figure 117: Polar Stereographic Projection and its Geometric Construction



Polyconic

Polyconic was developed in 1820 by Ferdinand Hassler specifically for mapping the eastern coast of the US (Figure 118 on page 387). Polyconic projections are made up of an infinite number of conic projections tangent to an infinite number of parallels. These conic projections are placed in relation to a central meridian. Polyconic projections compromise properties such as equal-area and conformality, although the central meridian is held true to scale.

This projection is used mostly for north-south oriented maps. Distortion increases greatly the farther east and west an area is from the central meridian.
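For reference, the following Python sketch gives the standard spherical Polyconic forward equations (after Snyder); the radius and parameter names are illustrative assumptions, not the ellipsoidal formulation used for USGS quadrangles.

```python
import math

R = 6371000.0

def polyconic(lon_deg, lat_deg, lon0_deg, lat0_deg):
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    lon0, lat0 = math.radians(lon0_deg), math.radians(lat0_deg)
    if abs(lat) < 1e-12:                 # the Equator is a special case
        return R * (lon - lon0), -R * lat0
    e = (lon - lon0) * math.sin(lat)
    x = R * math.sin(e) / math.tan(lat)
    y = R * (lat - lat0 + (1.0 - math.cos(e)) / math.tan(lat))
    return x, y

# Scale is true along the central meridian: there x = 0 and y = R * (lat - lat0).
print(polyconic(-100.0, 40.0, -100.0, 0.0))
```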

Prompts

The following prompts display in the Projection Chooser if Polyconic is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

Construction Cone

Property Compromise

Meridians The central meridian is a straight line, but all other meridians are complex curves.

Parallels Parallels (except the Equator) are nonconcentric circular arcs. The Equator is a straight line.

Graticule spacing

All parallels are arcs of circles, but not concentric. All meridians, except the central meridian, are concave toward the central meridian. Parallels cross the central meridian at equal intervals but get farther apart at the east and west peripheries.

Linear scale

The scale along each parallel and along the central meridian of the projection is accurate. It increases along the meridians as the distance from the central meridian increases (Environmental Systems Research Institute, 1992).

Uses

Used for 7.5-minute and 15-minute topographic USGS quad sheets, from 1886 to about 1957 (Environmental Systems Research Institute, 1992). Used almost exclusively in slightly modified form for large-scale mapping in the United States until the 1950s.


The list of available spheroids is located in Table 44 on page 243.

Define the origin of the map projection in both spherical and rectangular coordinates.

Longitude of central meridian

Latitude of origin of projection

Enter values for longitude of the desired central meridian and latitude of the origin of projection.

False easting at central meridian

False northing at origin

Enter values of false easting and false northing corresponding to the intersection of the central meridian and the latitude of the origin of projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 118: Polyconic Projection of North America

In Figure 118, the central meridian is 100°W. This projection is used by the USGS for topographic quadrangle maps.


Quartic Authalic

Outer meridians at high latitudes have great distortion. If the Quartic Authalic projection is interrupted, distortion can be reduced.

Source: Snyder and Voxland, 1989

Prompts

The following prompts display in the Projection Chooser once Quartic Authalic is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian

Enter the value of the longitude of the central meridian.

False easting

False northing

Construction Pseudocylindrical

Property Equal-area

Meridians The central meridian is a straight line 0.45 as long as the Equator. The other meridians are equally spaced curves that fit a “fourth-order (quartic) equation” and are concave toward the central meridian (Snyder and Voxland, 1989).

Parallels Parallels are unequally spaced straight lines, spaced farthest apart near the Equator. Parallel spacing changes slowly, and parallels are perpendicular to the central meridian.

Graticule spacing See Meridians and Parallels. Poles are points. Symmetry exists about the central meridian or the Equator.

Linear scale Scale is accurate along the Equator. Scale is constant along each latitude, and is the same for the latitude of opposite sign.

Uses The McBryde-Thomas Flat-Polar Quartic projection uses Quartic Authalic as its base (Snyder and Voxland, 1989). Used for world maps.


Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.
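The spherical Quartic Authalic forward equations are simple enough to state directly. The following Python sketch (after Snyder and Voxland, 1989) also checks the 0.45 ratio quoted above; the radius and names are illustrative assumptions.

```python
import math

R = 6371000.0

def quartic_authalic(lon_deg, lat_deg, lon0_deg=0.0):
    lam = math.radians(lon_deg - lon0_deg)
    phi = math.radians(lat_deg)
    x = R * lam * math.cos(phi) / math.cos(phi / 2.0)
    y = 2.0 * R * math.sin(phi / 2.0)
    return x, y

# The central meridian runs 4*R*sin(45 deg) from pole to pole, while the
# Equator is 2*pi*R long -- a ratio of about 0.45, as stated above.
print(4.0 * math.sin(math.radians(45.0)) / (2.0 * math.pi))  # ~0.450
```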

Figure 119: Quartic Authalic Projection

Source: Snyder and Voxland, 1989


Robinson

According to ESRI, the Robinson “central meridian is a straight line 0.51 times the length of the Equator. Parallels are equally spaced straight lines between 38° N and S; spacing decreases beyond these limits. The poles are 0.53 times the length of the Equator. The projection is based upon tabular coordinates instead of mathematical formulas” (Environmental Systems Research Institute, 1997).

This projection has been used both by Rand McNally and the National Geographic Society.

Source: Environmental Systems Research Institute, 1997

Prompts

The following prompts display in the Projection Chooser once Robinson is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian

Enter the value of the longitude of the central meridian.

False easting

False northing

Construction Pseudocylinder

Property Neither conformal nor equal-area

Meridians Meridians are equally spaced, concave toward the central meridian, and resemble elliptical arcs (Environmental Systems Research Institute, 1997).

Parallels Parallels are equally spaced straight lines between 38° N and S.

Graticule spacing The central meridian and all parallels are straight lines.

Linear scale Scale is true along latitudes 38° N and S. Scale is constant along any specific latitude, and for the latitude of opposite sign.

Uses Useful for thematic and common world maps.


Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 120: Robinson Projection

Source: Snyder and Voxland, 1989


RSO

The acronym RSO stands for Rectified Skewed Orthomorphic. This projection is used to map areas of Brunei and Malaysia, and is each country’s national projection.

Source: Environmental Systems Research Institute, 1997

Prompts

The following prompts display in the Projection Chooser once RSO is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

RSO Type

Select the RSO Type: either Borneo or Malaysia.

Construction Cylinder

Property Conformal

Meridians Two meridians are 180 degrees apart.

Parallels N/A

Graticule spacing Graticules are two meridians 180 degrees apart.

Linear scale “A line of true scale is drawn at an angle to the central meridian” (Environmental Systems Research Institute, 1997).

Uses This projection should be used to map areas of Brunei and Malaysia.


Sinusoidal

Sometimes called the Sanson-Flamsteed, Sinusoidal is a projection with some characteristics of a cylindrical projection—often called a pseudocylindrical type. The central meridian is the only straight meridian—all others become sinusoidal curves. All parallels are straight and the correct length. Parallels are also the correct distance from the Equator, which, for a complete world map, is twice as long as the central meridian.

Sinusoidal maps achieve the property of equal-area but not conformality. The Equator and central meridian are distortion free, but distortion becomes pronounced near outer meridians, especially in polar regions. Interrupting a Sinusoidal world or hemisphere map can lessen distortion. The interrupted Sinusoidal contains less distortion because each interrupted area can be constructed to contain a separate central meridian. Central meridians may be different for the northern and southern hemispheres and may be selected to minimize distortion of continents or oceans. Sinusoidal is particularly suited for less than world areas, especially those bordering the Equator, such as South America or Africa. Sinusoidal is also used by the USGS as a base map for showing prospective hydrocarbon provinces and sedimentary basins of the world.
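The Sinusoidal equations are among the simplest of any projection. The following Python sketch gives the spherical forward equations; the radius and parameter names are illustrative assumptions.

```python
import math

# x = R*(lon - lon0)*cos(lat), y = R*lat. The equal-area property follows
# directly: the width of each strip of latitude is scaled by cos(lat).
R = 6371000.0

def sinusoidal(lon_deg, lat_deg, lon0_deg=0.0):
    x = R * math.radians(lon_deg - lon0_deg) * math.cos(math.radians(lat_deg))
    y = R * math.radians(lat_deg)
    return x, y

# Parallels are straight and true to scale; an interrupted map simply uses a
# separate lon0 for each lobe, as described above.
print(sinusoidal(-60.0, -15.0, lon0_deg=-60.0))  # a point on a lobe's CM
```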

Construction Pseudocylinder

Property Equal-area

Meridians Meridians are sinusoidal curves, concave toward a straight central meridian.

Parallels All parallels are straight, parallel lines.

Graticule spacing

Meridian spacing is equal and decreases toward the poles. Parallel spacing is equal. The graticule spacing retains the property of equivalence of area.

Linear scale Linear scale is true on the parallels and the central meridian.

Uses

Used as an equal-area projection to portray areas that have a maximum extent in a north-south direction. Used as a world equal-area projection in atlases to show distribution patterns. Used by the USGS as the base for maps showing prospective hydrocarbon provinces of the world, and sedimentary basins of the world.


Prompts

The following prompts display in the Projection Chooser if Sinusoidal is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian

Enter a value for the longitude of the desired central meridian to center the projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 121: Sinusoidal Projection

Source: Snyder and Voxland, 1989


Space Oblique Mercator

The Space Oblique Mercator (SOM) projection is nearly conformal and has little scale distortion within the sensing range of an orbiting mapping satellite such as Landsat. It is the first projection to incorporate the Earth’s rotation with respect to the orbiting satellite.

The method of projection used is the modified cylindrical, for which the central line is curved and defined by the groundtrack of the orbit of the satellite. The line of tangency is conceptual and there are no graticules.

The SOM projection is defined by USGS. According to USGS, the X axis passes through the descending node for each daytime scene. The Y axis is perpendicular to the X axis, to form a Cartesian coordinate system. The direction of the X axis in a daytime Landsat scene is in the direction of the satellite motion—south. The Y axis is directed east. For SOM projections used by EOSAT, the axes are switched; the X axis is directed east and the Y axis is directed south.

The SOM projection is specifically designed to minimize distortion within sensing range of a mapping satellite as it orbits the Earth. It can be used for the rectification of, and continuous mapping from, satellite imagery. It is the standard format for data from Landsats 4 and 5. Plots for adjacent paths do not match without transformation (Environmental Systems Research Institute, 1991).

Construction Cylinder

Property Conformal

Meridians All meridians are curved lines except for the meridian crossed by the groundtrack at each polar approach.

Parallels All parallels are curved lines.

Graticule spacing

There are no graticules.

Linear scale Scale is true along the groundtrack, and varies approximately 0.01% within sensing range (Environmental Systems Research Institute, 1992).

Uses Used for georectification of, and continuous mapping from, satellite imagery. Standard format for data from Landsats 4 and 5 (Environmental Systems Research Institute, 1992).


Prompts

The following prompts display in the Projection Chooser if Space Oblique Mercator is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Landsat vehicle ID (1-5)

Specify whether the data are from Landsat 1, 2, 3, 4, or 5.

Orbital path number (1-251 or 1-233)

For Landsats 1, 2, and 3, the path range is from 1 to 251. For Landsats 4 and 5, the path range is from 1 to 233.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 122: Space Oblique Mercator Projection

Source: Snyder and Voxland, 1989


Space Oblique Mercator (Formats A & B)

The Space Oblique Mercator (Formats A & B) projection is similar to the Space Oblique Mercator projection.

For more information, see Space Oblique Mercator on page 395.

Prompts

The following prompts display in the Projection Chooser once Space Oblique Mercator (Formats A & B) is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Format A (Generic Satellite)

Inclination of orbit at ascending node

Period of satellite revolution in minutes

Longitude of ascending orbit at equator

Landsat path flag

If you select Format A of the Space Oblique Mercator projection, you need to supply the information listed above.

Format B (Landsat)

Landsat vehicle ID (1-5)

Specify whether the data are from Landsat 1, 2, 3, 4, or 5.

Path number (1-251 or 1-233)

For Landsats 1, 2, and 3, the path range is from 1 to 251. For Landsats 4 and 5, the path range is from 1 to 233.


State Plane

The State Plane is an X,Y coordinate system (not a map projection); its zones divide the US into over 130 sections, each with its own projection surface and grid network (Figure 123 on page 399). With the exception of very narrow states, such as Delaware, New Jersey, and New Hampshire, most states are divided into multiple (2 to 10) zones. The Lambert Conformal projection is used for zones extending mostly in an east-west direction. The Transverse Mercator projection is used for zones extending mostly in a north-south direction. Alaska, Florida, and New York use either Transverse Mercator or Lambert Conformal for different areas. The southeastern panhandle of Alaska is mapped on the Oblique Mercator projection.

Zone boundaries follow state and county lines, and, because each zone is small, distortion is less than one in 10,000. Each zone has a centrally located origin and a central meridian that passes through this origin. Two zone numbering systems are currently in use—the USGS code system and the National Ocean Service (NOS) code system (Table 46, NAD27 State Plane Coordinate System for the United States, on page 399 and Table 47, NAD83 State Plane Coordinate System for the United States, on page 404), but other numbering systems exist.

Prompts

The following prompts appear in the Projection Chooser if State Plane is selected. Respond to the prompts as described.

State Plane Zone

Enter either the USGS zone code number as a positive value, or the NOS zone code number as a negative value.
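The following Python sketch illustrates this sign convention; the helper function is hypothetical and not part of ERDAS IMAGINE.

```python
# Positive values are read as USGS codes, negative values as NOS codes,
# matching the convention described above for the State Plane Zone prompt.
def interpret_zone_code(code):
    if code == 0:
        raise ValueError("zone code must be nonzero")
    return ("USGS", code) if code > 0 else ("NOS", -code)

print(interpret_zone_code(3101))  # ('USGS', 3101) -- Alabama East in Table 46
print(interpret_zone_code(-101))  # ('NOS', 101)   -- Alabama East in Table 46
```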

NAD27 or NAD83 or HARN

Either North America Datum 1927 (NAD27), North America Datum 1983 (NAD83), or High Accuracy Reference Network (HARN) may be used to perform the State Plane calculations.

• NAD27 is based on the Clarke 1866 spheroid.

• NAD83 and HARN are based on the GRS 1980 spheroid. Some NAD27 zone numbers have been changed or deleted in NAD83.

Tables for both NAD27 and NAD83 zone numbers follow (Table 46, NAD27 State Plane Coordinate System for the United States, on page 399 and Table 47, NAD83 State Plane Coordinate System for the United States, on page 404). These tables include both USGS and NOS code systems.


Figure 123: Zones of the State Plane Coordinate System

The following abbreviations are used in Table 46, NAD27 State Plane Coordinate System for the United States, on page 399 and Table 47, NAD83 State Plane Coordinate System for the United States, on page 404:

Tr Merc = Transverse Mercator
Lambert = Lambert Conformal Conic
Oblique = Oblique Mercator (Hotine)
Polycon = Polyconic

Table 46: NAD27 State Plane Coordinate System for the United States

State  Zone Name  Type  USGS Code  NOS Code

Alabama East Tr Merc 3101 -101

West Tr Merc 3126 -102

Alaska 1 Oblique 6101 -5001

2 Tr Merc 6126 -5002

3 Tr Merc 6151 -5003

4 Tr Merc 6176 -5004

5 Tr Merc 6201 -5005

6 Tr Merc 6226 -5006

7 Tr Merc 6251 -5007

8 Tr Merc 6276 -5008

9 Tr Merc 6301 -5009

10 Lambert 6326 -5010

American Samoa ------- Lambert ------ -5302


Arizona East Tr Merc 3151 -201

Central Tr Merc 3176 -202

West Tr Merc 3201 -203

Arkansas North Lambert 3226 -301

South Lambert 3251 -302

California I Lambert 3276 -401

II Lambert 3301 -402

III Lambert 3326 -403

IV Lambert 3351 -404

V Lambert 3376 -405

VI Lambert 3401 -406

VII Lambert 3426 -407

Colorado North Lambert 3451 -501

Central Lambert 3476 -502

South Lambert 3501 -503

Connecticut -------- Lambert 3526 -600

Delaware -------- Tr Merc 3551 -700

District of Columbia Use Maryland or Virginia North

Florida East Tr Merc 3601 -901

West Tr Merc 3626 -902

North Lambert 3576 -903

Georgia East Tr Merc 3651 -1001

West Tr Merc 3676 -1002

Guam ------- Polycon ------- -5400

Hawaii 1 Tr Merc 5876 -5101

2 Tr Merc 5901 -5102

3 Tr Merc 5926 -5103

4 Tr Merc 5951 -5104

5 Tr Merc 5976 -5105

Idaho East Tr Merc 3701 -1101

Central Tr Merc 3726 -1102

West Tr Merc 3751 -1103


Illinois East Tr Merc 3776 -1201

West Tr Merc 3801 -1202

Indiana East Tr Merc 3826 -1301

West Tr Merc 3851 -1302

Iowa North Lambert 3876 -1401

South Lambert 3901 -1402

Kansas North Lambert 3926 -1501

South Lambert 3951 -1502

Kentucky North Lambert 3976 -1601

South Lambert 4001 -1602

Louisiana North Lambert 4026 -1701

South Lambert 4051 -1702

Offshore Lambert 6426 -1703

Maine East Tr Merc 4076 -1801

West Tr Merc 4101 -1802

Maryland ------- Lambert 4126 -1900

Massachusetts Mainland Lambert 4151 -2001

Island Lambert 4176 -2002

Michigan (Tr Merc) East Tr Merc 4201 -2101

Central Tr Merc 4226 -2102

West Tr Merc 4251 -2103

Michigan (Lambert) North Lambert 6351 -2111

Central Lambert 6376 -2112

South Lambert 6401 -2113

Minnesota North Lambert 4276 -2201

Central Lambert 4301 -2202

South Lambert 4326 -2203

Mississippi East Tr Merc 4351 -2301

West Tr Merc 4376 -2302

Missouri East Tr Merc 4401 -2401

Central Tr Merc 4426 -2402

West Tr Merc 4451 -2403


Montana North Lambert 4476 -2501

Central Lambert 4501 -2502

South Lambert 4526 -2503

Nebraska North Lambert 4551 -2601

South Lambert 4576 -2602

Nevada East Tr Merc 4601 -2701

Central Tr Merc 4626 -2702

West Tr Merc 4651 -2703

New Hampshire --------- Tr Merc 4676 -2800

New Jersey --------- Tr Merc 4701 -2900

New Mexico East Tr Merc 4726 -3001

Central Tr Merc 4751 -3002

West Tr Merc 4776 -3003

New York East Tr Merc 4801 -3101

Central Tr Merc 4826 -3102

West Tr Merc 4851 -3103

Long Island Lambert 4876 -3104

North Carolina -------- Lambert 4901 -3200

North Dakota North Lambert 4926 -3301

South Lambert 4951 -3302

Ohio North Lambert 4976 -3401

South Lambert 5001 -3402

Oklahoma North Lambert 5026 -3501

South Lambert 5051 -3502

Oregon North Lambert 5076 -3601

South Lambert 5101 -3602

Pennsylvania North Lambert 5126 -3701

South Lambert 5151 -3702

Puerto Rico -------- Lambert 6001 -5201

Rhode Island -------- Tr Merc 5176 -3800

South Carolina North Lambert 5201 -3901

South Lambert 5226 -3902


South Dakota North Lambert 5251 -4001

South Lambert 5276 -4002

St. Croix --------- Lambert 6051 -5202

Tennessee --------- Lambert 5301 -4100

Texas North Lambert 5326 -4201

North Central Lambert 5351 -4202

Central Lambert 5376 -4203

South Central Lambert 5401 -4204

South Lambert 5426 -4205

Utah North Lambert 5451 -4301

Central Lambert 5476 -4302

South Lambert 5501 -4303

Vermont -------- Tr Merc 5526 -4400

Virginia North Lambert 5551 -4501

South Lambert 5576 -4502

Virgin Islands -------- Lambert 6026 -5201

Washington North Lambert 5601 -4601

South Lambert 5626 -4602

West Virginia North Lambert 5651 -4701

South Lambert 5676 -4702

Wisconsin North Lambert 5701 -4801

Central Lambert 5726 -4802

South Lambert 5751 -4803

Wyoming East Tr Merc 5776 -4901

East Central Tr Merc 5801 -4902

West Central Tr Merc 5826 -4903

West Tr Merc 5851 -4904


Table 47: NAD83 State Plane Coordinate System for the United States

State  Zone Name  Type  USGS Code  NOS Code

Alabama East Tr Merc 3101 -101

West Tr Merc 3126 -102

Alaska 1 Oblique 6101 -5001

2 Tr Merc 6126 -5002

3 Tr Merc 6151 -5003

4 Tr Merc 6176 -5004

5 Tr Merc 6201 -5005

6 Tr Merc 6226 -5006

7 Tr Merc 6251 -5007

8 Tr Merc 6276 -5008

9 Tr Merc 6301 -5009

10 Lambert 6326 -5010

Arizona East Tr Merc 3151 -201

Central Tr Merc 3176 -202

West Tr Merc 3201 -203

Arkansas North Lambert 3226 -301

South Lambert 3251 -302

California I Lambert 3276 -401

II Lambert 3301 -402

III Lambert 3326 -403

IV Lambert 3351 -404

V Lambert 3376 -405

VI Lambert 3401 -406

Colorado North Lambert 3451 -501

Central Lambert 3476 -502

South Lambert 3501 -503

Connecticut -------- Lambert 3526 -600

Delaware -------- Tr Merc 3551 -700

District of Columbia Use Maryland or Virginia North

Florida East Tr Merc 3601 -901

West Tr Merc 3626 -902

North Lambert 3576 -903


Georgia East Tr Merc 3651 -1001

West Tr Merc 3676 -1002

Hawaii 1 Tr Merc 5876 -5101

2 Tr Merc 5901 -5102

3 Tr Merc 5926 -5103

4 Tr Merc 5951 -5104

5 Tr Merc 5976 -5105

Idaho East Tr Merc 3701 -1101

Central Tr Merc 3726 -1102

West Tr Merc 3751 -1103

Illinois East Tr Merc 3776 -1201

West Tr Merc 3801 -1202

Indiana East Tr Merc 3826 -1301

West Tr Merc 3851 -1302

Iowa North Lambert 3876 -1401

South Lambert 3901 -1402

Kansas North Lambert 3926 -1501

South Lambert 3951 -1502

Kentucky North Lambert 3976 -1601

South Lambert 4001 -1602

Louisiana North Lambert 4026 -1701

South Lambert 4051 -1702

Offshore Lambert 6426 -1703

Maine East Tr Merc 4076 -1801

West Tr Merc 4101 -1802

Maryland ------- Lambert 4126 -1900

Massachusetts Mainland Lambert 4151 -2001

Island Lambert 4176 -2002

Michigan North Lambert 6351 -2111

Central Lambert 6376 -2112

South Lambert 6401 -2113


Minnesota North Lambert 4276 -2201

Central Lambert 4301 -2202

South Lambert 4326 -2203

Mississippi East Tr Merc 4351 -2301

West Tr Merc 4376 -2302

Missouri East Tr Merc 4401 -2401

Central Tr Merc 4426 -2402

West Tr Merc 4451 -2403

Montana --------- Lambert 4476 -2500

Nebraska --------- Lambert 4551 -2600

Nevada East Tr Merc 4601 -2701

Central Tr Merc 4626 -2702

West Tr Merc 4651 -2703

New Hampshire --------- Tr Merc 4676 -2800

New Jersey --------- Tr Merc 4701 -2900

New Mexico East Tr Merc 4726 -3001

Central Tr Merc 4751 -3002

West Tr Merc 4776 -3003

New York East Tr Merc 4801 -3101

Central Tr Merc 4826 -3102

West Tr Merc 4851 -3103

Long Island Lambert 4876 -3104

North Carolina --------- Lambert 4901 -3200

North Dakota North Lambert 4926 -3301

South Lambert 4951 -3302

Ohio North Lambert 4976 -3401

South Lambert 5001 -3402

Oklahoma North Lambert 5026 -3501

South Lambert 5051 -3502

Oregon North Lambert 5076 -3601

South Lambert 5101 -3602


Pennsylvania North Lambert 5126 -3701

South Lambert 5151 -3702

Puerto Rico --------- Lambert 6001 -5201

Rhode Island --------- Tr Merc 5176 -3800

South Carolina --------- Lambert 5201 -3900

South Dakota North Lambert 5251 -4001

South Lambert 5276 -4002

Tennessee --------- Lambert 5301 -4100

Texas North Lambert 5326 -4201

North Central Lambert 5351 -4202

Central Lambert 5376 -4203

South Central Lambert 5401 -4204

South Lambert 5426 -4205

Utah North Lambert 5451 -4301

Central Lambert 5476 -4302

South Lambert 5501 -4303

Vermont --------- Tr Merc 5526 -4400

Virginia North Lambert 5551 -4501

South Lambert 5576 -4502

Virgin Islands --------- Lambert 6026 -5201

Washington North Lambert 5601 -4601

South Lambert 5626 -4602

West Virginia North Lambert 5651 -4701

South Lambert 5676 -4702

Wisconsin North Lambert 5701 -4801

Central Lambert 5726 -4802

South Lambert 5751 -4803

Wyoming East Tr Merc 5776 -4901

East Central Tr Merc 5801 -4902

West Central Tr Merc 5826 -4903

West Tr Merc 5851 -4904


Stereographic

Stereographic is a perspective projection in which points are projected from a position on the opposite side of the globe onto a plane tangent to the Earth (Figure 124 on page 410). All of one hemisphere can easily be shown, but it is impossible to show both hemispheres in their entirety from one center. It is the only azimuthal projection that preserves truth of angles and local shape. Scale increases and parallels become more widely spaced farther from the center.

In the equatorial aspect, all parallels except the Equator are circular arcs. In the polar aspect, latitude rings are spaced farther apart with increasing distance from the pole.
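For reference, the following Python sketch gives the standard spherical oblique Stereographic forward equations (after Snyder); the radius and parameter names are illustrative assumptions, not the formulation used by ERDAS IMAGINE.

```python
import math

# Projects from the antipode onto a plane tangent at (lat1, lon0).
R = 6371000.0

def stereographic(lon_deg, lat_deg, lon0_deg, lat1_deg):
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    lon0, lat1 = math.radians(lon0_deg), math.radians(lat1_deg)
    cos_c = (math.sin(lat1) * math.sin(lat)
             + math.cos(lat1) * math.cos(lat) * math.cos(lon - lon0))
    k = 2.0 / (1.0 + cos_c)           # local scale; grows toward the periphery
    x = R * k * math.cos(lat) * math.sin(lon - lon0)
    y = R * k * (math.cos(lat1) * math.sin(lat)
                 - math.sin(lat1) * math.cos(lat) * math.cos(lon - lon0))
    return x, y

# Oblique aspect centered on 40 deg N, as in Figure 124 (B):
print(stereographic(-30.0, 50.0, 0.0, 40.0))
```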

Construction Plane

Property Conformal

Meridians

Polar aspect: the meridians are straight lines radiating from the point of tangency.

Oblique and equatorial aspects: the meridians are arcs of circles concave toward a straight central meridian. In the equatorial aspect, the outer meridian of the hemisphere is a circle centered at the projection center.

Parallels

Polar aspect: the parallels are concentric circles.

Oblique aspect: the parallels are nonconcentric arcs of circles concave toward one of the poles with one parallel being a straight line.

Equatorial aspect: parallels are nonconcentric arcs of circles concave toward the poles; the Equator is straight.

Graticule spacing

The graticule spacing increases away from the center of the projection in all aspects and it retains the property of conformality.

Linear scale Scale increases toward the periphery of the projection.

Uses

The Stereographic projection is the most widely used azimuthal projection, mainly used for portraying large, continent-sized areas of similar extent in all directions. It is used in geophysics for solving problems in spherical geometry. The polar aspect is used for topographic maps and navigational charts. The American Geographical Society uses this projection as the basis for its “Map of the Arctic.” The USGS uses it as the basis for maps of Antarctica.


Prompts

The following prompts display in the Projection Chooser if Stereographic is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Define the center of the map projection in both spherical and rectangular coordinates.

Longitude of center of projection

Latitude of center of projection

Enter values for the longitude and latitude of the desired center of the projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

The Stereographic is the only azimuthal projection which is conformal. Figure 124 on page 410 shows two views: A) Equatorial aspect, often used in the 16th and 17th centuries for maps of hemispheres; and B) Oblique aspect, centered on 40°N.


Figure 124: Stereographic Projection



Stereographic (Extended)

The Stereographic (Extended) projection has the same attributes as the Stereographic projection, except that it also lets you define a scale factor.

For details about the Stereographic projection, see Stereographic on page 408.

PromptsThe following prompts display in the Projection Chooser once Stereographic (Extended) is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Scale factor

Designate the desired scale factor. This parameter is used to modify scale distortion. A value of one indicates true scale only along the central meridian. It may be desirable to have true scale along two lines equidistant from and parallel to the central meridian, or to lessen scale distortion away from the central meridian. A factor of less than, but close to, one is often used.

Longitude of origin of projection

Latitude of origin of projection

Enter the values for longitude of origin of projection and latitude of origin of projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.


Transverse Mercator

Transverse Mercator is similar to the Mercator projection except that the axis of the projection cylinder is rotated 90° from the vertical (polar) axis. The contact line is then a chosen meridian instead of the Equator, and this central meridian runs from pole to pole. It loses the properties of straight meridians and straight parallels of the standard Mercator projection (except for the central meridian, the two meridians 90° away, and the Equator).

Transverse Mercator also loses the straight rhumb lines of the Mercator map, but it is a conformal projection. Scale is true along the central meridian or along two straight lines equidistant from, and parallel to, the central meridian. It cannot be edge-joined in an east-west direction if each sheet has its own central meridian.

In the United States, Transverse Mercator is the projection used in the State Plane coordinate system for states with predominant north-south extent. The entire Earth from 84°N to 80°S is mapped with a system of projections called the Universal Transverse Mercator.
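For reference, the following Python sketch gives the spherical Transverse Mercator forward equations (after Snyder). The ellipsoidal series actually used for mapping is considerably longer; this is illustrative only, and the radius and names are assumptions.

```python
import math

R = 6371000.0

def transverse_mercator(lon_deg, lat_deg, lon0_deg, lat0_deg=0.0, k0=1.0):
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    lon0, lat0 = math.radians(lon0_deg), math.radians(lat0_deg)
    b = math.cos(lat) * math.sin(lon - lon0)
    x = 0.5 * R * k0 * math.log((1.0 + b) / (1.0 - b))
    y = R * k0 * (math.atan2(math.tan(lat), math.cos(lon - lon0)) - lat0)
    return x, y

# k0 = 1 gives true scale only along the central meridian; a k0 slightly less
# than one (0.9996 in UTM) moves the two lines of true scale away from it.
print(transverse_mercator(-85.0, 34.0, -87.0, k0=0.9996))
```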

Construction Cylinder

Property Conformal

Meridians Meridians are complex curves concave toward a straight central meridian that is tangent to the globe. The straight central meridian intersects the Equator and one meridian at a 90° angle.

Parallels Parallels are complex curves concave toward the nearest pole; the Equator is straight.

Graticule spacing

Parallels are spaced at their true distances on the straight central meridian. Graticule spacing increases away from the tangent meridian. The graticule retains the property of conformality.

Linear scale Linear scale is true along the line of tangency, or along two lines equidistant from, and parallel to, the line of tangency.

Uses

Used where the north-south dimension is greater than the east-west dimension. Used as the base for the USGS 1:250,000-scale series, and for some of the 7.5-minute and 15-minute quadrangles of the National Topographic Map Series.


Prompts

The following prompts display in the Projection Chooser if Transverse Mercator is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Scale factor at central meridian

Designate the desired scale factor at the central meridian. This parameter is used to modify scale distortion. A value of one indicates true scale only along the central meridian. It may be desirable to have true scale along two lines equidistant from and parallel to the central meridian, or to lessen scale distortion away from the central meridian. A factor of less than, but close to, one is often used.

Finally, define the origin of the map projection in both spherical and rectangular coordinates.

Longitude of central meridian

Latitude of origin of projection

Enter values for longitude of the desired central meridian and latitude of the origin of projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the intersection of the central meridian and the latitude of the origin of projection. These values must be in meters. It is often convenient to make them large enough so that there are no negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.


Two Point Equidistant

The Two Point Equidistant projection is used to show the distance from “either of two chosen points to any other point on a map” (Environmental Systems Research Institute, 1997). Note that the first point has to be west of the second point. This projection has been used by the National Geographic Society to map areas of Asia.

Source: Environmental Systems Research Institute, 1997

Prompts

The following prompts display in the Projection Chooser once Two Point Equidistant is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

False easting

False northing

Construction Modified planar

Property Compromise

Meridians N/A

Parallels N/A

Graticule spacing

N/A

Linear scale N/A

Uses

The Two Point Equidistant projection “does not represent great circle paths” (Environmental Systems Research Institute, 1997). There is little distortion if two chosen points are within 45 degrees of each other.


Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Longitude of 1st point

Latitude of 1st point

Enter the longitude and latitude values of the first point.

Longitude of 2nd point

Latitude of 2nd point

Enter the longitude and latitude values of the second point.

Figure 125: Two Point Equidistant Projection

Source: Snyder and Voxland, 1989


UTM

Universal Transverse Mercator (UTM) is an international plane (rectangular) coordinate system developed by the US Army that extends around the world from 84°N to 80°S. The world is divided into 60 zones, each covering six degrees of longitude. Each zone extends three degrees eastward and three degrees westward from its central meridian. Zones are numbered consecutively west to east from the 180° meridian (Figure 126, Table 48 on page 417).

The Transverse Mercator projection is then applied to each UTM zone. Transverse Mercator is a transverse form of the Mercator cylindrical projection. The projection cylinder is rotated 90° from the vertical (polar) axis and can then be placed to intersect at a chosen central meridian. The UTM system specifies the central meridian of each zone. With a separate projection for each UTM zone, a high degree of accuracy is possible (one part in 1,000 maximum distortion within each zone). If the map to be projected extends beyond the border of the UTM zone, the entire map may be projected for any UTM zone you specify.

See Transverse Mercator on page 412 for more information.
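The zone layout in Table 48 on page 417 reduces to simple arithmetic. The following Python sketch derives the zone number and central meridian from a longitude; the function names are illustrative.

```python
# Zones are 6 degrees wide, numbered eastward from 180 degrees W, and each
# central meridian sits 3 degrees east of the zone's western edge.
def utm_zone(lon_deg):
    zone = int((lon_deg + 180.0) // 6.0) + 1
    return min(max(zone, 1), 60)      # clamp lon = 180 into zone 60

def central_meridian(zone):
    return 6.0 * zone - 183.0

print(utm_zone(-85.0), central_meridian(utm_zone(-85.0)))  # 16, -87.0 (87W)
print(utm_zone(3.0), central_meridian(utm_zone(3.0)))      # 31, 3.0 (3E)
```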

Prompts

The following prompts display in the Projection Chooser if UTM is chosen.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

UTM Zone

North or South


Figure 126: Zones of the Universal Transverse Mercator Grid in the United States

All values in Table 48 are in full degrees east (E) or west (W) of the Greenwich prime meridian (0).


Table 48: UTM Zones, Central Meridians, and Longitude Ranges

Zone  Central Meridian  Range      Zone  Central Meridian  Range

1 177W 180W-174W 31 3E 0-6E

2 171W 174W-168W 32 9E 6E-12E

3 165W 168W-162W 33 15E 12E-18E

4 159W 162W-156W 34 21E 18E-24E

5 153W 156W-150W 35 27E 24E-30E

6 147W 150W-144W 36 33E 30E-36E

7 141W 144W-138W 37 39E 36E-42E

8 135W 138W-132W 38 45E 42E-48E

9 129W 132W-126W 39 51E 48E-54E

10 123W 126W-120W 40 57E 54E-60E

11 117W 120W-114W 41 63E 60E-66E

12 111W 114W-108W 42 69E 66E-72E

13 105W 108W-102W 43 75E 72E-78E

14 99W 102W-96W 44 81E 78E-84E

15 93W 96W-90W 45 87E 84E-90E

16 87W 90W-84W 46 93E 90E-96E

17 81W 84W-78W 47 99E 96E-102E


18 75W 78W-72W 48 105E 102E-108E

19 69W 72W-66W 49 111E 108E-114E

20 63W 66W-60W 50 117E 114E-120E

21 57W 60W-54W 51 123E 120E-126E

22 51W 54W-48W 52 129E 126E-132E

23 45W 48W-42W 53 135E 132E-138E

24 39W 42W-36W 54 141E 138E-144E

25 33W 36W-30W 55 147E 144E-150E

26 27W 30W-24W 56 153E 150E-156E

27 21W 24W-18W 57 159E 156E-162E

28 15W 18W-12W 58 165E 162E-168E

29 9W 12W-6W 59 171E 168E-174E

30 3W 6W-0 60 177E 174E-180E


Van der Grinten I

The Van der Grinten I projection produces a map that is neither conformal nor equal-area (Figure 127 on page 420). It compromises all properties, and represents the Earth within a circle.

All lines are curved except the central meridian and the Equator. Parallels are spaced farther apart toward the poles. Meridian spacing is equal at the Equator. Scale is true along the Equator, but increases rapidly toward the poles, which are usually not represented. Van der Grinten I avoids the excessive stretching of the Mercator and the shape distortion of many of the equal-area projections. It has been used to show distribution of mineral resources on the ocean floor.

Prompts

The following prompts display in the Projection Chooser if Van der Grinten I is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian

Construction Miscellaneous

Property Compromise

Meridians Meridians are circular arcs concave toward a straight central meridian.

Parallels Parallels are circular arcs concave toward the poles, except for a straight Equator.

Graticule spacing

Meridian spacing is equal at the Equator. The parallels are spaced farther apart toward the poles. The central meridian and Equator are straight lines. The poles are commonly not represented. The graticule spacing results in a compromise of all properties.

Linear scale Linear scale is true along the Equator. Scale increases rapidly toward the poles.

Uses The Van der Grinten projection is used by the National Geographic Society for world maps. Used by the USGS to show distribution of mineral resources on the sea floor.


Enter a value for the longitude of the desired central meridian to center the projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 127: Van der Grinten I Projection

The Van der Grinten I projection resembles the Mercator, but it is not conformal.


Wagner IV

The Wagner IV projection has distortion primarily in the polar regions.

Source: Snyder and Voxland, 1989

Prompts

The following prompts display in the Projection Chooser if Wagner IV is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian

Enter a value for the longitude of the desired central meridian to center the projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Construction Pseudocylinder

Property Equal-area

Meridians The central meridian is a straight line one half as long as the Equator. The other meridians are portions of ellipses that are equally spaced. They are concave towards the central meridian. The meridians at 103° 55’ E and W of the central meridian are circular arcs.

Parallels Parallels are unequally spaced straight lines, spaced farthest apart at the Equator, and are perpendicular to the central meridian.

Graticule spacing See Meridians and Parallels. Poles are lines one half as long as the Equator. Symmetry exists around the central meridian or the Equator.

Linear scale Scale is accurate along latitudes 42° 59’ N and S. Scale is constant along any specific latitude as well as the latitude of opposite sign.

Uses Useful for world maps.


Figure 128: Wagner IV Projection

Source: Snyder and Voxland, 1989


Wagner VII

The Wagner VII projection is a modification of the Hammer projection. “The poles correspond to the 65th parallels on the Hammer [projection], and meridians are repositioned” (Snyder and Voxland, 1989).

Distortion is prevalent in polar areas.

Source: Snyder and Voxland, 1989

Prompts

The following prompts display in the Projection Chooser if Wagner VII is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Longitude of central meridian

Enter a value for the longitude of the desired central meridian to center the projection.

False easting

False northing

Construction Modified azimuthal

Property Equal-area

Meridians Central meridian is straight and half the Equator’s length. Other meridians are unequally spaced curves. They are concave toward the central meridian.

Parallels The Equator is straight; the other parallels are unequally spaced curves, concave toward the nearest pole.

Graticule spacing See Meridians and Parallels. Poles are curved lines. Symmetry exists about the central meridian or the Equator.

Linear scale Scale decreases along the central meridian and the Equator with distance from the center of the Wagner VII projection.

Uses Used for world maps.


Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 129: Wagner VII Projection

Source: Snyder and Voxland, 1989


Winkel I

The Winkel I projection is “not free of distortion at any point” (Snyder and Voxland, 1989).

Source: Snyder and Voxland, 1989

Prompts

The following prompts display in the Projection Chooser once Winkel I is selected. Respond to the prompts as described.

Spheroid Name

Datum Name

Select the spheroid and datum to use.

The list of available spheroids is located in Table 44 on page 243.

Latitude of standard parallel

Longitude of central meridian

Enter values of the latitude of standard parallel and the longitude of central meridian.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is often convenient to make them large enough so that no negative coordinates occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Construction Pseudocylinder

Property Neither conformal nor equal-area

Meridians Central meridian is a straight line 0.61 the length of the Equator. The other meridians are sinusoidal curves that are equally spaced and concave toward the central meridian.

Parallels Parallels are equally spaced.

Graticule spacing See Meridians and Parallels. Pole lines are about 0.39 the length of the Equator. Symmetry exists about the central meridian or the Equator.

Linear scale Scale is true along latitudes 50° 28’ N and S. Scale is constant along any given latitude as well as the latitude of the opposite sign.

Uses Used for world maps.
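The Winkel I equations are the arithmetic mean of the Equirectangular and Sinusoidal projections. The following Python sketch gives the spherical forward equations and checks the 0.61 central-meridian ratio quoted above; the radius and names are illustrative assumptions.

```python
import math

# x = R*lam*(cos(lat1) + cos(lat)) / 2, y = R*lat, with standard parallel
# lat1 (50 deg 28 min in the classic form).
R = 6371000.0

def winkel_i(lon_deg, lat_deg, lon0_deg=0.0, lat1_deg=50.0 + 28.0 / 60.0):
    lam = math.radians(lon_deg - lon0_deg)
    lat = math.radians(lat_deg)
    lat1 = math.radians(lat1_deg)
    x = R * lam * (math.cos(lat1) + math.cos(lat)) / 2.0
    y = R * lat
    return x, y

# With lat1 = 50 deg 28 min, the central meridian is pi*R long and the Equator
# pi*R*(1 + cos(lat1)) long -- a ratio of about 0.61, as stated above.
lat1 = math.radians(50.0 + 28.0 / 60.0)
print(1.0 / (1.0 + math.cos(lat1)))  # ~0.611
```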


Figure 130: Winkel I Projection

Source: Snyder and Voxland, 1989


External Projections

The following external projections are supported in ERDAS IMAGINE and are described in this section. Some of these projections were discussed in the previous section. Those descriptions are not repeated here. Simply refer to the page number in parentheses for more information.

NOTE: ERDAS IMAGINE does not support datum shifts for these external projections.

• Albers Equal Area (see Albers Conical Equal Area on page 303)

• Azimuthal Equidistant (see Azimuthal Equidistant on page 306)

• Bipolar Oblique Conic Conformal

• Cassini-Soldner

• Conic Equidistant (see Equidistant Conic on page 333)

• Laborde Oblique Mercator

• Lambert Azimuthal Equal Area (see Lambert Azimuthal Equal Area on page 353)

• Lambert Conformal Conic (see Lambert Conformal Conic on page 356)

• Mercator (see Mercator on page 363)

• Minimum Error Conformal

• Modified Polyconic

• Modified Stereographic

• Mollweide Equal Area (see Mollweide on page 372)

• Oblique Mercator (see Oblique Mercator (Hotine) on page 376)

• Orthographic (see Orthographic on page 379)

• Plate Carrée (see Equirectangular (Plate Carrée) on page 336)

• Rectified Skew Orthomorphic (see RSO on page 392)

• Regular Polyconic (see Polyconic on page 386)

• Robinson Pseudocylindrical (see Robinson on page 390)


• Sinusoidal (see Sinusoidal on page 393)

• Southern Orientated Gauss Conformal

• Stereographic (see Stereographic on page 408)

• Swiss Cylindrical

• Stereographic (Oblique) (see Stereographic on page 408)

• Transverse Mercator (see Transverse Mercator on page 412)

• Universal Transverse Mercator (see UTM on page 416)

• Van der Grinten (see Van der Grinten I on page 419)

• Winkel’s Tripel


Bipolar Oblique Conic Conformal

The Bipolar Oblique Conic Conformal projection was developed by O.M. Miller and William A. Briesemeister in 1941 specifically for mapping North and South America, and maintains conformality for these regions. It is based upon the Lambert Conformal Conic, using two oblique conic projections side-by-side.

The two oblique conics are joined with the poles 104° apart. A great circle arc 104° long begins at 20°S and 110°W, cuts through Central America, and terminates at 45°N and approximately 19°59’36”W. The scale of the map is then increased by approximately 3.5%. The origin of the coordinates is made 17°15’N, 73°02’W.

Refer to Lambert Conformal Conic on page 356 for more information.

Prompts

The following prompts display in the Projection Chooser if Bipolar Oblique Conic Conformal is selected.

Projection Name

Spheroid Type

Datum Name

Construction Cone

Property Conformal

Meridians Meridians are complex curves concave toward the center of the projection.

Parallels Parallels are complex curves concave toward the nearest pole.

Graticule spacing

Graticule spacing increases away from the lines of true scale and retains the property of conformality.

Linear scale

Linear scale is true along two lines that do not lie along any meridian or parallel. Scale is compressed between these lines and expanded beyond them. Linear scale is generally good, but there is as much as a 10% error at the edge of the projection as used.

Uses Used to represent one or both of the American continents. Examples are the Basement map of North America and the Tectonic map of North America.


Cassini-Soldner

The Cassini projection was devised by C. F. Cassini de Thury in 1745 for the survey of France. Mathematical analysis by J. G. von Soldner in the early 19th century led to more accurate ellipsoidal formulas. Today, it has largely been replaced by the Transverse Mercator projection, although it is still in limited use outside of the United States. It was one of the major topographic mapping projections until the early 20th century.

The spherical form of the projection bears the same relation to the Equidistant Cylindrical, or Plate Carrée, projection that the spherical Transverse Mercator bears to the regular Mercator. Instead of having the straight meridians and parallels of the Equidistant Cylindrical, the Cassini has complex curves for each, except for the Equator, the central meridian, and each meridian 90° away from the central meridian, all of which are straight.

There is no distortion along the central meridian if it is maintained at true scale, which is the usual case. If it is given a reduced scale factor, the lines of true scale are two straight lines on the map, parallel to and equidistant from the central meridian, and there is no distortion along those lines instead.

Construction Cylinder

Property Compromise

Meridians Central meridian, each meridian 90° from the central meridian, and the Equator are straight lines. Other meridians are complex curves.

Parallels Parallels are complex curves.

Graticule spacing

Complex curves for all meridians and parallels, except for the Equator, the central meridian, and each meridian 90° away from the central meridian, all of which are straight.

Linear scale

Scale is true along the central meridian, and along lines perpendicular to the central meridian. Scale is constant but not true along lines parallel to the central meridian on the spherical form, and nearly so for the ellipsoid.

Uses Used for topographic mapping, formerly in England and currently in a few other countries, such as Denmark, Germany, and Malaysia.


The scale is correct along the central meridian, and also along any straight line perpendicular to the central meridian. It gradually increases in a direction parallel to the central meridian as the distance from that meridian increases, but the scale is constant along any straight line on the map that is parallel to the central meridian. Therefore, Cassini-Soldner is more suitable for regions that are predominantly north-south in extent, such as Great Britain, than regions extending in other directions. The projection is neither equal-area nor conformal, but it is a compromise between the two.

The Cassini-Soldner projection was adopted by the Ordnance Survey for the official survey of Great Britain during the second half of the 19th century. A system equivalent to the oblique Cassini-Soldner projection was used in early coordinate transformations for ERTS (now Landsat) satellite imagery, but it was changed to Oblique Mercator (Hotine) in 1978, and to the Space Oblique Mercator in 1982.
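For reference, the following Python sketch gives the spherical Cassini forward equations (after Snyder); survey-grade implementations use von Soldner's ellipsoidal series, so this is illustrative only, and the radius and names are assumptions.

```python
import math

R = 6371000.0

def cassini(lon_deg, lat_deg, lon0_deg, lat0_deg=0.0):
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    lon0, lat0 = math.radians(lon0_deg), math.radians(lat0_deg)
    x = R * math.asin(math.cos(lat) * math.sin(lon - lon0))
    y = R * (math.atan2(math.tan(lat), math.cos(lon - lon0)) - lat0)
    return x, y

# Scale is true along the central meridian (x = 0 there) and along any line
# perpendicular to it, which suits predominantly north-south regions.
print(cassini(-2.0, 52.0, -2.0, 49.0))  # a point on the central meridian
```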

Prompts

The following prompts display in the Projection Chooser if Cassini-Soldner is selected.

Projection Name

Spheroid Type

Datum Name


Laborde Oblique Mercator

In 1928, Laborde combined a conformal sphere with a complex-algebra transformation of the Oblique Mercator projection for the topographic mapping of Madagascar. This variation is now known as the Laborde Oblique Mercator. The central line is a great circle arc.

See Oblique Mercator (Hotine) on page 376 for more information.

Prompts The following prompts display in the Projection Chooser if Laborde Oblique Mercator is selected.

Projection Name

Spheroid Type

Datum Name


Minimum Error Conformal

The Minimum Error Conformal projection is the same as the New Zealand Map Grid projection.

For more information, see New Zealand Map Grid on page 374.


Modified Polyconic

The Modified Polyconic projection was devised by Lallemand of France, and in 1909 it was adopted by the International Map Committee (IMC) in London as the basis for the 1:1,000,000-scale International Map of the World (IMW) series.

The projection differs from the ordinary Polyconic in two principal features: all meridians are straight, and two meridians are made true to scale. Adjacent sheets fit together exactly not only north to south, but east to west. There is still a gap when mosaicking in all directions, in that a gap remains between each diagonal sheet and one or the other of its adjacent sheets. In 1962, a U.N. conference on the IMW adopted the Lambert Conformal Conic and the Polar Stereographic projections to replace the Modified Polyconic.

See Polyconic on page 386 for more information.

Prompts The following prompts display in the Projection Chooser if Modified Polyconic is selected.

Projection Name

Spheroid Type

Datum Name

Construction Cone

Property Compromise

Meridians All meridians are straight.

Parallels Parallels are circular arcs. The top and bottom parallels of each sheet are nonconcentric circular arcs.

Graticule spacing

The top and bottom parallels of each sheet are nonconcentric circular arcs. The two parallels are spaced from each other according to the true scale along the central meridian, which is slightly reduced.

Linear scale Scale is true along each parallel and along two meridians, but no parallel is standard.

Uses Used for the International Map of the World (IMW) series until 1962.


Modified Stereographic

The meridians and parallels of the Modified Stereographic projection are generally curved, and there is usually no symmetry about any point or line. There are limitations to these transformations. Most of them can only be used within a limited range. As the distance from the projection center increases, the meridians, parallels, and shorelines begin to exhibit loops, overlapping, and other undesirable curves. A world map using the GS50 (50-State) projection is almost illegible with meridians and parallels intertwined like wild vines.

Prompts The following prompts display in the Projection Chooser if Modified Stereographic is selected.

Projection Name

Spheroid Type

Datum Name

Construction Plane

Property Conformal

Meridians All meridians are normally complex curves, although some may be straight under certain conditions.

Parallels All parallels are complex curves, although some may be straight under certain conditions.

Graticule spacing

The graticule is normally not symmetrical about any axis or point.

Linear scale Scale is true along irregular lines, but the map is usually designed to minimize scale variation throughout a selected region.

Uses Used for maps of continents in the Eastern Hemisphere, for the Pacific Ocean, and for maps of Alaska and the 50 United States.


Mollweide Equal Area

The second oldest pseudocylindrical projection that is still in use (after the Sinusoidal) was presented by Carl B. Mollweide (1774-1825) of Halle, Germany, in 1805. It is an equal-area projection of the Earth within an ellipse. It has had a profound effect on world map projections in the 20th century, especially as an inspiration for other important projections, such as the Van der Grinten.

The Mollweide is normally used for world maps and occasionally for a very large region, such as the Pacific Ocean. This is because only two points on the Mollweide are completely free of distortion unless the projection is interrupted. These are the points at latitudes 40°44’12”N and S on the central meridian(s). The world is shown in an ellipse with the Equator, its major axis, twice as long as the central meridian, its minor axis. The meridians 90° east and west of the central meridian form a complete circle. All other meridians are elliptical arcs which, with their opposite numbers on the other side of the central meridian, form complete ellipses that meet at the poles.
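These properties follow from the spherical forward equations: an auxiliary angle theta is found by iteration from 2*theta + sin(2*theta) = pi * sin(phi), and the ellipse dimensions follow from a square-root-of-two scaling. A minimal Python sketch, with function name and iteration tolerance our own choices:

import math

def mollweide_forward(lat, lon, lon0=0.0, radius=6370997.0):
    # Spherical Mollweide forward equations; solve
    # 2*theta + sin(2*theta) = pi*sin(phi) by Newton iteration.
    phi = math.radians(lat)
    lam = math.radians(lon - lon0)
    theta = phi
    if abs(abs(phi) - math.pi / 2) > 1e-12:   # poles map to single points
        for _ in range(100):
            denom = 2 + 2 * math.cos(2 * theta)
            if abs(denom) < 1e-14:
                break
            delta = -(2 * theta + math.sin(2 * theta)
                      - math.pi * math.sin(phi)) / denom
            theta += delta
            if abs(delta) < 1e-10:
                break
    x = radius * (2 * math.sqrt(2) / math.pi) * lam * math.cos(theta)
    y = radius * math.sqrt(2) * math.sin(theta)
    return x, y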

Construction Pseudocylinder

Property Equal-area

Meridians All of the meridians are ellipses. The central meridian is a straight line, and 90° meridians are circular arcs (Pearson, 1990).

Parallels The Equator and parallels are straight lines perpendicular to the central meridian, but they are not equally spaced.

Graticule spacing

Linear graticules include the central meridian and the Equator (Environmental Systems Research Institute, 1992). Meridians are equally spaced along the Equator and along all other parallels. The parallels are straight parallel lines, but they are not equally spaced. The poles are points.

Linear scale

Scale is true along latitudes 40°44’N and S. Distortion increases with distance from these lines and becomes severe at the edges of the projection (Environmental Systems Research Institute, 1992).

Uses Often used for world maps (Pearson, 1990). Suitable for thematic or distribution mapping of the entire world, frequently in interrupted form (Environmental Systems Research Institute, 1992).


Prompts The following prompts display in the Projection Chooser if Mollweide Equal Area is selected.

Projection Name

Spheroid Type

Datum Name


Rectified Skew Orthomorphic

Martin Hotine (1898-1968) called the Oblique Mercator projection the Rectified Skew Orthomorphic projection.

See Oblique Mercator (Hotine) on page 376 for more information.

Prompts The following prompts display in the Projection Chooser if Rectified Skew Orthomorphic is selected.

Projection Name

Spheroid Type

Datum Name


Robinson Pseudocylindrical

The Robinson Pseudocylindrical projection provides a means of showing the entire Earth in an uninterrupted form. The continents appear as units and are in relatively correct size and location. Poles are represented as lines.

Meridians are equally spaced and resemble elliptical arcs, concave toward the central meridian. The central meridian is a straight line 0.51 times the length of the Equator. Parallels are equally spaced straight lines between 38°N and S, and then the spacing decreases beyond these limits. The poles are 0.53 times the length of the Equator. The projection is based upon tabular coordinates instead of mathematical formulas (Environmental Systems Research Institute, 1992).

Prompts The following prompts display in the Projection Chooser if Robinson Pseudocylindrical is selected.

Projection Name

Spheroid Type

Datum Name

Construction Pseudocylinder

Property Compromise

Meridians Meridians are elliptical arcs, equally spaced, and concave toward the central meridian.

Parallels Parallels are straight lines.

Graticule spacing

Parallels are straight lines and are parallel. The individual parallels are evenly divided by the meridians (Pearson, 1990).

Linear scale Generally, scale is made true along latitudes 38°N and S. Scale is constant along any given latitude, and for the latitude of opposite sign (Environmental Systems Research Institute, 1992).

Uses

Developed for use in general and thematic world maps. Used by Rand McNally since the 1960s and by the National Geographic Society since 1988 for general and thematic world maps (Environmental Systems Research Institute, 1992).


Southern Orientated Gauss Conformal

Southern Orientated Gauss Conformal is another name for the Transverse Mercator projection, after mathematician Friedrich Gauss (1777-1855). It is also called the Gauss-Krüger projection.

See Transverse Mercator on page 412 for more information.

Prompts The following prompts display in the Projection Chooser if Southern Orientated Gauss Conformal is selected.

Projection Name

Spheroid Type

Datum Name


Swiss Cylindrical The Swiss Cylindrical projection, used by the Swiss Landestopographie, is a form of the Oblique Mercator projection.

For more information, see Oblique Mercator (Hotine) on page 376.


Winkel’s Tripel Winkel’s Tripel was formulated in 1921 by Oswald Winkel of Germany. It is a combined projection that is the arithmetic mean of the Plate Carrée and Aitoff’s projection (Maling, 1992).
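Because the projection is defined as an arithmetic mean, its forward equations are easy to sketch: average the Plate Carrée coordinates (computed with a standard parallel; Winkel chose 50°28’) and the Aitoff coordinates. A Python sketch under those assumptions, with the function name our own:

import math

def winkel_tripel(lat, lon, lat1=50.4667, radius=1.0):
    # Arithmetic mean of Plate Carree and Aitoff coordinates;
    # lat1 is the standard parallel of the Plate Carree component.
    phi = math.radians(lat)
    lam = math.radians(lon)
    alpha = math.acos(math.cos(phi) * math.cos(lam / 2))
    sinc = math.sin(alpha) / alpha if alpha else 1.0   # unnormalized sinc
    x_aitoff = 2 * math.cos(phi) * math.sin(lam / 2) / sinc
    y_aitoff = math.sin(phi) / sinc
    x = radius * (lam * math.cos(math.radians(lat1)) + x_aitoff) / 2
    y = radius * (phi + y_aitoff) / 2
    return x, y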

Prompts The following prompts display in the Projection Chooser if Winkel’s Tripel is selected.

Projection Name

Spheroid Type

Datum Name

Figure 131: Winkel’s Tripel Projection

Source: Snyder and Voxland, 1989

Construction Modified azimuthal

Property Neither conformal nor equal-area

Meridians Central meridian is straight. Other meridians are curved and are equally spaced along the Equator, and concave toward the central meridian.

Parallels Equidistant spacing of parallels. Equator and the poles are straight. Other parallels are curved and concave toward the nearest pole.

Graticule spacing

Symmetry is maintained along the central meridian or the Equator.

Linear scale Scale is true along the central meridian and constant along the Equator.

Uses Used for world maps.


Mosaic

Introduction The mosaic process offers you the capability to stitch images together to create one large, cohesive image of an area. Because of the different features of MosaicPro, you can smooth these images before mosaicking them together, color balance them, or adjust the histograms of each image in order to present a better large picture. The images must contain map and projection information, but they do not need to be in the same projection or have the same cell sizes. The input images must have the same number of layers.

In addition to MosaicPro, Mosaic Express is a feature designed to make the mosaic process easier for you. Mosaic Express takes you through the steps of creating a mosaic project; it simplifies the process by gathering the important information about the project from you and then building the project without a lot of pre-processing on your part.

MosaicPro still offers the most options and allows the most input from you. A number of features are included with MosaicPro to aid you in creating a better mosaicked image from many separate images. In this chapter, the following features are discussed as part of the MosaicPro input image options, followed by an overview of Mosaic Express. In Input Image Mode for MosaicPro:

• Exclude Areas

• Image Dodging

• Illumination Equalization

• Color Balancing

• Histogram Matching

You can choose from the following when using Intersection Mode:

• Set Overlap Function

• Weighted Seamline Generation

• Geometry-based Seamline Generation

• Different options for choosing a seamline source

These options are available as part of the Output Image Mode:

• Output Image Options


• Preview the mosaic

• Run the mosaic process to disk

Input Image Mode

Exclude Areas When you decide to mosaic images together, you will probably want to use Image Dodging, Illumination Equalization, Color Balancing, or Histogram Matching to give the finished mosaicked image a smoother look, devoid of the bright patches or shadowy areas that can appear on images. Many of the color differences are caused by camera angle or cloud cover. Before applying any of those features, you can use the Exclude Areas feature to mark any areas you do not want taken into account during a Color Balancing, Image Dodging, or Histogram Matching process. Areas like dark water or bright urban areas can be excluded so they do not throw off the process.

The Exclude Areas function works on the principle of defining an AOI (area of interest) in a particular image and excluding that area if you wish. The feature makes it very easy to pinpoint and draw a polygon around specific areas by featuring two viewers: one with a shot of the entire image, and one zoomed to the AOI you have selected with the Link cursor.

If you right-click while your cursor is in the viewer, you will notice several options offered to help you better view your images by fitting the image to the viewer window, changing the Link cursor color, zooming in or out, rotating the image, changing band combinations, and so on. At the bottom of the Set Exclude Areas dialog, there is a tool bar with options for creating a polygon for your AOI, using the Region Growing tool for your AOI, selecting multiple AOIs, displaying AOI styles, and finding and removing areas similar to your chosen AOI.

Image Dodging Use Image Dodging to radiometrically balance images before you mosaic them. Image Dodging uses an algorithm to correct radiometric irregularities (such as hotspots and vignetting) in an image or group of images. Image Dodging and Color Balancing are similar; Color Balancing applies the correction to an image by modeling the shape of the problem (plane, conic, and so forth) while Image Dodging uses grids to localize the problem areas within an image. Image Dodging corrects brightness and contrast imbalances due to several image inconsistencies.

• Dark corners caused by lens vignetting

• Different film types in the group of images


• Different scanners used to collect the images (or different scanner settings)

• Varying sun positions among the images

• Hot spots caused by the sun position in relation to the imaging lens
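The grid idea can be illustrated with a toy example. The sketch below is not the MosaicPro algorithm; it simply pulls each grid cell's mean brightness toward the band mean, with a cap that mirrors the Max. Grey Shift idea. All names and defaults here are illustrative:

import numpy as np

def dodge_band(band, grid_size=8, max_grey_shift=35.0):
    # Toy grid-based dodging: one additive correction per grid cell.
    # Real dodging also adjusts contrast and blends corrections
    # smoothly between cells.
    out = band.astype(float).copy()
    target = out.mean()
    rows = np.linspace(0, band.shape[0], grid_size + 1, dtype=int)
    cols = np.linspace(0, band.shape[1], grid_size + 1, dtype=int)
    for r0, r1 in zip(rows[:-1], rows[1:]):
        for c0, c1 in zip(cols[:-1], cols[1:]):
            cell = out[r0:r1, c0:c1]
            shift = np.clip(target - cell.mean(),
                            -max_grey_shift, max_grey_shift)
            cell += shift
    return np.clip(out, 0, 255)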

The following tips show when to use the various parameters to correct specific inconsistencies.

Adjust color images only for brightness and contrast

Deselect the Band Independent parameter (Image Dodging dialog). This way you can control the brightness and contrast across all color bands while preserving the individual color ratios of red to green to blue.

Enhance Shadowed Areas

To pull detail out of shadow areas or low-contrast areas, set the Grid Size (Image Dodging dialog) to a larger number so that the process uses many small grids. You can also set the Max. Contrast parameter (Set Dodging Correction Parameters dialog) to a higher number such as 6. Another method would be to set the Max. Grey Shift (Set Dodging Correction Parameters dialog) to a larger number such as 50 (default = 35).

Reduce contrast or increase contrast

If your dodged images have too much contrast or not enough contrast, try the following.

• Change the Max. Grey Shift (Set Dodging Correction Parameters dialog). Start at 50 and reduce slightly if you see clipping effects at black or white.

• Change the Max Contrast (Set Dodging Correction Parameters dialog). You may see a major change when you change the Max Grey Shift, and if you need to make a more subtle change, decrease the contrast by 0.1 to 0.3.

Balance groups of images with different inconsistencies

If you have a large mosaic project and the images have different inconsistencies, group the images with the same inconsistency and dodge each group with the parameters specific to that inconsistency. For example, if you have images from multiple flight lines that have hot spots in different areas and/or of different intensity, run dodging together on the images that have similar hotspots.

Coastal Areas


If you have images with dominant water bodies and coastline, use a Grid Size (Image Dodging dialog) of 6 or 8. This smaller grid size works well because of the difference between bright land and dark water. This will reduce vignetting and hotspots without adversely affecting the appearance of the water. Also uncheck Band Independent (Image Dodging dialog) since a large area of dark water can corrupt the color interpretation of the bright land area.

When you bring up the Image Dodging dialog, you see several different sections. Options for Current Image, Options for All Images, and Display Setting are all above the viewer area, which shows the image and a place for previewing the dodged image. If you want to skip dodging for a certain image, you can check the Don’t do dodging on this image box and skip to the next image you want to mosaic.

In the area titled Statistics Collection, you can change the Grid Size, Skip Factor X, and Skip Factor Y. If you want a specific number to apply to all of your images, you can click that button so you do not have to reenter the information with each new image.

In Options For All Images, you first choose whether the image should be dodged by each band or as one. You then decide whether you want the dodging performed across all of the images you intend to mosaic or just one image. This is helpful if you have a set of images that all look smooth except for one that shows a shadow or bright spot. If you click Edit Correction Settings, you are prompted to Compute Settings first. If you want to, go ahead and compute the settings you have stipulated in the dialog. After the settings are computed, you see a dialog titled Set Dodging Correction Parameters. In this dialog you can change and reset the brightness and contrast and the constraints of the image.

Use Display Setting to choose either an RGB image or a Single Band image. If using an RGB image, you can change those bands to whatever combination you wish. After you compute the settings a final time, preview the dodged image in the dialog viewer so you know whether you need to do anything further to it before mosaicking.

Color Balancing When you click Use Color Balancing, you are given the option of Automatic Color Balancing. If you choose this option, the method will be chosen for you. If you want to manually choose the surface method and display options, choose Manual Color Manipulation in the Set Color Balancing dialog.


Mosaic Color Balancing gives you several options to balance any color disparities in your images before mosaicking them together into one large image. When you choose to use Color Balancing in the Color Corrections dialog, you will be asked if you want to color balance your images automatically or manually. For more control over how the images are color balanced, you should choose the manual color balancing option. Once you choose this option, you will have access to the Mosaic Color Balancing tool where you can choose different surface methods, display options, and surface settings for color balancing your images.

Surface Methods

When choosing a surface method, you should concentrate on how the light abnormality in your image is dispersed. Depending on the shape of the bright or shadowed area you want to correct, you should choose one of the following:

• Parabolic - The color difference is elliptical and does not darken at an equal rate on all sides.

• Conic - The color difference will peak in brightness in the center and darken at an equal rate on all sides.

• Linear - The color difference is graduated across the image.

• Exponential - The color difference is very bright in the center and slowly, but not always evenly, darkens on all sides.

It may be necessary to experiment a bit when trying to decide which surface method to use. It can sometimes be particularly difficult to tell the difference right away between parabolic, conic, and exponential. Conic is usually best for hot spots found in aerial photography, although linear may be necessary to correct flight line variations. The linear method is also useful for images with a large fall-off in illumination along the look direction, especially with SAR images, and also with off-nadir viewing sensors.

In the same area, you will see a check box for Common center for all layers. If you check this option, all layers in the current image have their center points set to that of the current layer. Whenever the selector is moved, the text box updated, or the reset button clicked, all of the layers are updated. If you move the center point and wish to bring it back to the middle of the image, you can click Reset Center Point in the Surface Method area.

Display Setting

The Display Setting area of the Mosaic Color Balancing tool lets you choose between RGB images and Single Band images. You can also alter which layer in an RGB image is the red, green, or blue.


Surface Settings

When you choose a Surface Method, the Surface Settings become the parameters used in that method’s formula. The parameters define the surface, and the surface is then used to flatten the brightness variation throughout the image. You can change the following Surface Settings:

• Offset

• Scale

• Center X

• Center Y

• Axis Ratio

As you change the settings, you can see the Image Profile graph change as well. If you want to preview the color balanced image before accepting it, you can click Preview at the bottom of the Mosaic Color Balancing tool. This is helpful because you can change any disparities that still exist in the image.

Histogram Matching Histogram Matching is used in other facets of IMAGINE, but it is particularly useful to the mosaicking process. You should use the Histogram Matching option to match data of the same or adjacent scenes that were captured on different days, or data that is slightly different because of sun or atmospheric effects.

By choosing Histogram Matching through the Color Corrections dialog in MosaicPro, you have the options of choosing the Matching Method, the Histogram Type, and whether or not to use an external reference file. When choosing a Matching Method, decide whether you want your images to be matched according to all the other images you want to mosaic or just matched to the overlapping areas between the images. For Histogram Type, you can choose to match images band by band or by the intensity (RGB) of the images.

If you check Use external reference, you are given the choice of using an image file or parameters as your Histogram Source. If you have an image that contains the characteristics you would like to see in the image you are running through Histogram Matching, then you should use it.
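The underlying operation is classic cumulative-distribution matching. The sketch below is a generic band-by-band version, not the IMAGINE implementation; the function and variable names are ours:

import numpy as np

def match_histogram(source, reference):
    # For each source value, find the reference value whose cumulative
    # frequency is closest; remap through that lookup table.
    src_vals, src_counts = np.unique(source, return_counts=True)
    ref_vals, ref_counts = np.unique(reference, return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    # Look up each source pixel's new value.
    return np.interp(source, src_vals, mapped)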


Intersection Mode When you mosaic images, you will have overlapping areas. For those overlapping areas, you can specify a cutline so that the pixels on one side of a particular cutline take the value of one overlapping image, while the pixels on the other side of the cutline take the value of another overlapping image. The cutlines can be generated manually or automatically.

When you choose the Set Mode for Intersection button on the MosaicPro toolbar, you have several different options for handling the overlapping of your images. The features for dealing with image overlap include:

• Loading cutlines from a vector file (a shapefile or arc coverage file)

• Editing cutlines as vectors in the viewer

• Automatic clipping, extending, and merging of cutlines that cross multiple image intersections

• Loading images and calibration information from triangulated block files as well as setting the elevation source

• Selecting mosaic output areas with ASCII files containing corner coordinates of sheets that may be rotated. The ASCII import tool is used to try to parse ASCII files that do not conform to a predetermined format.

• Allowing users to save cutlines and intersections to a pair of shapefiles

• Loading clip boundary output regions from AOI or vector files. This boundary applies to all output regions. Pixels outside the clip boundary will be set to the background color.

Set Overlap Function When you are using more than one image, you need to define how the images should overlap. Set Overlap Function lets you specify how to handle the overlap when no cutline exists and, when a cutline does exist, which smoothing or feathering options to apply along the cutline.

No Cutline Exists

When no cutline exists between overlapping images, you will need to choose how to handle the overlap. You are given the following choices (a sketch of these options follows the list):

• Overlay

• Average

• Minimum


• Maximum

• Feather
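These options compute a simple pixel-by-pixel combination over the overlap region. The following sketch is illustrative, not MosaicPro's internal code; it assumes two co-registered float arrays a and b covering the overlap:

import numpy as np

def blend_overlap(a, b, method="average", weight_a=None):
    # a, b: co-registered arrays covering the overlap region.
    # weight_a: per-pixel weights in [0, 1] used for feathering,
    # for example a normalized distance from image a's far edge.
    if method == "overlay":
        return a                       # the top image wins
    if method == "average":
        return (a + b) / 2.0
    if method == "minimum":
        return np.minimum(a, b)
    if method == "maximum":
        return np.maximum(a, b)
    if method == "feather":
        return weight_a * a + (1.0 - weight_a) * b
    raise ValueError("unknown overlap method: " + method)

# Feathering by distance across a 100-pixel-wide overlap:
# weights for image a fall linearly from 1 to 0.
weights = np.tile(np.linspace(1.0, 0.0, 100), (50, 1))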

Cutline ExistsWhen a cutline does exist between images, you will need to decide on smoothing and feathering options to cover the overlap area in the vicinity of the cutline. The Smoothing Options area allows you to choose both the Distance and the Smoothing Filter. The Feathering Options given are No Feathering, Feathering, and Feathering by Distance. If you choose Feathering by Distance, you will be able to enter a specific distance.

Automatically Generate Cutlines For Intersection

The current implementation of Automatic Cutline Generation is geometry-based. The method uses the centerlines of the overlapping polygons as cutlines. While this is a very straightforward approach, it is not recommended for images containing buildings, bridges, rivers, and so on, because the method can make the mosaicked images look obviously inaccurate near the cutline area. For example, if the cutline crosses a bridge, the bridge may look broken at the point where the cutline crosses it.

Weighted Cutline Generation

When your overlapping images contain buildings, bridges, rivers, roads, or anything else across which the cutline must not break, you should use the Weighted Cutline Generation option. Weighted Cutline Generation generates the most nadir cutline first. The most nadir cutline is divided into sections, where a section is a collection of continuous cutline segments shared by the two images. The starting and ending points of these sections are called nodes. Between nodes, the cutline is refined based on a cost function. The point with the smallest cost is picked as the next cutline vertex.

Cutline Refining Parameters

The first section of the Weighted Cutline Generation dialog is Cutline Refining Parameters. In this section you can choose the Segment Length, which specifies the segment length of the refined cutline. The smaller the segment length, the smaller the search area for the next vertex, and the smaller the chance of the cutline cutting through features such as roads or bridges. This is an especially important consideration for dense urban areas with many buildings. Smaller segment lengths usually slow down the search, but they reduce the chance of cutting through important features.


The Bounding Width specifies the constraint on new vertices in the direction perpendicular to the segment between two nodes. More specifically, the distance from a new vertex to the segment between the two nodes must be no greater than half of the value specified in the Bounding Width field.

Cost Function Weighting Factors

The cost function used in cutline refinement is a weighted combination of direction, standard deviation, and difference in gray value. The weighting favors high standard deviation, a low difference in gray value, and a direction that is closest to the direction between the two nodes. The default value is one for all three weighting factors. When left at the defaults, all three components play the same role. When you increase one weighting factor, that component plays a larger role. If you set a weighting factor to zero, the corresponding component plays no role at all. If you set all three weighting factors to zero, no cutline refinement is done, and the weighted cutline is reduced to the most nadir cutline.
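The exact normalization MosaicPro applies is not documented here, but the shape of the weighting can be sketched as follows (all inputs are assumed pre-scaled to the range 0 to 1; lower cost is better):

def cutline_cost(direction_dev, local_std, grey_diff,
                 w_dir=1.0, w_std=1.0, w_grey=1.0):
    # Favor: a direction close to the node-to-node direction
    # (small direction_dev), high local standard deviation,
    # and a small grey-value difference between the two images.
    return (w_dir * direction_dev
            + w_std * (1.0 - local_std)
            + w_grey * grey_diff)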

Geometry-based Cutline Generation

Geometry-based Cutline Generation is simpler because it is based only on the geometry of the overlapping region between images; pixel values of the involved images are not used. For an overlapping region that involves only two images, the geometry-based cutline can be seen as a center line that cuts the overlapping area into two equal halves: one half is closer to the center of the first image, and the other half is closer to the center of the second image. Geometry-based Cutline Generation runs very quickly compared to Weighted Cutline Generation because it does not have to factor in pixels from the images. Use the geometry-based method when your images contain homogeneous areas such as grass or lakes, but use Weighted Cutline Generation for images containing features across which the cutline cannot break, such as buildings, roads, rivers, and urban areas.

Output Image Mode

After you have chosen the images to be mosaicked; gone through any color balancing, histogram matching, or image dodging; and checked overlapping images for possible cutline needs, you are ready to output the images to an actual mosaic file. When you select the Set Mode for Output portion of MosaicPro, the first feature you will want to use is Output Image Options. After choosing those options, you can preview the mosaic and then run it to disk.

Output Image Options This dialog lets you define your output map areas and change output map projection if you wish. You will be given the choice of using Union of All Inputs, User-defined AOI, Polygon Vector File, Map Series File, USGS Maps Database, or ASCII Sheet File as your defining feature for an output map area. The default is Union of All Inputs.


Different choices yield different options to further modify the output image. For instance, if you select User-defined AOI, then you are given the choice of outputting multiple AOI objects to either multiple files or a single file. If you choose Map Series File, you will be able to enter the filename you want to use and choose whether to treat the map extent as pixel centers or pixel edges.

ASCII Sheet File Format

There are two file formats for ASCII sheet files. One lists the coordinates on separate lines with hard-coded identifiers: “UL”, “UR”, “LL”, “LR”. This format can contain multiple sheet files separated by the code “-99”. The sheet name is optional.

some_sheet_name (north-up orthoimage)
UL 0 0
LR 10 10
-99

second_sheet_name (rotated orthoimage)
UL 0 0
UR 5 5
LL 3 3
LR 10 10
-99

The second format lists the coordinates on a single line with a blank space as a separator. The sheet name is optional.

third_sheet_name 0 0 10 10 (north-up orthoimage)

fourth_sheet_name 0 0 3 0 3 3 0 3 (rotated orthoimage)

Image Boundaries Type

There are two types of image boundaries. If you enter two image coordinates, the boundary is treated as a north-up orthoimage. If you enter four image coordinates, it is treated as a rotated orthoimage.

Also part of Output Image Options is the option of choosing a Clip Boundary. If you choose a Clip Boundary, any area outside of it is set to the background value in your output image. This differs from the User-defined AOI because the Clip Boundary applies to all output images. You can also click Change Output Map Projection to bring up the Projection Chooser. The Projection Chooser lets you choose a particular projection from categories and projections around the world. If you want to choose a customized map projection, you can do that as well.

You are also given the option of changing the Output Cell Size from the default of 8.0, and you can choose a particular Output Data Type from a dropdown list instead of the default Unsigned 8 bit. When you are done selecting Output Image Options, you can preview the mosaicked image before saving it as a file.


Run Mosaic To Disc When you are ready to process the mosaicked image to disk, you can click this icon and open the Output File Name dialog. From this dialog, browse to the directory where you want to store your mosaicked image, and enter the file name for the image. There are several options on the Output Options tab, such as Output to a Common Look Up Table, Ignore Input Values, Output Background Value, and Create Output in Batch mode. You can choose any of these according to your desired outcome.


Enhancement

Introduction Image enhancement is the process of making an image more interpretable for a particular application (Faust, 1989). Enhancement makes important features of raw, remotely sensed data more interpretable to the human eye. Enhancement techniques are often used instead of classification techniques for feature extraction—studying and locating areas and objects on the ground and deriving useful information from images. The techniques to be used in image enhancement depend upon:

• Your data—the different bands of Landsat, SPOT, and other imaging sensors are selected to detect certain features. You must know the parameters of the bands being used before performing any enhancement. (See "Raster Data" on page 1 for more details.)

• Your objective—for example, sharpening an image to identify features that can be used for training samples requires a different set of enhancement techniques than reducing the number of bands in the study. You must have a clear idea of the final product desired before enhancement is performed.

• Your expectations—what you think you are going to find.

• Your background—your experience in performing enhancement.

This chapter discusses these enhancement techniques available with ERDAS IMAGINE:

• Correcting Data Anomalies - radiometric and geometric correction

• Radiometric Enhancement - enhancing images based on the values of individual pixels

• Spatial Enhancement - enhancing images based on the values of individual and neighboring pixels

• Wavelet Resolution Merge - fusing information from several sensors into one composite image

• Spectral Enhancement - enhancing images by transforming the values of each pixel on a multiband basis

• Hyperspectral image processing - this section is superseded by the IMAGINE Spectral Analysis™ User’s Guide


• Independent Component Analysis - a high order feature extraction technique that exploits the statistical characteristics of multispectral and hyperspectral imagery

• Fourier Analysis - techniques for eliminating periodic noise in imagery

• Radar Imagery Enhancement - techniques specifically designed for enhancing radar imagery

See "Bibliography" on page 777 to find current literature that provides a more detailed discussion of image processing enhancement techniques.

Display vs. File Enhancement

With ERDAS IMAGINE, image enhancement may be performed:

• temporarily, upon the image that is displayed in the Viewer (by manipulating the function and display memories), or

• permanently, upon the image data in the data file.

Enhancing a displayed image is much faster than enhancing an image on disk. If one is looking for certain visual effects, it may be beneficial to perform some trial and error enhancement techniques on the display. Then, when the desired results are obtained, the values that are stored in the display device memory can be used to make the same changes to the data file.

For more information about displayed images and the memory of the display device, see "Image Display" on page 145.

Spatial Modeling Enhancements

Two types of models for enhancement can be created in ERDAS IMAGINE:

• Graphical models—use Model Maker (Spatial Modeler) to easily, and with great flexibility, construct models that can be used to enhance the data.

• Script models—for even greater flexibility, use the Spatial Modeler Language (SML) to construct models in script form. SML enables you to write scripts which can be written, edited, and run from the Spatial Modeler component or directly from the command line. You can edit models created with Model Maker using SML or Model Maker.


Although a graphical model and a script model look different, they produce the same results when applied.

Image Interpreter

ERDAS IMAGINE supplies many algorithms constructed as models, which are ready to be applied with user-input parameters at the touch of a button. These graphical models, created with Model Maker, are listed as menu functions in the Image Interpreter. These functions are mentioned throughout this chapter. Just remember, these are modeling functions which can be edited and adapted as needed with Model Maker or the SML.

See "Geographic Information Systems" on page 173 for more information on Raster Modeling.

The modeling functions available for enhancement in Image Interpreter are briefly described in Table 49.

Table 49: Description of Modeling Functions Available for Enhancement

Function Description

SPATIAL ENHANCEMENT

These functions enhance the image using the values of individual and surrounding pixels.

Convolution Uses a matrix to average small sets of pixels across an image.

Non-directional Edge Averages the results from two orthogonal 1st derivative edge detectors.

Focal Analysis Enables you to perform one of several analyses on class values in an image file using a process similar to convolution filtering.

Texture Defines texture as a quantitative characteristic in an image.

Adaptive Filter Varies the contrast stretch for each pixel depending upon the DN values in the surrounding moving window.

Statistical Filter Produces the pixel output DN by averaging pixels within a moving window that fall within a statistically defined range.

Resolution Merge Merges imagery of differing spatial resolutions.


Crisp Sharpens the overall scene luminance without distorting the thematic content of the image.

RADIOMETRIC ENHANCEMENT

These functions enhance the image using the values of individual pixels within each band.

LUT (Lookup Table) Stretch Creates an output image that contains the data values as modified by a lookup table.

Histogram Equalization Redistributes pixel values with a nonlinear contrast stretch so that there are approximately the same number of pixels with each value within a range.

Histogram Match Mathematically determines a lookup table that converts the histogram of one image to resemble the histogram of another.

Brightness Inversion Allows both linear and nonlinear reversal of the image intensity range.

Haze Reduction* Dehazes Landsat 4 and 5 TM data and panchromatic data.

Noise Reduction* Removes noise using an adaptive filter.

Destripe TM Data Removes striping from a raw TM4 or TM5 data file.

SPECTRAL ENHANCEMENT

These functions enhance the image by transforming the values of each pixel on a multiband basis.

Principal Components Compresses redundant data values into fewer bands, which are often more interpretable than the source data.

Inverse Principal Components Performs an inverse principal components analysis.

Decorrelation Stretch Applies a contrast stretch to the principal components of an image.

Tasseled Cap Rotates the data structure axes to optimize data viewing for vegetation studies.

RGB to IHS Transforms red, green, blue values to intensity, hue, saturation values.


IHS to RGB Transforms intensity, hue, saturation values to red, green, blue values.

Indices Performs band ratios that are commonly used in mineral and vegetation studies.

Natural Color Simulates natural color for TM data.

FOURIER ANALYSIS

These functions enhance the image by applying a Fourier Transform to the data.

Fourier Transform* Enables you to utilize a highly efficient version of the Discrete Fourier Transform (DFT).

Fourier Transform Editor* Enables you to edit Fourier images using many interactive tools and filters.

Inverse Fourier Transform* Computes the inverse two-dimensional Fast Fourier Transform (FFT) of the spectrum stored.

Fourier Magnitude* Converts the Fourier Transform image into the more familiar Fourier Magnitude image.

Periodic Noise Removal* Automatically removes striping and other periodic noise from images.

Homomorphic Filter* Enhances imagery using an illumination/reflectance model.

* Indicates functions that are not graphical models.

NOTE: There are other Image Interpreter functions that do not necessarily apply to image enhancement.

Correcting Data Anomalies

Each generation of sensors shows improved data acquisition and image quality over previous generations. However, some anomalies still exist that are inherent to certain sensors and that can be corrected by applying mathematical formulas derived from the distortions (Lillesand and Kiefer, 1987). In addition, the natural distortion that results from the curvature and rotation of the Earth in relation to the sensor platform produces distortions in the image data, which can also be corrected.


Radiometric Correction

Generally, there are two types of data correction: radiometric and geometric. Radiometric correction addresses variations in the pixel intensities (DNs) that are not caused by the object or scene being scanned. These variations include:

• differing sensitivities or malfunctioning of the detectors

• topographic effects

• atmospheric effects

Geometric Correction

Geometric correction addresses errors in the relative positions of pixels. These errors are induced by:

• sensor viewing geometry

• terrain variations

Because of the differences in radiometric and geometric correction between traditional, passively detected visible/infrared imagery and actively acquired radar imagery, the two are discussed separately. See Radar Imagery Enhancement on page 525.

Radiometric Correction: Visible/Infrared Imagery

Striping

Striping or banding occurs if a detector goes out of adjustment—that is, it provides readings consistently greater than or less than the other detectors for the same band over the same ground cover. Some Landsat 1, 2, and 3 data have striping every sixth line, because of improper calibration of some of the 24 detectors that were used by the MSS. The stripes are not constant data values, nor is there a constant error factor or bias. The differing response of the errant detector is a complex function of the data value sensed. This problem has been largely eliminated in the newer sensors. Various algorithms have been advanced in current literature to help correct this problem in the older data. Among these algorithms are simple along-line convolution, high-pass filtering, and forward and reverse principal component transformations (Crippen, 1989a).


Data from airborne multispectral or hyperspectral imaging scanners also shows a pronounced striping pattern due to varying offsets in the multielement detectors. This effect can be further exacerbated by unfavorable sun angle. These artifacts can be minimized by correcting each scan line to a scene-derived average (Kruse, 1988).

Use the Image Interpreter or the Spatial Modeler to implement algorithms to eliminate striping. The Spatial Modeler editing capabilities allow you to adapt the algorithms to best address the data. The IMAGINE Radar Interpreter Adjust Brightness function also corrects some of these problems.

Line Dropout

Another common remote sensing device error is line dropout. Line dropout occurs when a detector either completely fails to function, or becomes temporarily saturated during a scan (like the effect of a camera flash on the retina). The result is a line or partial line of data with higher data file values, creating a horizontal streak until the detector(s) recovers, if it recovers. Line dropout is usually corrected by replacing the bad line with a line of estimated data file values, which is based on the lines above and below it.

Atmospheric Effects The effects of the atmosphere upon remotely-sensed data are not considered errors, since they are part of the signal received by the sensing device (Bernstein, 1983). However, it is often important to remove atmospheric effects, especially for scene matching and change detection analysis. Over the past 30 years, a number of algorithms have been developed to correct for variations in atmospheric transmission. Four categories are mentioned here:

• dark pixel subtraction

• radiance to reflectance conversion

• linear regressions

• atmospheric modeling

Use the Spatial Modeler to construct the algorithms for these operations.


Dark Pixel Subtraction

The dark pixel subtraction technique assumes that the pixel of lowest DN in each band should really be zero, and hence its radiometric value (DN) is the result of atmosphere-induced additive errors (Crane, 1971; Chavez et al, 1977). These assumptions are very tenuous, and recent work indicates that this method may actually degrade rather than improve the data (Crippen, 1987).
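As a concrete illustration (bearing in mind the caveat above), the correction amounts to a per-band subtraction of the darkest DN. A minimal numpy sketch:

import numpy as np

def dark_pixel_subtraction(image):
    # image: array shaped (bands, rows, cols).
    # Subtract each band's minimum DN, treating it as additive haze.
    dark = image.min(axis=(1, 2), keepdims=True)
    return image - dark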

Radiance to Reflectance Conversion

Radiance to reflectance conversion requires knowledge of the true ground reflectance of at least two targets in the image. These can come from either at-site reflectance measurements, or they can be taken from a reflectance table for standard materials. The latter approach involves assumptions about the targets in the image.

Linear Regressions

A number of methods using linear regressions have been tried. These techniques use bispectral plots and assume that the position of any pixel along that plot is strictly a result of illumination. The slope then equals the relative reflectivities for the two spectral bands. At an illumination of zero, the regression plots should pass through the bispectral origin. Offsets from this represent the additive extraneous components, due to atmosphere effects (Crippen, 1987).

Atmospheric Modeling

Atmospheric modeling is computationally complex and requires either assumptions or inputs concerning the atmosphere at the time of imaging. The atmospheric model used to define the computations is frequently Lowtran or Modtran (Kneizys et al, 1988). This model requires inputs such as atmospheric profile (for example, pressure, temperature, water vapor, ozone), aerosol type, elevation, solar zenith angle, and sensor viewing angle.

Accurate atmospheric modeling is essential in preprocessing hyperspectral data sets where bandwidths are typically 10 nm or less. These narrow bandwidth corrections can then be combined to simulate the much wider bandwidths of Landsat or SPOT sensors (Richter, 1990).

Geometric Correction As previously noted, geometric correction is applied to raw sensor data to correct errors of perspective due to the Earth’s curvature and sensor motion. Today, some of these errors are commonly removed at the sensor’s data processing center. In the past, some data from Landsat MSS 1, 2, and 3 were not corrected before distribution. Many visible/infrared sensors are not nadir-viewing: they look to the side. For some applications, such as stereo viewing or DEM generation, this is an advantage. For other applications, it is a complicating factor.


In addition, even a nadir-viewing sensor is viewing only the scene center at true nadir. Other pixels, especially those on the view periphery, are viewed off-nadir. For scenes covering very large geographic areas (such as AVHRR), this can be a significant problem. This and other factors, such as Earth curvature, result in geometric imperfections in the sensor image. Terrain variations have the same distorting effect, but on a smaller (pixel-by-pixel) scale. These factors can be addressed by rectifying the image to a map.

See "Rectification" on page 251 for more information on geometric correction using rectification and "Photogrammetric Concepts" on page 595 for more information on orthocorrection.

A more rigorous geometric correction utilizes a DEM and sensor position information to correct these distortions. This is orthocorrection.

Radiometric Enhancement

Radiometric enhancement deals with the individual values of the pixels in the image. It differs from spatial enhancement (discussed in Spatial Enhancement on page 474), which takes into account the values of neighboring pixels. Depending on the points and the bands in which they appear, radiometric enhancements that are applied to one band may not be appropriate for other bands. Therefore, the radiometric enhancement of a multiband image can usually be considered as a series of independent, single-band enhancements (Faust, 1989). Radiometric enhancement usually does not bring out the contrast of every pixel in an image. Contrast can be lost between some pixels, while gained on others.

Figure 132: Histograms of Radiometrically Enhanced Data

(Two histograms of frequency versus data file values, 0 to 255, for the original data and the enhanced data; j and k are reference points marked on each.)


In Figure 132, the range between j and k in the histogram of the original data is about one third of the total range of the data. When the same data are radiometrically enhanced, the range between j and k can be widened. Therefore, the pixels between j and k gain contrast—it is easier to distinguish different brightness values in these pixels. However, the pixels outside the range between j and k are more grouped together than in the original histogram to compensate for the stretch between j and k. Contrast among these pixels is lost.

Contrast Stretching When radiometric enhancements are performed on the display device, the transformation of data file values into brightness values is illustrated by the graph of a lookup table. For example, Figure 133 shows the graph of a lookup table that increases the contrast of data file values in the middle range of the input data (the range within the brackets). Note that the input range within the bracket is narrow, but the output brightness values for the same pixels are stretched over a wider range. This process is called contrast stretching.

Figure 133: Graph of a Lookup Table

(Input data file values, 0 to 255, on the horizontal axis; output brightness values, 0 to 255, on the vertical axis.)

Notice that the graph line with the steepest (highest) slope brings out the most contrast by stretching output values farther apart.

Linear and Nonlinear The terms linear and nonlinear, when describing types of spectral enhancement, refer to the function that is applied to the data to perform the enhancement. A piecewise linear stretch uses a polyline function to increase contrast to varying degrees over different ranges of the data, as in Figure 134.



Figure 134: Enhancement with Lookup Tables

(Linear, nonlinear, and piecewise linear functions mapping input data file values, 0 to 255, to output brightness values, 0 to 255.)

Linear Contrast Stretch A linear contrast stretch is a simple way to improve the visible contrast of an image. It is often necessary to contrast-stretch raw image data, so that they can be seen on the display. In most raw data, the data file values fall within a narrow range—usually a range much narrower than the display device is capable of displaying. That range can be expanded to utilize the total range of the display device (usually 0 to 255).

A linear contrast stretch using a Percentage LUT, clipping 2.5% from the left end and 1.0% from the right end of the histogram, is automatically applied to images displayed in the Viewer.
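That default behavior amounts to a percentile-clipped linear stretch. A sketch of the idea (the exact IMAGINE lookup table construction may differ):

import numpy as np

def percent_clip_stretch(band, left_clip=2.5, right_clip=1.0, out_max=255):
    # Clip the histogram tails, then stretch linearly to 0..out_max.
    lo = np.percentile(band, left_clip)
    hi = np.percentile(band, 100.0 - right_clip)
    stretched = (band.astype(float) - lo) / (hi - lo) * out_max
    return np.clip(stretched, 0, out_max).astype(np.uint8)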

Nonlinear Contrast Stretch A nonlinear spectral enhancement can be used to gradually increase or decrease contrast over a range, instead of applying the same amount of contrast (slope) across the entire image. Usually, nonlinear enhancements bring out the contrast in one range while decreasing the contrast in other ranges. The graph of the function in Figure 135 shows one example.



Figure 135: Nonlinear Radiometric Enhancement

(A nonlinear function mapping input data file values, 0 to 255, to output brightness values, 0 to 255.)

Piecewise Linear Contrast Stretch

A piecewise linear contrast stretch allows for the enhancement of a specific portion of data by dividing the lookup table into three sections: low, middle, and high. It enables you to create a number of straight line segments that can simulate a curve. You can enhance the contrast or brightness of any section in a single color gun at a time. This technique is very useful for enhancing image areas in shadow or other areas of low contrast.

In ERDAS IMAGINE, the Piecewise Linear Contrast function is set up so that there are always pixels in each data file value from 0 to 255. You can manipulate the percentage of pixels in a particular range, but you cannot eliminate a range of data file values.

A piecewise linear contrast stretch normally follows two rules:

1) The data values are continuous; there can be no break in the values between High, Middle, and Low. Range specifications adjust in relation to any changes to maintain the data value range.

2) The data values specified can go only in an upward, increasing direction, as shown in Figure 136.
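Both rules amount to requiring monotonically increasing breakpoints. A minimal sketch of a piecewise linear lookup, with hypothetical breakpoints:

import numpy as np

def piecewise_stretch(band, breakpoints):
    # breakpoints: (input, output) pairs, increasing in both
    # coordinates, satisfying rules 1 and 2 above.
    xs, ys = zip(*breakpoints)
    return np.interp(band, xs, ys).astype(np.uint8)

# Example: stretch the middle range at the expense of the tails.
lut_points = [(0, 0), (60, 20), (120, 200), (255, 255)]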



Figure 136: Piecewise Linear Contrast Stretch

(LUT value, 0 to 100%, versus the data value range, 0 to 255, divided into Low, Middle, and High sections.)

The contrast value for each range represents the percent of the available output range that particular range occupies. The brightness value for each range represents the middle of the total range of brightness values occupied by that range. Since rules 1 and 2 above are enforced, as the contrast and brightness values are changed, they may affect the contrast and brightness of other ranges. For example, if the contrast of the low range increases, it forces the contrast of the middle to decrease.

Contrast Stretch on the Display Usually, a contrast stretch is performed on the display device only, so that the data file values are not changed. Lookup tables are created that convert the range of data file values to the maximum range of the display device. You can then edit and save the contrast stretch values and lookup tables as part of the raster data image file. These values are loaded into the Viewer as the default display values the next time the image is displayed.

In ERDAS IMAGINE, you can permanently change the data file values to the lookup table values. Use the Image Interpreter LUT Stretch function to create an .img output file with the same data values as the displayed contrast stretched image.

See "Raster Data" on page 1 for more information on the data contained in image files.



The statistics in the image file contain the mean, standard deviation, and other statistics on each band of data. The mean and standard deviation are used to determine the range of data file values to be translated into brightness values or new data file values. You can specify the number of standard deviations from the mean that are to be used in the contrast stretch. Usually the data file values that are two standard deviations above and below the mean are used. If the data have a normal distribution, then this range represents approximately 95 percent of the data. The mean and standard deviation are used instead of the minimum and maximum data file values because the minimum and maximum data file values are usually not representative of most of the data. A notable exception occurs when the feature being sought is in shadow. The shadow pixels are usually at the low extreme of the data file values, outside the range of two standard deviations from the mean.
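A sketch of that standard-deviation stretch (here n_std = 2, clipping anything outside the range; names and defaults are ours):

import numpy as np

def stddev_stretch(band, n_std=2.0, out_max=255):
    # Stretch mean +/- n_std standard deviations to 0..out_max;
    # with normally distributed data, n_std = 2 covers about 95%
    # of the pixels. Values outside the range are clipped.
    mean, std = band.mean(), band.std()
    lo, hi = mean - n_std * std, mean + n_std * std
    stretched = (band.astype(float) - lo) / (hi - lo) * out_max
    return np.clip(stretched, 0, out_max).astype(np.uint8)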

The use of these statistics in contrast stretching is discussed and illustrated in "Image Display" on page 145. Statistical terms are discussed in "Math Topics" on page 697.

Varying the Contrast Stretch There are variations of the contrast stretch that can be used to change the contrast of values over a specific range, or by a specific amount. By manipulating the lookup tables as in Figure 137, the maximum contrast in the features of an image can be brought out. Figure 137 shows how the contrast stretch manipulates the histogram of the data, increasing contrast in some areas and decreasing it in others. This is also a good example of a piecewise linear contrast stretch, which is created by adding breakpoints to the histogram.

Page 499: ERDAS Field Guide

Enhancement 469

Figure 137: Contrast Stretch Using Lookup Tables, and Effect on Histogram

(Each panel plots output brightness values, 0 to 255, against input data file values, 0 to 255, together with the input and output histograms.)

1. Linear stretch. Values are clipped at 255.

2. A breakpoint is added to the linear function, redistributing the contrast.

3. Another breakpoint is added. Contrast at the peak of the histogram continues to increase.

4. The breakpoint at the top of the function is moved so that values are not clipped.

Histogram Equalization

Histogram equalization is a nonlinear stretch that redistributes pixel values so that there is approximately the same number of pixels with each value within a range. The result approximates a flat histogram. Therefore, contrast is increased at the peaks of the histogram and lessened at the tails. Histogram equalization can also separate pixels into distinct groups if there are few output values over a wide range. This can have the visual effect of a crude classification.



Figure 138: Histogram Equalization

(Original histogram versus the histogram after equalization: pixels at the peak are spread apart, so contrast is gained; pixels at the tails are grouped together, so contrast is lost.)

To perform a histogram equalization, the pixel values of an image (either data file values or brightness values) are reassigned to a certain number of bins, which are simply numbered sets of pixels. The pixels are then given new values, based upon the bins to which they are assigned. The following parameters are entered:

• N - the number of bins to which pixel values can be assigned. If there are many bins or many pixels with the same value(s), some bins may be empty.

• M - the maximum of the range of the output values. The range of the output values is from 0 to M.

The total number of pixels is divided by the number of bins, equaling the number of pixels per bin, as shown in the following equation:

A = T ÷ N

Where:

N = the number of bins
T = the total number of pixels in the image
A = the equalized number of pixels per bin

The pixels of each input value are assigned to bins, so that the number of pixels in each bin is as close to A as possible. Consider Figure 139:


Figure 139: Histogram Equalization Example

data file value:     0    1    2    3    4    5    6    7    8    9
number of pixels:    5    5   10   15   60   60   40   30   10    5

(A = 24)

There are 240 pixels represented by this histogram. To equalize this histogram to 10 bins, there would be:

240 pixels / 10 bins = 24 pixels per bin = A

To assign pixels to bins, the following equation is used:

Bi = int [ ( Σk=1..i−1 Hk + Hi ÷ 2 ) ÷ A ]

Where:

A = equalized number of pixels per bin (see above)
Hi = the number of pixels with the value i (histogram)
int = integer function (truncating real numbers to integer)
Bi = bin number for pixels with value i

Source: Modified from Gonzalez and Wintz, 1977

The 10 bins are rescaled to the range 0 to M. In this example, M = 9, because the input values ranged from 0 to 9, so that the equalized histogram can be compared to the original. The output histogram of this equalized image looks like Figure 140:


Figure 140: Equalized Histogram

output data file value:    0    1    2    3    4    5    6    7    8    9
number of pixels:         20   15   60    0    0   60    0   40   30   15

(A = 24. The numbers inside the bars of the figure are the input data file values: inputs 0, 1, and 2 all fall in output value 0; input 3 in output 1; input 4 in output 2; input 5 in output 5; input 6 in output 7; input 7 in output 8; inputs 8 and 9 in output 9.)

Effect on Contrast

By comparing the original histogram of the example data with the one above, you can see that the enhanced image gains contrast in the peaks of the original histogram. For example, the input range of 3 to 7 is stretched to the range 1 to 8. However, data values at the tails of the original histogram are grouped together. Input values 0 through 2 all have the output value of 0. So, contrast among the tail pixels, which usually make up the darkest and brightest regions of the input image, is lost. The resulting histogram is not exactly flat, since the pixels can rarely be grouped together into bins with an equal number of pixels. Sets of pixels with the same value are never split up to form equal bins.
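The bin-assignment equation and the example above can be checked in a few lines of Python/NumPy; this reproduces the Figure 139 and Figure 140 numbers, but it is not the ERDAS IMAGINE code.

import numpy as np

# Histogram from Figure 139: number of pixels for data file values 0..9.
H = np.array([5, 5, 10, 15, 60, 60, 40, 30, 10, 5])
T = H.sum()          # 240 pixels in total
N = 10               # number of bins
A = T / N            # 24 pixels per bin

# Sum of Hk for k = 1 .. i-1 (pixels with values below value i).
below = np.concatenate(([0], np.cumsum(H)[:-1]))

# Bi = int[(sum + Hi/2) / A]; astype(int) truncates, as the int function does.
B = ((below + H / 2) / A).astype(int)
print(B)   # [0 0 0 1 2 5 7 8 9 9]: inputs 0-2 -> 0, 3 -> 1, ..., 8 and 9 -> 9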

Level Slice

A level slice is similar to a histogram equalization in that it divides the data into equal amounts. A level slice on a true color display creates a stair-stepped lookup table. The effect on the data is that input file values are grouped together at regular intervals into a discrete number of levels, each with one output brightness value.

To perform a true color level slice, you must specify a range for the output brightness values and a number of output levels. The lookup table is then stair-stepped so that there is an equal number of input pixels in each of the output levels.


Histogram Matching

Histogram matching is the process of determining a lookup table that converts the histogram of one image to resemble the histogram of another. Histogram matching is useful for matching data of the same or adjacent scenes that were scanned on separate days, or are slightly different because of sun angle or atmospheric effects. This is especially useful for mosaicking or change detection.

To achieve good results with histogram matching, the two input images should have similar characteristics:

• The general shape of the histogram curves should be similar.

• Relative dark and light features in the image should be the same.

• For some applications, the spatial resolution of the data should be the same.

• The relative distributions of land covers should be about the same, even when matching scenes that are not of the same area. If one image has clouds and the other does not, then the clouds should be removed before matching the histograms. This can be done using the AOI function. The AOI function is available from the Viewer menu bar.

In ERDAS IMAGINE, histogram matching is performed band-to-band (for example, band 2 of one image is matched to band 2 of the other image).

To match the histograms, a lookup table is mathematically derived, which serves as a function for converting one histogram to the other, as illustrated in Figure 141.

Figure 141: Histogram Matching

Source histogram (a), mapped through the lookup table (b), approximates model histogram (c).
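The Field Guide does not spell out how the lookup table is derived; one standard derivation, sketched below, matches cumulative histograms, sending each source value to the model value with the closest cumulative frequency. The function name is hypothetical.

import numpy as np

def match_histogram(source, model):
    # Remap the source band so its histogram resembles the model band's.
    s_vals, s_counts = np.unique(source, return_counts=True)
    m_vals, m_counts = np.unique(model, return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size   # cumulative frequencies
    m_cdf = np.cumsum(m_counts) / model.size
    # Lookup table: for each source value, the model value whose cumulative
    # frequency is closest (linear interpolation between model values).
    lut = np.interp(s_cdf, m_cdf, m_vals)
    return np.interp(source.ravel(), s_vals, lut).reshape(source.shape)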


Brightness Inversion

The brightness inversion functions produce images that have the opposite contrast of the original image. Dark detail becomes light, and light detail becomes dark. This can also be used to invert a negative image that has been scanned to produce a positive image.

Brightness inversion has two options: inverse and reverse. Both options convert the input data range (commonly 0 - 255) to 0 - 1.0. A min-max remapping is used to simultaneously stretch the image and handle any input bit format. The output image is in floating point format, so a min-max stretch is used to convert the output image into 8-bit format.

Inverse is useful for emphasizing detail that would otherwise be lost in the darkness of the low DN pixels. This function applies the following algorithm:

DNout = 1.0 if 0.0 < DNin < 0.1

DNout = 0.1 ÷ DNin if 0.1 < DNin < 1

Reverse is a linear function that simply reverses the DN values:

DNout = 1.0 - DNin

Source: Pratt, 1991
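A sketch of both options in Python/NumPy, following the formulas above; the guard against division by zero and the behavior exactly at DNin = 0.1 are assumptions.

import numpy as np

def to_unit_range(dn):
    # Min-max remap of any input bit format to 0.0 - 1.0.
    dn = dn.astype(float)
    return (dn - dn.min()) / (dn.max() - dn.min())

def brightness_inversion(dn, mode="inverse"):
    x = to_unit_range(dn)
    if mode == "inverse":
        # DNout = 1.0 where DNin < 0.1, otherwise 0.1 / DNin
        out = np.where(x < 0.1, 1.0, 0.1 / np.maximum(x, 1e-12))
    else:
        # "reverse": a linear function that simply reverses the DN values
        out = 1.0 - x
    # The output is floating point; min-max stretch back to 8-bit format.
    return np.round(to_unit_range(out) * 255).astype(np.uint8)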

Spatial Enhancement

While radiometric enhancements operate on each pixel individually, spatial enhancement modifies pixel values based on the values of surrounding pixels. Spatial enhancement deals largely with spatial frequency, which is the difference between the highest and lowest values of a contiguous set of pixels. Jensen (Jensen, 1986) defines spatial frequency as “the number of changes in brightness value per unit distance for any particular part of an image.” Consider the examples in Figure 142:

• zero spatial frequency—a flat image, in which every pixel has the same value

• low spatial frequency—an image consisting of a smoothly varying gray scale

• highest spatial frequency—an image consisting of a checkerboard of black and white pixels


Figure 142: Spatial Frequencies

This section contains a brief description of the following:

• Convolution, Crisp, and Adaptive filtering

• Resolution merging

See Radar Imagery Enhancement on page 525 for a discussion of Edge Detection and Texture Analysis. These spatial enhancement techniques can be applied to any type of data.

Convolution Filtering

Convolution filtering is the process of averaging small sets of pixels across an image. Convolution filtering is used to change the spatial frequency characteristics of an image (Jensen, 1996). A convolution kernel is a matrix of numbers that is used to average the value of each pixel with the values of surrounding pixels in a particular way. The numbers in the matrix serve to weight this average toward particular pixels. These numbers are often called coefficients, because they are used as such in the mathematical equations. In ERDAS IMAGINE, you can apply convolution filtering to an image using any of these methods:

• Filtering dialog in the respective multispectral or panchromatic image type option

• Convolution function in Spatial Resolution option

• Spatial Resolution Non-directional Edge enhancement function

• Convolve function in Model Maker

Filtering is a broad term, which refers to the altering of spatial or spectral features for image enhancement (Jensen, 1996). Convolution filtering is one method of spatial filtering. Some texts may use the terms synonymously.


Convolution Example

To understand how one pixel is convolved, imagine that the convolution kernel is overlaid on the data file values of the image (in one band), so that the pixel to be convolved is in the center of the window.

Figure 143: Applying a Convolution Kernel

Kernel:

-1 -1 -1
-1 16 -1
-1 -1 -1

Input Data:

2 8 6 6 6
2 8 6 6 6
2 2 8 6 6
2 2 2 8 6
2 2 2 2 8

Figure 143 shows a 3 × 3 convolution kernel being applied to the pixel in the third column, third row of the sample data (the pixel that corresponds to the center of the kernel). To compute the output value for this pixel, each value in the convolution kernel is multiplied by the image pixel value that corresponds to it. These products are summed, and the total is divided by the sum of the values in the kernel, as shown here:

integer [ ((-1 × 8) + (-1 × 6) + (-1 × 6) + (-1 × 2) + (16 × 8) + (-1 × 6) + (-1 × 2) + (-1 × 2) + (-1 × 8)) ÷ (-1 + -1 + -1 + -1 + 16 + -1 + -1 + -1 + -1) ]
= int [ (128 - 40) ÷ (16 - 8) ]
= int (88 ÷ 8)
= int (11)
= 11

In order to convolve the pixels at the edges of an image, pseudo data must be generated in order to provide values on which the kernel can operate. In the example below, the pseudo data are derived by reflection. This means the top row is duplicated above the first data row and the left column is duplicated left of the first data column. If a second row or column is needed (for a 5 × 5 kernel for example), the second data row or column is copied above or left of the first copy and so on. An alternative to reflection is to create background value (usually zero) pseudo data; this is called Fill. When the pixels in this example image are convolved, output values cannot be calculated for the last row and column; here we have used ?s to show the unknown values. In practice, the last row and column of an image are either reflected or filled just like the first row and column.


Figure 144: Output Values for Convolution Kernel

Input Data (pseudo data in the first row and column, derived by reflection):

2 2 8 6 6 6
2 2 8 6 6 6
2 2 8 6 6 6
2 2 2 8 6 6
2 2 2 2 8 6
2 2 2 2 2 8

Output Data:

0 11  5  6  ?
1 11  5  5  ?
1  0 11  6  ?
2  1  0 11  ?
?  ?  ?  ?  ?

The kernel used in this example is a high frequency kernel, as explained below. It is important to note that the relatively lower values become lower, and the higher values become higher, thus increasing the spatial frequency of the image.

Convolution Formula

The following formula is used to derive an output data file value for the pixel being convolved (in the center):

V = ( Σi=1..q Σj=1..q ( fij × dij ) ) ÷ F

Where:

fij = the coefficient of a convolution kernel at position i,j (in the kernel)
dij = the data value of the pixel that corresponds to fij
q = the dimension of the kernel, assuming a square kernel (if q = 3, the kernel is 3 × 3)
F = either the sum of the coefficients of the kernel, or 1 if the sum of coefficients is 0
V = the output pixel value

In cases where V is less than 0, V is clipped to 0.

Source: Modified from Jensen, 1996; Schowengerdt, 1983

The sum of the coefficients (F) is used as the denominator of the equation above, so that the output values are in relatively the same range as the input values. Since F cannot equal zero (division by zero is not defined), F is set to 1 if the sum is zero.
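The formula, the reflected pseudo data, and the zero-sum special case combine into the short sketch below (an illustration, not the ERDAS IMAGINE implementation; padding all four sides of the image is an assumption, since the worked example pads only the first row and column).

import numpy as np

def convolve(data, kernel):
    # Convolution as defined above: the weighted sum divided by F, with
    # reflected pseudo data at the edges and negative results clipped to 0.
    q = kernel.shape[0]
    pad = q // 2
    F = kernel.sum()
    if F == 0:                      # zero-sum kernel: no division (F = 1)
        F = 1
    padded = np.pad(data, pad, mode="symmetric")   # reflected pseudo data
    out = np.empty(data.shape, dtype=int)
    for r in range(data.shape[0]):
        for c in range(data.shape[1]):
            window = padded[r:r + q, c:c + q]
            out[r, c] = int((kernel * window).sum() / F)   # int truncates
    return np.clip(out, 0, None)    # V less than 0 is clipped to 0

kernel = np.array([[-1, -1, -1],
                   [-1, 16, -1],
                   [-1, -1, -1]])
data = np.array([[2, 8, 6, 6, 6],
                 [2, 8, 6, 6, 6],
                 [2, 2, 8, 6, 6],
                 [2, 2, 2, 8, 6],
                 [2, 2, 2, 2, 8]])
print(convolve(data, kernel))   # the pixel worked in the text comes out as 11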


Zero-Sum Kernels

Zero-sum kernels are kernels in which the sum of all coefficients in the kernel equals zero. When a zero-sum kernel is used, then the sum of the coefficients is not used in the convolution equation, as above. In this case, no division is performed (F = 1), since division by zero is not defined. This generally causes the output values to be:

• zero in areas where all input values are equal (no edges)

• low in areas of low spatial frequency

• extreme in areas of high spatial frequency (high values become much higher, low values become much lower)

Therefore, a zero-sum kernel is an edge detector, which usually smooths out or zeros out areas of low spatial frequency and creates a sharp contrast where spatial frequency is high, which is at the edges between homogeneous (homogeneity is low spatial frequency) groups of pixels. The resulting image often consists of only edges and zeros. Zero-sum kernels can be biased to detect edges in a particular direction. For example, this 3 × 3 kernel is biased to the south (Jensen, 1996):

-1 -1 -1
 1 -2  1
 1  1  1

See the section on Edge Detection on page 532 for more detailed information.

High-Frequency Kernels

A high-frequency kernel, or high-pass kernel, has the effect of increasing spatial frequency. High-frequency kernels serve as edge enhancers, since they bring out the edges between homogeneous groups of pixels. Unlike edge detectors (such as zero-sum kernels), they highlight edges and do not necessarily eliminate other features. An example is:

-1 -1 -1
-1 16 -1
-1 -1 -1


When this kernel is used on a set of pixels in which a relatively low value is surrounded by higher values, like this...

BEFORE            AFTER
204 200 197       204 200 197
201 106 209       201   9 209
198 200 210       198 200 210

...the low value gets lower. Inversely, when the kernel is used on a set of pixels in which a relatively high value is surrounded by lower values...

BEFORE            AFTER
64  60  57        64  60  57
61 125  69        61 187  69
58  60  70        58  60  70

...the high value becomes higher. In either case, spatial frequency is increased by this kernel.

Low-Frequency Kernels

Below is an example of a low-frequency kernel, or low-pass kernel, which decreases spatial frequency:

1 1 1
1 1 1
1 1 1

This kernel simply averages the values of the pixels, causing them to be more homogeneous. The resulting image looks either more smooth or more blurred.

For information on applying filters to thematic layers, see "Geographic Information Systems" on page 173.

Crisp

The Crisp filter sharpens the overall scene luminance without distorting the interband variance content of the image. This is a useful enhancement if the image is blurred due to atmospheric haze, rapid sensor motion, or a broad point spread function of the sensor. The algorithm used for this function is:

1) Calculate principal components of multiband input image.

2) Convolve PC-1 with summary filter.


3) Retransform to RGB space.

The logic of the algorithm is that the first principal component (PC-1) of an image is assumed to contain the overall scene luminance. The other PCs represent intra-scene variance. Thus, you can sharpen only PC-1 and then reverse the principal components calculation to reconstruct the original image. Luminance is sharpened, but variance is retained.

Resolution Merge

The resolution of a specific sensor can refer to radiometric, spatial, spectral, or temporal resolution.

See "Raster Data" on page 1 for a full description of resolution types.

Landsat TM sensors have seven bands with a spatial resolution of 28.5 m. SPOT panchromatic has one broad band with very good spatial resolution—10 m. Combining these two images to yield a seven-band data set with 10 m resolution provides the best characteristics of both sensors.

A number of models have been suggested to achieve this image merge. Welch and Ehlers (Welch and Ehlers, 1987) used forward-reverse RGB to IHS transforms, replacing I (from transformed TM data) with the SPOT panchromatic image. However, this technique is limited to three bands (R, G, B). Chavez (Chavez et al, 1991), among others, uses the forward-reverse principal components transforms with the SPOT image, replacing PC-1.

In the above two techniques, it is assumed that the intensity component (PC-1 or I) is spectrally equivalent to the SPOT panchromatic image, and that all the spectral information is contained in the other PCs or in H and S. Since SPOT data do not cover the full spectral range that TM data do, this assumption does not strictly hold. It is unacceptable to resample the thermal band (TM6) based on the visible (SPOT panchromatic) image.

Another technique (Schowengerdt, 1980) combines a high frequency image derived from the high spatial resolution data (that is, SPOT panchromatic) additively with the high spectral resolution Landsat TM image.

The Resolution Merge function has two different options for resampling low spatial resolution data to a higher spatial resolution while retaining spectral information:

• forward-reverse principal components transform

• multiplicative


Principal Components Merge

Because a major goal of this merge is to retain the spectral information of the six TM bands (1 - 5, 7), this algorithm is mathematically rigorous. It is assumed that:

• PC-1 contains only overall scene luminance; all interband variation is contained in the other 5 PCs, and

• scene luminance in the SWIR bands is identical to visible scene luminance.

With the above assumptions, the forward transform into PCs is made. PC-1 is removed and its numerical range (min to max) is determined. The high spatial resolution image is then remapped so that its histogram shape is kept constant, but it is in the same numerical range as PC-1. It is then substituted for PC-1 and the reverse transform is applied. This remapping is done so that the mathematics of the reverse transform do not distort the thematic information (Welch and Ehlers, 1987).

Multiplicative

The second technique in the Image Interpreter uses a simple multiplicative algorithm:

(DNTM1) (DNSPOT) = DNnew TM1

The algorithm is derived from the four component technique of Crippen (Crippen, 1989a). In this paper, it is argued that of the four possible arithmetic methods to incorporate an intensity image into a chromatic image (addition, subtraction, division, and multiplication), only multiplication is unlikely to distort the color.

However, in his study Crippen first removed the intensity component via band ratios, spectral indices, or PC transform. The algorithm shown above operates on the original image. The result is an increased presence of the intensity component. For many applications, this is desirable. People involved in urban or suburban studies, city planning, and utilities routing often want roads and cultural features (which tend toward high reflection) to be pronounced in the image.

Brovey Transform

In the Brovey Transform method, three bands are used according to the following formula:

[DNB1 ÷ (DNB1 + DNB2 + DNB3)] × [DNhigh res. image] = DNB1_new

[DNB2 ÷ (DNB1 + DNB2 + DNB3)] × [DNhigh res. image] = DNB2_new

[DNB3 ÷ (DNB1 + DNB2 + DNB3)] × [DNhigh res. image] = DNB3_new

Where:

B(n) = band (number)


The Brovey Transform was developed to visually increase contrast in the low and high ends of an image’s histogram (that is, to provide contrast in shadows, water, and high reflectance areas such as urban features). Consequently, the Brovey Transform should not be used if preserving the original scene radiometry is important. However, it is good for producing RGB images with a higher degree of contrast in the low and high ends of the image histogram and for producing visually appealing images.

Since the Brovey Transform is intended to produce RGB images, only three bands at a time should be merged from the input multispectral scene, such as bands 3, 2, 1 from a SPOT or Landsat TM image or 4, 3, 2 from a Landsat TM image. The resulting merged image should then be displayed with bands 1, 2, 3 to RGB.
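A sketch of the Brovey formula in Python/NumPy, assuming the three multispectral bands have already been resampled to the size of the high resolution image; the epsilon guard against division by zero is an addition.

import numpy as np

def brovey_merge(b1, b2, b3, pan):
    # Each band is weighted by its share of the summed multispectral signal,
    # then scaled by the high resolution (for example, panchromatic) image.
    total = b1.astype(float) + b2 + b3
    total = np.where(total == 0, 1e-12, total)   # avoid division by zero
    pan = pan.astype(float)
    return b1 / total * pan, b2 / total * pan, b3 / total * pan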

Adaptive Filter

Contrast enhancement (image stretching) is a widely applicable standard image processing technique. However, even adjustable stretches like the piecewise linear stretch act on the scene globally. There are many circumstances where this is not the optimum approach. For example, coastal studies where much of the water detail is spread through a very low DN range and the land detail is spread through a much higher DN range would be such a circumstance. In these cases, a filter that adapts the stretch to the region of interest (the area within the moving window) would produce a better enhancement. Adaptive filters attempt to achieve this (Fahnestock and Schowengerdt, 1983; Peli and Lim, 1982; Schwartz and Soha, 1977).

ERDAS IMAGINE supplies two adaptive filters with user-adjustable parameters. The Adaptive Filter function in Image Interpreter can be applied to undegraded images, such as SPOT, Landsat, and digitized photographs. The Image Enhancement function in IMAGINE Radar Interpreter is better for degraded or difficult images.

Scenes to be adaptively filtered can be divided into three broad and overlapping categories:

• Undegraded—these scenes have good and uniform illumination overall. Given a choice, these are the scenes one would prefer to obtain from imagery sources such as Space Imaging or SPOT.

• Low luminance—these scenes have an overall or regional less than optimum intensity. An underexposed photograph (scanned) or shadowed areas would be in this category. These scenes need an increase in both contrast and overall scene luminance.


• High luminance—these scenes are characterized by overall excessively high DN values. Examples of such circumstances would be an over-exposed (scanned) photograph or a scene with a light cloud cover or haze. These scenes need a decrease in luminance and an increase in contrast.

No single filter with fixed parameters can address this wide variety of conditions. In addition, multiband images may require different parameters for each band. Without the use of adaptive filters, the different bands would have to be separated into one-band files, enhanced, and then recombined.

For this function, the image is separated into high and low frequency component images. The low frequency image is considered to be overall scene luminance. These two component parts are then recombined in various relative amounts using multipliers derived from LUTs. These LUTs are driven by the overall scene luminance:

DNout = K(DNHi) + DNLL

Where:

K = user-selected contrast multiplier
Hi = high luminance (derives from the LUT)
LL = local luminance (derives from the LUT)

Figure 145: Local Luminance Intercept

Figure 145 shows the local luminance intercept, which is the output luminance value that an input luminance value of 0 would be assigned.

Wavelet Resolution Merge

The ERDAS IMAGINE Wavelet Resolution Merge allows multispectral images of relatively low spatial resolution to be sharpened using a co-registered panchromatic image of relatively higher resolution. A primary intended target dataset is Landsat 7 ETM+. Increasing the spatial resolution of multispectral imagery in this fashion is, in fact, the rationale behind the Landsat 7 sensor design.


The ERDAS IMAGINE algorithm is a modification of the work of King and Wang (King et al, 2001) with extensive input from Lemeshewsky (Lemeshewsky, 1999, Lemeshewsky, 2002a, Lemeshewsky, 2002b). Aside from traditional Pan-Multispectral image sharpening, this algorithm can be used to merge any two images, for example, radar with SPOT Pan.

Fusing information from several sensors into one composite image can take place on four levels: signal, pixel, feature, and symbolic. This algorithm works at the pixel level. The results of pixel-level fusion are primarily for presentation to a human observer/analyst (Rockinger and Fechner, 1998). However, in the case of pan/multispectral image sharpening, it must be considered that computer-based analysis (for example, supervised classification) could be a logical follow-on. Thus, it is vital that the algorithm preserve the spectral fidelity of the input dataset.

Wavelet Theory

Wavelet-based image reduction is similar to Fourier transform analysis. In the Fourier transform, long continuous (sine and cosine) waves are used as the basis. The wavelet transform uses short, discrete “wavelets” instead of a long wave, so the new transform is much more local (Strang et al, 1997). In image processing terms, the wavelet can be parameterized as a finite size moving window.

A key element of using wavelets is selection of the base waveform to be used, the “mother wavelet” or “basis”. The basis is the basic waveform used to represent the image. The input signal (image) is broken down into successively smaller multiples of this basis.

Wavelets are derived waveforms that have many mathematically useful characteristics that make them preferable to simple sine or cosine functions. For example, wavelets are discrete; that is, they have a finite length as opposed to sine waves, which are continuous and infinite in length. Once the basis waveform is mathematically defined, a family of multiples can be created with incrementally increasing frequency. For example, related wavelets can be created of twice the frequency, three times the frequency, four times the frequency, and so forth. Once the waveform family is defined, the image can be decomposed by applying coefficients to each of the waveforms. Given a sufficient number of waveforms in the family, all the detail in the image can be defined by coefficient multiples of the ever-finer waveforms.

In practice, the coefficients of the discrete high-pass filter are of more interest than the wavelets themselves; the wavelets are rarely even calculated (Shensa, 1992). In image processing, we do not want to get deeply involved in mathematical waveform decomposition; we want relatively rapid processing kernels (moving windows). Thus, we use the above theory to derive moving window, high-pass kernels which approximate the waveform decomposition.


For image processing, orthogonal and biorthogonal transforms are of interest. With orthogonal transforms, the new axes are mutually perpendicular and the output signal has the same length as the input signal. The matrices are unitary and the transform is lossless. The same filters are used for analysis and reconstruction.

In general, biorthogonal (and symmetrical) wavelets are more appropriate than orthogonal wavelets for image processing applications (Strang et al, 1997, p. 362-363). Biorthogonal wavelets are ideal for image processing applications because of their symmetry and perfect reconstruction properties. Each biorthogonal wavelet has a reconstruction order and a decomposition order associated with it. For example, biorthogonal 3.3 denotes a biorthogonal wavelet with reconstruction order 3 and decomposition order 3. For biorthogonal transforms, the lengths of and angles between the new axes may change, and the new axes are not necessarily perpendicular. The analysis and reconstruction filters are not required to be the same. They are, however, mathematically constrained so that no information is lost, perfect reconstruction is possible, and the matrices are invertible.

The signal processing properties of the Discrete Wavelet Transform (DWT) are strongly determined by the choice of high-pass (bandpass) filter (Shensa, 1992). Although biorthogonal wavelets are phase linear, they are shift variant due to the decimation process, which saves only even-numbered averages and differences. This means that the resultant subimage changes if the starting point is shifted (translated) by one pixel. For the commonly used, fast (Mallat, 1989) discrete wavelet decomposition algorithm, a shift of the input image can produce large changes in the values of the wavelet decomposition coefficients. One way to overcome this is to use an average of each average and difference pair.

Once selected, the wavelets are applied to the input image recursively via a pyramid algorithm or filter bank. This is commonly implemented as a cascading series of highpass and lowpass filters, based on the mother wavelet, applied sequentially to the low-pass image of the previous recursion. After filtering at any level, the low-pass image (commonly termed the “approximation” image) is passed to the next finer filtering in the filter bank. The high-pass images (termed “horizontal”, “vertical”, and “diagonal”) are retained for later image reconstruction. In practice, three or four recursions are sufficient.

2-D Discrete Wavelet Transform

A 2-D Discrete Wavelet Transform of an image yields four components:

• approximation coefficients, Wφ

• horizontal coefficients, WψH – variations along the columns

• vertical coefficients, WψV – variations along the rows

• diagonal coefficients, WψD – variations along the diagonals (Gonzalez and Woods, 2001)

Figure 146: Schematic Diagram of the Discrete Wavelet Transform - DWT

Symbols hφ and hψ are, respectively, the low-pass and high-pass wavelet filters used for decomposition. The rows of the image are convolved with the low-pass and high-pass filters and the result is downsampled along the columns. This yields two subimages whose horizontal resolutions are reduced by a factor of 2. The high-pass or detailed coefficients characterize the image’s high frequency information with vertical orientation, while the low-pass component contains its low frequency, vertical information. Both subimages are again filtered columnwise with the same low-pass and high-pass filters and downsampled along rows.

Thus, for each input image, we have four subimages, each reduced by a factor of 4 compared to the original image: Wφ, WψH, WψV, and WψD.

2-D Inverse Discrete Wavelet Transform

The reduced components of the input images are passed as input to the low-pass and high-pass reconstruction filters (different from the ones used for decomposition), as shown in Figure 147.


Figure 147: Inverse Discrete Wavelet Transform - DWT-1

The sequence of steps is the opposite of that in the DWT: the subimages are upsampled along rows (since the last step in the DWT was downsampling along rows) and convolved with the low-pass and high-pass filters columnwise (in the DWT we filtered along the columns last). These intermediate outputs are concatenated, upsampled along columns, then filtered rowwise, and finally concatenated to yield the original image.

Algorithm Theory

The basic theory of the decomposition is that an image can be separated into high-frequency and low-frequency components. For example, a low-pass filter can be used to create a low-frequency image. Subtracting this low-frequency image from the original image would create the corresponding high-frequency image. These two images contain all of the information in the original image: if they were added together, the result would be the original image. The same could be done by high-pass filtering an image to derive the corresponding low-frequency image. Again, adding the two together would yield the original image.

Any image can be broken into various high- and low-frequency components using various high- and low-pass filters. The wavelet family can be thought of as a high-pass filter, so wavelet-based high- and low-frequency images can be created from any input image. By definition, the low-frequency image is of lower resolution and the high-frequency image contains the detail of the image.

This process can be repeated recursively. The created low-frequency image could be processed again with the kernels to create new images with even lower resolution. Thus, starting with a 5-meter image, a 10-meter low-pass image and the corresponding high-pass image could be created. A second iteration would create a 20-meter low-pass image and the corresponding high-pass image. A third recursion would create a 40-meter low-pass image and the corresponding high-frequency image, and so forth.


Consider two images taken on the same day of the same area: one a 5-meter panchromatic, the other 40-meter multispectral. The 5-meter has better spatial resolution, but the 40-meter has better spectral resolution. It would be desirable to take the high-pass information from the 5-meter image and combine it with the 40-meter multispectral image, yielding a 5-meter multispectral image.

Using wavelets, one can decompose the 5-meter image through several iterations until a 40-meter low-pass image is generated, plus all the corresponding high-pass images derived during the recursive decomposition. This 40-meter low-pass image, derived from the original 5-meter pan image, can be replaced with the 40-meter multispectral image and the whole wavelet decomposition process reversed, using the high-pass images derived during the decomposition, to reconstruct a 5-meter resolution multispectral image. The approximation component of the high spectral resolution image and the horizontal, vertical, and diagonal components of the high spatial resolution image are fused into a new output image.

If all of the above calculations are done in a mathematically rigorous way (histomatch and resample before substitution, and so forth), one can derive a multispectral image that has the high-pass (high-frequency) details from the 5-meter image.

In the above scenario, it should be noted that the high-resolution image (panchromatic, perhaps) is a single band, and so the substitution image, from the multispectral image, must also be a single band. There are tools available to compress the multispectral image into a single band for substitution using the IHS transform or PC transform. Alternately, single bands can be processed sequentially.
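The decompose-substitute-reconstruct sequence can be sketched with the open-source PyWavelets package (pywt); this illustrates the idea only, it is not the ERDAS IMAGINE algorithm, and it assumes the multispectral band has already been resampled and histogram matched to the shape and range of the final approximation image.

import numpy as np
import pywt   # PyWavelets

def wavelet_merge_band(pan, ms_band, levels=3, wavelet="bior3.3"):
    # Recursive DWT of the pan image: one approximation image plus
    # (horizontal, vertical, diagonal) detail images for every level.
    coeffs = pywt.wavedec2(pan.astype(float), wavelet, level=levels)
    approx = coeffs[0]
    assert ms_band.shape == approx.shape, "resample the MS band first"
    # Substitute the multispectral band for the pan approximation image,
    # keep the pan-derived high-pass detail, and invert the transform.
    coeffs[0] = ms_band.astype(float)
    return pywt.waverec2(coeffs, wavelet)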

Figure 148: Wavelet Resolution Merge

(The high spatial resolution image is decomposed by the DWT into an approximation image plus horizontal, vertical, and diagonal detail images; the high spectral resolution image is histogram matched and resampled, substituted for the approximation image, and the inverse DWT produces the fused image.)


Prerequisites and Limitations

Precise Coregistration

A first prerequisite is that the two images be precisely co-registered. For some sensors (for example, Landsat 7 ETM+) this co-registration is inherent in the dataset. If this is not the case, a greatly over-defined 2nd order polynomial transform should be used to coregister one image to the other. By over-defining the transform (that is, by having far more than the minimum number of tie points), it is possible to reduce the random RMS error to the subpixel level. This is easily accomplished by using the Point Prediction option in the GCP Tool. In practice, well-distributed tie points are collected until the predicted point consistently falls exactly where it should. At that time, the transform must be correct. This may require 30-60 tie points for a typical Landsat TM—SPOT Pan co-registration.

When doing the coregistration, it is generally preferable to register the lower resolution image to the higher resolution image; that is, the high resolution image is used as the Reference Image. This will allow the greatest accuracy of registration. However, if the lowest resolution image has georeferencing that is to be retained, it may be desirable to use it as the Reference Image. A larger number of tie points and more attention to precise work would then be required to attain the same registration accuracy. Evaluation of the X- and Y-Residual and the RMS Error columns in the ERDAS IMAGINE GCP Tool will indicate the accuracy of registration.

It is preferable to store the high and low resolution images as separate image files rather than Layerstacking them into a single image file. In ERDAS IMAGINE, stacked image layers are resampled to a common pixel size. Since the Wavelet Resolution Merge algorithm does the pixel resampling at an optimal stage in the calculation, this avoids multiple resamplings.

After creating the coregistered images, they should be codisplayed in an ERDAS IMAGINE Viewer. Then the Fade, Flicker, and Swipe Tools can be used to visually evaluate the precision of the coregistration.

Identical Spectral Range

Secondly, an underlying assumption of resolution merge algorithms is that the two images are spectrally identical. Thus, while a SPOT Panchromatic image can be used to sharpen TM bands 1-4, it would be questionable to use it for TM bands 5 and 7, and totally inappropriate for TM band 6 (thermal emission). If the datasets are not spectrally identical, the spectral fidelity of the MS dataset will be lost.

It has been noted (Lemeshewsky, 2002b) that there can be spectrally-induced contrast reversals between visible and NIR bands at, for example, soil-vegetation boundaries. This can produce degraded edge definition or artifacts.


Temporal Considerations

A trivial corollary is that the two images must have no temporally-induced differences. If a crop has been harvested, trees have dropped their foliage, lakes have grown or shrunk, and so forth, then merging of the two images in that area is inappropriate. If the areas of change are small, the merge can proceed and those areas removed from evaluation. If, however, the areas of change are large, the histogram matching step may introduce data distortions.

Theoretical Limitations

As described in the discussion of the discrete wavelet transform, the algorithm downsamples the high spatial resolution input image by a factor of two with each iteration. This produces approximation (a) images with pixel sizes reduced by a factor of two with each iteration. The low (spatial) resolution image will substitute exactly for the “a” image only if the input images have relative pixel sizes differing by a multiple of 2. Any other pixel size ratio will require resampling of the low (spatial) resolution image prior to substitution. Certain ratios can result in a degradation of the substitution image that may not be fully overcome by the subsequent wavelet sharpening. This will result in a less than optimal enhancement. For the most common scenarios, Landsat ETM+, IKONOS, and QuickBird, this is not a problem.

Although the mathematics of the algorithm are precise for any pixel size ratio, a resolution increase of greater than two or three becomes theoretically questionable. For example, all images are degraded due to atmospheric refraction and scattering of the returning signal; this is termed “point spread”. Thus, both images in a resolution merge operation have, to some unknown extent, already been “smeared”. It is not reasonable to assume that each multispectral pixel can be precisely devolved into nine or more subpixels.

Spectral Transform

Three merge scenarios are possible. The simplest is when the input low (spatial) resolution image is only one band; a single band of a multispectral image, for example. In this case, the only option is to select which band to use. If the low resolution image to be processed is a multispectral image, two methods are offered for creating the grayscale representation of the multispectral image intensity: IHS and PC.

The IHS method accepts only 3 input bands. It has been suggested that this technique produces an output image that is the best for visual interpretation. Thus, this technique would be appropriate when producing a final output product for map production. Since a visual product is likely to be only an R, G, B image, the 3-band limitation on this method is not a distinct limitation. Clearly, if one wished to sharpen more data layers, the bands could be done as separate groups of 3 and then the whole dataset layerstacked back together.


Lemeshewsky (Lemeshewsky, 2002b) discusses some theoretical limitations on IHS sharpening that suggest that sharpening of the bands individually (as discussed above) may be preferable. Yocky (Yocky, 1995) demonstrates that the IHS transform can distort colors, particularly red, and discusses theoretical explanations.

The PC method will accept any number of input data layers. It has been suggested (Lemeshewsky, 2002a) that this technique produces an output image that better preserves the spectral integrity of the input dataset. Thus, this method would be most appropriate if further processing of the data is intended; for example, if the next step was a classification operation. Note, however, that Zhang (Zhang, 1999) has found equivocal results with the PC versus IHS approaches.

The wavelet, IHS, and PC calculations produce single precision floating point output. Consequently, the resultant image must undergo a data compression to get it back to 8-bit format.

Spectral Enhancement

The enhancement techniques that follow require more than one band of data. They can be used to:

• compress bands of data that are similar

• extract new bands of data that are more interpretable to the eye

• apply mathematical transforms and algorithms

• display a wider variety of information in the three available color guns (R, G, B)

In this documentation, some examples are illustrated with two-dimensional graphs. However, you are not limited to two-dimensional (two-band) data. ERDAS IMAGINE programs allow an unlimited number of bands to be used. Keep in mind that processing such data sets can require a large amount of computer swap space. In practice, the principles outlined below apply to any number of bands.

Some of these enhancements can be used to prepare data for classification. However, this is a risky practice unless you are very familiar with your data and the changes that you are making to it. Anytime you alter values, you risk losing some information.


Principal Components Analysis

Principal components analysis (PCA) is often used as a method of data compression. It allows redundant data to be compacted into fewer bands—that is, the dimensionality of the data is reduced. The bands of PCA data are noncorrelated and independent, and are often more interpretable than the source data (Jensen, 1996; Faust, 1989). The process is easily explained graphically with an example of data in two bands. Below is an example of a two-band scatterplot, which shows the relationships of data file values in two bands. The values of one band are plotted against those of the other. If both bands have normal distributions, an ellipse shape results.

Scatterplots and normal distributions are discussed in "Math Topics" on page 697.

Figure 149: Two Band Scatterplot

Ellipse Diagram

In an n-dimensional histogram, an ellipse (2 dimensions), ellipsoid (3 dimensions), or hyperellipsoid (more than 3 dimensions) is formed if the distributions of each input band are normal or near normal. (The term ellipse is used for general purposes here.) To perform PCA, the axes of the spectral space are rotated, changing the coordinates of each pixel in spectral space, as well as the data file values. The new axes are parallel to the axes of the ellipse.

First Principal Component

The length and direction of the widest transect of the ellipse are calculated using matrix algebra in a process explained below. The transect, which corresponds to the major (longest) axis of the ellipse, is called the first principal component of the data. The direction of the first principal component is the first eigenvector, and its length is the first eigenvalue (Taylor, 1977).


A new axis of the spectral space is defined by this first principal component. The points in the scatterplot are now given new coordinates, which correspond to this new axis. Since, in spectral space, the coordinates of the points are the data file values, new data file values are derived from this process. These values are stored in the first principal component band of a new data file.

Figure 150: First Principal Component

The first principal component shows the direction and length of the widest transect of the ellipse. Therefore, as an axis in spectral space, it measures the highest variation within the data. In Figure 151 it is easy to see that the first eigenvalue is always greater than the ranges of the input bands, just as the hypotenuse of a right triangle must always be longer than the legs.

Figure 151: Range of First Principal Component


Successive Principal Components

The second principal component is the widest transect of the ellipse that is orthogonal (perpendicular) to the first principal component. As such, the second principal component describes the largest amount of variance in the data that is not already described by the first principal component (Taylor, 1977). In a two-dimensional analysis, the second principal component corresponds to the minor axis of the ellipse.

Figure 152: Second Principal Component

In n dimensions, there are n principal components. Each successive principal component:

• is the widest transect of the ellipse that is orthogonal to the previous components in the n-dimensional space of the scatterplot (Faust, 1989), and

• accounts for a decreasing amount of the variation in the data which is not already accounted for by previous principal components (Taylor, 1977).

Although there are n output bands in a PCA, the first few bands account for a high proportion of the variance in the data—in some cases, almost 100%. Therefore, PCA is useful for compressing data into fewer bands. In other applications, useful information can be gathered from the principal component bands with the least variance. These bands can show subtle details in the image that were obscured by higher contrast in the original image. These bands may also show regular noise in the data (for example, the striping in old MSS data) (Faust, 1989).


Computing Principal Components

To compute a principal components transformation, a linear transformation is performed on the data. This means that the coordinates of each pixel in spectral space (the original data file values) are recomputed using a linear equation. The result of the transformation is that the axes in n-dimensional spectral space are shifted and rotated to be relative to the axes of the ellipse. To perform the linear transformation, the eigenvectors and eigenvalues of the n principal components must be mathematically derived from the covariance matrix, as shown in the following equation:

E Cov ET = V

Where:

Cov = the covariance matrix
E = the matrix of eigenvectors
T = the transposition function
V = a diagonal matrix of eigenvalues, in which all nondiagonal elements are zeros:

V = | v1   0   0  ...  0  |
    | 0    v2  0  ...  0  |
    | ...                 |
    | 0    0   0  ...  vn |

V is computed so that its nonzero elements are ordered from greatest to least, so that v1 > v2 > v3 ... > vn.

Source: Faust, 1989

A full explanation of this computation can be found in Gonzalez and Wintz, 1977.

The matrix V is the covariance matrix of the output principal component file. The zeros represent the covariance between bands (there is none), and the eigenvalues are the variance values for each band. Because the eigenvalues are ordered from v1 to vn, the first eigenvalue is the largest and represents the most variance in the data. Each column of the resulting eigenvector matrix, E, describes a unit-length vector in spectral space, which shows the direction of the principal component (the ellipse axis). The numbers are used as coefficients in the following equation, to transform the original data file values into the principal component values.


Pe = Σk=1..n ( dk × Eke )

Where:

e = the number of the principal component (first, second, and so on)
Pe = the output principal component value for principal component number e
k = a particular input band
n = the total number of bands
dk = an input data file value in band k
Eke = the eigenvector matrix element at row k, column e

Source: Modified from Gonzalez and Wintz, 1977
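A sketch of the computation in Python/NumPy: the eigenvectors and eigenvalues of the covariance matrix are derived, ordered so that v1 > v2 > ... > vn, and the transform Pe = Σ dk Eke is applied to every pixel. Following the formula above, the data are not mean-centered; the function name is hypothetical.

import numpy as np

def principal_components(image):
    # image: array of shape (rows, cols, n_bands); returns the PC bands,
    # the ordered eigenvalues, and the eigenvector matrix E.
    rows, cols, n = image.shape
    d = image.reshape(-1, n).astype(float)    # pixels as spectral vectors
    cov = np.cov(d, rowvar=False)             # n x n covariance matrix
    eigenvalues, E = np.linalg.eigh(cov)      # eigh: cov is symmetric
    order = np.argsort(eigenvalues)[::-1]     # v1 > v2 > ... > vn
    eigenvalues, E = eigenvalues[order], E[:, order]
    pcs = d @ E                               # Pe = sum over k of dk * Eke
    return pcs.reshape(rows, cols, n), eigenvalues, E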

Decorrelation Stretch

The purpose of a contrast stretch is to:

• alter the distribution of the image DN values within the 0 - 255 range of the display device, and

• utilize the full range of values in a linear fashion.

The decorrelation stretch stretches the principal components of an image, rather than the original image. A principal components transform converts a multiband image into a set of mutually orthogonal images portraying inter-band variance. Depending on the DN ranges and the variance of the individual input bands, these new images (PCs) occupy only a portion of the possible 0 - 255 data range. Each PC is separately stretched to fully utilize the data range. The new stretched PC composite image is then retransformed to the original data areas. Either the original PCs or the stretched PCs may be saved as a permanent image file for viewing after the stretch.

NOTE: Storage of PCs as floating point, single precision is probably appropriate in this case.
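Continuing the sketch from Computing Principal Components (and reusing the hypothetical principal_components helper defined there), a minimal decorrelation stretch: each PC is separately min-max stretched to the full data range, then the reverse transform is applied.

import numpy as np

def decorrelation_stretch(image):
    pcs, _, E = principal_components(image)        # forward PC transform
    flat = pcs.reshape(-1, pcs.shape[-1])
    lo, hi = flat.min(axis=0), flat.max(axis=0)
    stretched = (flat - lo) / (hi - lo) * 255.0    # each PC to 0 - 255
    out = stretched @ E.T      # reverse transform: E is orthogonal
    return out.reshape(image.shape)   # may need a final remap for display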

Tasseled Cap

The different bands in a multispectral image can be visualized as defining an N-dimensional space where N is the number of bands. Each pixel, positioned according to its DN value in each band, lies within the N-dimensional space. This pixel distribution is determined by the absorption/reflection spectra of the imaged material. This clustering of the pixels is termed the data structure (Crist and Kauth, 1986).


See "Raster Data" on page 1 for more information on absorption/reflection spectra. See the discussion on Principal Components Analysis on page 492.

The data structure can be considered a multidimensional hyperellipsoid. The principal axes of this data structure are not necessarily aligned with the axes of the data space (defined as the bands of the input image); they are more directly related to the absorption spectra. For viewing purposes, it is advantageous to rotate the N-dimensional space such that one or two of the data structure axes are aligned with the Viewer X and Y axes. In particular, you could view the axes that are largest for the data structure produced by the absorption peaks of special interest for the application.

For example, a geologist and a botanist are interested in different absorption features. They would want to view different data structures and, therefore, different data structure axes. Both would benefit from viewing the data in a way that would maximize visibility of the data structure of interest.

The Tasseled Cap transformation offers a way to optimize data viewing for vegetation studies. Research has produced three data structure axes that define the vegetation information content (Crist et al, 1986, Crist and Kauth, 1986):

• Brightness—a weighted sum of all bands, defined in the direction of the principal variation in soil reflectance.

• Greenness—orthogonal to brightness, a contrast between the near-infrared and visible bands. Strongly related to the amount of green vegetation in the scene.

• Wetness—relates to canopy and soil moisture (Lillesand and Kiefer, 1987).

A simple calculation (linear combination) then rotates the data space to present any of these axes to you. These rotations are sensor-dependent, but once defined for a particular sensor (say Landsat 4 TM), the same rotation works for any scene taken by that sensor. The increased dimensionality (number of bands) of TM vs. MSS allowed Crist et al (Crist et al, 1986) to define three additional axes, termed Haze, Fifth, and Sixth. Lavreau (Lavreau, 1991) has used this haze parameter to devise an algorithm to dehaze Landsat imagery. The Tasseled Cap algorithm implemented in the Image Interpreter provides the correct coefficients for MSS, TM4, and TM5 imagery. For TM4, the calculations are:

Page 528: ERDAS Field Guide

498 Enhancement

Brightness = .3037 (TM1) + .2793 (TM2) + .4743 (TM3) + .5585 (TM4) + .5082 (TM5) + .1863 (TM7)

Greenness = -.2848 (TM1) - .2435 (TM2) - .5436 (TM3) + .7243 (TM4) + .0840 (TM5) - .1800 (TM7)

Wetness = .1509 (TM1) + .1973 (TM2) + .3279 (TM3) + .3406 (TM4) - .7112 (TM5) - .4572 (TM7)

Haze = .8832 (TM1) - .0819 (TM2) - .4580 (TM3) - .0032 (TM4) - .0563 (TM5) + .0130 (TM7)

Source: Modified from Crist et al, 1986, Jensen, 1996
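Because the transformation is a fixed linear combination, it reduces to one matrix multiplication per pixel. The sketch below uses the TM4 coefficients listed above, with the input bands ordered TM1, TM2, TM3, TM4, TM5, TM7; the function name is hypothetical.

import numpy as np

# Rows: Brightness, Greenness, Wetness, Haze. Columns: TM1, 2, 3, 4, 5, 7.
TC_TM4 = np.array([
    [ 0.3037,  0.2793,  0.4743,  0.5585,  0.5082,  0.1863],
    [-0.2848, -0.2435, -0.5436,  0.7243,  0.0840, -0.1800],
    [ 0.1509,  0.1973,  0.3279,  0.3406, -0.7112, -0.4572],
    [ 0.8832, -0.0819, -0.4580, -0.0032, -0.0563,  0.0130],
])

def tasseled_cap_tm4(image):
    # image: (rows, cols, 6) with bands TM1-5 and TM7.
    # Returns the four rotated axes as (rows, cols, 4).
    return image.astype(float) @ TC_TM4.T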

RGB to IHS

The color monitors used for image display on image processing systems have three color guns. These correspond to red, green, and blue (R, G, B), the additive primary colors. When displaying three bands of a multiband data set, the viewed image is said to be in R, G, B space.

However, it is possible to define an alternate color space that uses intensity (I), hue (H), and saturation (S) as the three positioned parameters (in lieu of R, G, and B). This system is advantageous in that it presents colors more nearly as perceived by the human eye.

• Intensity is the overall brightness of the scene (like PC-1) and varies from 0 (black) to 1 (white).

• Saturation represents the purity of color and also varies linearly from 0 to 1.

• Hue is representative of the color or dominant wavelength of the pixel. It varies from 0 at the red midpoint through green and blue back to the red midpoint at 360. It is a circular dimension (see Figure 153). In Figure 153, 0 to 255 is the selected range; it could be defined as any data range. However, hue must vary from 0 to 360 to define the entire sphere (Buchanan, 1979).


Figure 153: Intensity, Hue, and Saturation Color Coordinate System

Source: Buchanan, 1979

To use the RGB to IHS transform, use the RGB to IHS function from Image Interpreter.

The algorithm used in the Image Interpreter RGB to IHS transform is (Conrac Corporation, 1980):

R = (M - r) ÷ (M - m)
G = (M - g) ÷ (M - m)
B = (M - b) ÷ (M - m)

Where:

R, G, B are each in the range of 0 to 1.0.
r, g, b are each in the range of 0 to 1.0.
M = largest value, r, g, or b
m = least value, r, g, or b

NOTE: At least one of the R, G, or B values is 0, corresponding to the color with the largest value, and at least one of the R, G, or B values is 1, corresponding to the color with the least value.

The equation for calculating intensity in the range of 0 to 1.0 is:

I = (M + m) ÷ 2


The equations for calculating saturation in the range of 0 to 1.0 are:

If M = m, S = 0
If I ≤ 0.5, S = (M - m) ÷ (M + m)
If I > 0.5, S = (M - m) ÷ (2 - M - m)

The equations for calculating hue in the range of 0 to 360 are:

If M = m, H = 0
If R = M, H = 60 (2 + b - g)
If G = M, H = 60 (4 + r - b)
If B = M, H = 60 (6 + g - r)

Where:
R, G, B are each in the range of 0 to 1.0.
M = largest value, R, G, or B
m = least value, R, G, or B
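A minimal Python sketch of the forward transform, read directly from the equations above (the function name and the handling of gray pixels, where M = m, are illustrative assumptions, not the Image Interpreter code):

def rgb_to_ihs(R, G, B):
    """R, G, B in 0-1; returns (I, H, S) per the equations above, H in 0-360."""
    M, m = max(R, G, B), min(R, G, B)
    I = (M + m) / 2.0
    if M == m:                         # gray pixel: no dominant wavelength
        return I, 0.0, 0.0
    S = (M - m) / (M + m) if I <= 0.5 else (M - m) / (2.0 - M - m)
    # transformed values: the largest input maps to 0, the least to 1
    r, g, b = ((M - x) / (M - m) for x in (R, G, B))
    if R == M:
        H = 60.0 * (2.0 + b - g)
    elif G == M:
        H = 60.0 * (4.0 + r - b)
    else:
        H = 60.0 * (6.0 + g - r)
    return I, H, S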

IHS to RGB

The family of IHS to RGB transforms is intended as a complement to the standard RGB to IHS transform. The values for hue (H), a circular dimension, are 0 to 360, and intensity (I) and saturation (S) vary from 0 to 1. However, depending on the dynamic range of the DN values of the input image, it is possible that I or S or both occupy only a part of the 0 to 1 range. In this model, a min-max stretch is applied to either I, S, or both, so that they more fully utilize the 0 to 1 value range. After stretching, the full IHS image is retransformed back to the original RGB space. Because hue, which largely defines what we perceive as color, is not modified, the resultant image looks very much like the input image.

It is not essential that the input parameters (IHS) to this transform be derived from an RGB to IHS transform. You could define I and/or S as other parameters, set hue at 0 to 360, and then transform to RGB space. This is a method of color coding other data sets. In another approach (Daily, 1983), H and I are replaced by low- and high-frequency radar imagery. You can also replace I with radar intensity before the IHS to RGB transform (Croft (Holcomb), 1993). Chavez evaluates the use of the IHS to RGB transform to resolution merge Landsat TM with SPOT panchromatic imagery (Chavez et al, 1991).

NOTE: Use the Spatial Modeler for this analysis.


See the previous section on RGB to IHS transform for more information.

The algorithm used by ERDAS IMAGINE for the IHS to RGB function is (Conrac Corporation, 1980):

Given: H in the range of 0 to 360; I and S in the range of 0 to 1.0

If I ≤ 0.5, M = I (1 + S)
If I > 0.5, M = I + S - I (S)
m = 2I - M

The equations for calculating R in the range of 0 to 1.0 are:

If H < 60, R = m + (M - m)(H ÷ 60)
If 60 ≤ H < 180, R = M
If 180 ≤ H < 240, R = m + (M - m)((240 - H) ÷ 60)
If 240 ≤ H ≤ 360, R = m

The equations for calculating G in the range of 0 to 1.0 are:

If H < 120, G = m
If 120 ≤ H < 180, G = m + (M - m)((H - 120) ÷ 60)
If 180 ≤ H < 300, G = M
If 300 ≤ H ≤ 360, G = m + (M - m)((360 - H) ÷ 60)

The equations for calculating B in the range of 0 to 1.0 are:

If H < 60, B = M
If 60 ≤ H < 120, B = m + (M - m)((120 - H) ÷ 60)
If 120 ≤ H < 240, B = m
If 240 ≤ H < 300, B = m + (M - m)((H - 240) ÷ 60)
If 300 ≤ H ≤ 360, B = M
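A matching sketch of the inverse transform, again following the printed equations (scalar and illustrative; not the ERDAS IMAGINE implementation):

def ihs_to_rgb(I, H, S):
    """I, S in 0-1, H in 0-360; returns (R, G, B) per the equations above."""
    M = I * (1.0 + S) if I <= 0.5 else I + S - I * S
    m = 2.0 * I - M
    if H < 60:      R = m + (M - m) * (H / 60.0)
    elif H < 180:   R = M
    elif H < 240:   R = m + (M - m) * ((240.0 - H) / 60.0)
    else:           R = m
    if H < 120:     G = m
    elif H < 180:   G = m + (M - m) * ((H - 120.0) / 60.0)
    elif H < 300:   G = M
    else:           G = m + (M - m) * ((360.0 - H) / 60.0)
    if H < 60:      B = M
    elif H < 120:   B = m + (M - m) * ((120.0 - H) / 60.0)
    elif H < 240:   B = m
    elif H < 300:   B = m + (M - m) * ((H - 240.0) / 60.0)
    else:           B = M
    return R, G, B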

Indices

Indices are used to create output images by mathematically combining the DN values of different bands. These may be simplistic:

(Band X - Band Y)

or more complex:

(Band X - Band Y) ÷ (Band X + Band Y)

In many instances, these indices are ratios of band DN values:

Band X ÷ Band Y


These ratio images are derived from the absorption/reflection spectra of the material of interest. The absorption is based on the molecular bonds in the (surface) material. Thus, the ratio often gives information on the chemical composition of the target.

See "Raster Data" on page 1 for more information on the absorption/reflection spectra.

Applications

• Indices are used extensively in mineral exploration and vegetation analysis to bring out small differences between various rock types and vegetation classes. In many cases, judiciously chosen indices can highlight and enhance differences that cannot be observed in the display of the original color bands.

• Indices can also be used to minimize shadow effects in satellite and aircraft multispectral images. Black and white images of individual indices or a color combination of three ratios may be generated.

• Certain combinations of TM ratios are routinely used by geologists for interpretation of Landsat imagery for mineral type. For example: Red 5/7, Green 5/4, Blue 3/1.

Integer Scaling Considerations

The output images obtained by applying indices are generally created in floating point to preserve all numerical precision. If there are two bands, A and B, then:

ratio = A/B

If A >> B (much greater than), then normal integer scaling would be sufficient. If A > B and A is never much greater than B, scaling might be a problem in that the data range might only go from 1 to 2 or from 1 to 3. In this case, integer scaling would give very little contrast. For cases in which A < B or A << B, integer scaling would always truncate to 0. All fractional data would be lost. A multiplication constant factor would also not be very effective in seeing the data contrast between 0 and 1, which may very well be a substantial part of the data image. One approach to handling the entire ratio range is to actually process the function:

ratio = atan(A/B)

This would give a better representation for A/B < 1 as well as for A/B > 1.
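A minimal NumPy sketch of this arctangent approach (the epsilon guard and the 8-bit output range are assumptions added for illustration):

import numpy as np

def scaled_ratio(band_a, band_b, eps=1e-6):
    """Map the full ratio range to 0-255 via atan, as suggested above.
    Assumes non-negative DN values, so atan(A/B) lies in 0 .. pi/2."""
    ratio = np.arctan(band_a / (band_b + eps))
    return (255.0 * ratio / (np.pi / 2.0)).astype(np.uint8)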


Index Examples

The following are examples of indices that have been preprogrammed in the Image Interpreter in ERDAS IMAGINE:

• IR/R (infrared/red)

• SQRT (IR/R)

• Vegetation Index = IR-R

• Normalized Difference Vegetation Index (NDVI) = (IR - R) ÷ (IR + R)

• Transformed NDVI (TNDVI) = sqrt[ (IR - R) ÷ (IR + R) + 0.5 ]

• Iron Oxide = TM 3/1

• Clay Minerals = TM 5/7

• Ferrous Minerals = TM 5/4

• Mineral Composite = TM 5/7, 5/4, 3/1

• Hydrothermal Composite = TM 5/7, 3/1, 4/3

Source: Modified from Sabins, 1987; Jensen, 1996; Tucker, 1979

The following table shows the infrared (IR) and red (R) band for some common sensors (Tucker, 1979; Jensen, 1996):

Sensor          IR Band    R Band
Landsat MSS     7          5
SPOT XS         3          2
Landsat TM      4          3
NOAA AVHRR      2          1
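As an illustration, NDVI and TNDVI for Landsat TM (IR = band 4, R = band 3 from the table above) might be computed as follows; the epsilon guard is an assumption added to avoid division by zero:

import numpy as np

def ndvi(ir, red):
    """NDVI = (IR - R) / (IR + R), computed in floating point."""
    ir = ir.astype(np.float64)
    red = red.astype(np.float64)
    return (ir - red) / (ir + red + 1e-6)

def tndvi(ir, red):
    """Transformed NDVI = sqrt(NDVI + 0.5); clipped at 0 before the root."""
    return np.sqrt(np.clip(ndvi(ir, red) + 0.5, 0.0, None))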

Image Algebra

Image algebra is a general term used to describe operations that combine the pixels of two or more raster layers in mathematical combinations. For example, the calculation:

(infrared band) - (red band)

DNir - DNred


yields a simple, yet very useful, measure of the presence of vegetation. At the other extreme is the Tasseled Cap calculation (described earlier in this section), which uses a more complicated mathematical combination of as many as six bands to define vegetation. Band ratios, such as:

TM5 ÷ TM7 = clay minerals

are also commonly used. These are derived from the absorption spectra of the material of interest. The numerator is a baseline of background absorption and the denominator is an absorption peak.

See "Raster Data" on page 1 for more information on absorption/reflection spectra.

NDVI is a combination of addition, subtraction, and division:

NDVI = (IR - R) ÷ (IR + R)

Hyperspectral Image Processing

For a complete treatment of hyperspectral image processing, please see the IMAGINE Spectral Analysis™ User's Guide.

Independent Component Analysis

Any given remote sensing image can be decomposed into several features. The term 'feature' here refers to remote sensing scene objects (for example, vegetation types or urban materials) with similar spectral characteristics. Therefore, the main objective of a feature extraction technique is to accurately retrieve these features. The extracted features can be subsequently utilized for improving the performance of various remote sensing applications (for example, classification, target detection, unmixing, and so forth).

Independent component analysis (ICA) is a high order feature extraction technique. Principal Components Analysis (PCA) and Minimum Noise Fraction (MNF) model the data using a Gaussian distribution; however, most remote sensing data do not follow a Gaussian distribution. ICA exploits the higher order statistical characteristics of multispectral and hyperspectral imagery, such as skewness and kurtosis. Skewness is a measure of asymmetry in an image histogram. Kurtosis is a measure of peakedness or flatness of an image histogram (that is, departure from a Normal Distribution).



Unlike the Principal Component axes obtained by PCA, which exhibit a precedence in their order (that is, the direction of the first principal axis denotes a representation with maximum data variance, the second principal axis the second highest data variance, and so forth), the labeling of Independent Component axes does not imply any order. Additionally, it is worth noting that these axes are not restricted to being orthogonal and thus lead to transformed data that is not only uncorrelated (second order statistics) but also independent (higher order statistics).

ICA performs a linear transformation of the spectral bands such that the resulting components are decorrelated and independent. Each independent component (IC) will contain information corresponding to a specific feature in the original image. For a detailed description of the mathematical formulation of ICA, refer to Shah, C. A., 2003 and Comon, P., 1994.

Component Ordering

It is always desirable to have the number of components greater than the number of features in order to ensure that all the features are recovered as ICs. In cases where the number of features happens to be smaller than the number of components, the additional components will contain very little feature information and will resemble a noisy image. In ERDAS IMAGINE, you are provided with the option of ordering the ICs so that the noisy images will be the last few components and can be easily eliminated from further analysis. The available options for component ordering are:

• None

• Correlation Coefficient

• Skewness

• Kurtosis

• Combinations of the above

• Entropy

• Negentropy

None: No ordering is applied to the components; as a result, the ICs appear in an arbitrary order. Noisy components may occur at any position in the image stack.

Three basic statistical measures provided for component ordering are as follows:


Correlation Coefficient: The correlation coefficient (RXY) between two images X and Y is defined as:

RXY = [ Σ (i=1 to P) (Xi - X̄)(Yi - Ȳ) ] ÷ sqrt[ Σ (i=1 to P) (Xi - X̄)² × Σ (i=1 to P) (Yi - Ȳ)² ],  0 ≤ RXY ≤ 1

Where:
P is the total number of image pixels.
X̄ and Ȳ are the mean values of images X and Y.

The correlation coefficient is a measure of similarity between two images; the higher the correlation, the greater the similarity between their pixel values. The ICs are ordered based on their correlation with the spectral bands. ICs with low correlation correspond to noisy images and will therefore be lower in the image stack (that is, higher band numbers).

Skewness: Skewness is a measure of asymmetry in an image histogram. The skewness of image X is defined as:

skewnessX = [ Σ (i=1 to P) (Xi - X̄)³ ÷ (P - 1) ] ÷ σX³,  0 ≤ skewnessX < ∞

where σX is the standard deviation of image X. Output bands are ordered by increasing asymmetry.

Kurtosis: Kurtosis is a measure of peakedness or flatness of an image histogram (that is, departure from a Normal Distribution). The kurtosis of image X is defined as:

kurtosisX = [ Σ (i=1 to P) (Xi - X̄)⁴ ÷ (P - 1) ] ÷ σX⁴ - 3,  -3 ≤ kurtosisX < ∞


An image with a normally distributed histogram has zero skewness and kurtosis.

Combinations: Two combinations of the above three measures are also options for component ordering in ERDAS IMAGINE:

| skewnessX × kurtosisX |

| RXY × skewnessX × kurtosisX |

Entropy: Entropy is a measure of image information content. The entropy of image X is defined as:

entropyX = - Σ (k=0 to G-1) P(k) log2( P(k) )

Where:
G is the number of gray levels
k is the gray level
P(k) is the probability of gray level k



Negentropy: Negentropy is a measure of how far the distribution of a particular image departs from a normal distribution. The negentropy of image X is defined as:

negentropyX = (1 ÷ 12)(skewnessX)² + (1 ÷ 48)(kurtosisX)²

and is proportional to its skewness and kurtosis. Lower values of skewness/kurtosis/negentropy correspond to ICs resembling noisy images.
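A minimal NumPy sketch of the skewness, kurtosis, and negentropy measures as defined above (illustrative only; not the ERDAS IMAGINE ordering code):

import numpy as np

def ordering_measures(component):
    """Skewness, kurtosis, and negentropy of one IC, per the definitions above.
    component is a 2D image array."""
    x = component.ravel().astype(np.float64)
    p = x.size
    dev = x - x.mean()
    sigma = x.std()
    skew = (dev**3).sum() / ((p - 1) * sigma**3)
    kurt = (dev**4).sum() / ((p - 1) * sigma**4) - 3.0
    negentropy = skew**2 / 12.0 + kurt**2 / 48.0
    # components with the lowest measures resemble noise and would be
    # ordered last in the image stack
    return skew, kurt, negentropy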

Band Generation for Multispectral Imagery

It must be noted that the number of desired components cannot exceed the number of spectral bands in the imagery. This should not be of concern when processing hyperspectral imagery, where the number of spectral bands is significantly higher than the number of features in the scene. However, feature extraction from multispectral imagery necessitates the generation of additional spectral bands, as explained below. A linear combination of the original spectral bands will not lead to the additional information required for ICA feature extraction from multispectral imagery. Therefore, you should generate additional spectral bands through non-linear operations such as log(Xi), Xi², and Xi · Xj (where Xi is a multispectral band and i ≠ j).
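A hedged sketch of such band generation (the +1 offset guarding log(0) is an assumption; the set of operations follows the examples above):

import numpy as np

def generate_bands(bands):
    """Append non-linear bands -- log(Xi), Xi**2, Xi*Xj -- to a
    (k, rows, cols) multispectral stack, as suggested above."""
    extra = [np.log(b + 1.0) for b in bands]        # +1 guards log(0); an assumption
    extra += [b.astype(np.float64)**2 for b in bands]
    k = len(bands)
    extra += [bands[i] * bands[j] for i in range(k) for j in range(i + 1, k)]
    return np.concatenate([bands.astype(np.float64), np.stack(extra)])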

Remote Sensing Applications for ICs

More information and examples of Spectral Unmixing, Shadow Removal, and Classification may be found in Shah et al, 2007.

• Visual analysis:

ICs can be used to improve the visual interpretability through component color coding. A similar approach can be used for enhanced visual interpretability of hyperspectral imagery employing ICs.

• Resolution merge:

Improved integration of imagery at different spatial resolution can be attained by substituting a high spatial resolution image for an IC followed by an inverse transformation.

• Spectral unmixing:



ICA can be employed for linear spectral unmixing when you have no prior information regarding the spectral response of features present in the scene. The formulation of each band in the case of three features can be expressed as:

spectral band(λ) = A1(λ) · feature1 + A2(λ) · feature2 + A3(λ) · feature3

where Ai(λ) is the sensor response to feature i at wavelength λ. Ai(λ) and the ith feature estimated by ICA correspond to the spectral response and abundance, respectively, of the ith feature in the spectral band at wavelength λ. Hence, ICA extracted features can be further analyzed for identifying the proportion of each feature in a pixel.

• Shadow detection:

High spatial resolution multispectral images necessitate shadow detection to facilitate improved feature analysis. By employing band combinations (for example, band ratio) for band generation, the spectral difference between the shadow and the shadow occluded features can be enhanced. ICs obtained from these bands would recover the shadow in one of the components.

• Land use/ land cover classification:

ICs can be further analyzed based on their spectral, textural, and contextual information in order to obtain an improved thematic map.

• Multi temporal data analysis:

Since feature based change detection techniques necessitate extraction of features with high accuracy, ICs are well suited in the analysis of multi temporal data.

• Anomaly/Target detection:

In cases where there is no prior information regarding the material of the target features present in the scene, spectra from libraries cannot be used for detecting them. ICA, however, when employed for such applications, will separate out the anomalous features (that is, features with spectral response significantly different from other features present in the scene); those anomalous features are contained in the independent components. These components can be further analyzed for improved anomaly/target detection.



Tips and Tricks

• Band generation:

Use of meaningful band combinations for non-linear band generation in the case of multispectral imagery improves the performance of ICA in extracting features. For example, employing a Normalized Difference Vegetation Index (NDVI) as an additional band would certainly enhance the performance of ICA in extracting vegetation features.

• Desired number of components:

A typical scene imaged by a hyperspectral sensor would not contain more than 10 to 15 features. Hence, the number of desired components should be restricted to less than 20 in order to ensure accurate and efficient results.

• Image background with zero pixel values:

In cases where the images have background with zero values, use a subset of that image by eliminating background pixels for ICA feature extraction.

• Visual inspection of ICs:

In addition to using any of the component ordering techniques, the ICs may be visually inspected to ensure that they do not resemble noisy images. Also, the number of desired components impacts the result of ICA. Consider the following two scenarios: first, ICA is performed on an image and the desired number of components is 3; next, ICA is performed on the same image and this time the number of desired components is 4. The 3 ICs obtained in the first case would not be identical to any of the 4 ICs in the second case.


Fourier Analysis

Image enhancement techniques can be divided into two basic categories: point and neighborhood. Point techniques enhance the pixel based only on its value, with no concern for the values of neighboring pixels. These techniques include contrast stretches (nonadaptive), classification, and level slices. Neighborhood techniques enhance a pixel based on the values of surrounding pixels. As a result, these techniques require the processing of a possibly large number of pixels for each output pixel. The most common way of implementing these enhancements is via a moving window convolution. However, as the size of the moving window increases, the number of requisite calculations becomes enormous. An enhancement that requires a convolution operation in the spatial domain can be implemented as a simple multiplication in frequency space—a much faster calculation.

In ERDAS IMAGINE, the FFT is used to convert a raster image from the spatial (normal) domain into a frequency domain image. The FFT calculation converts the image into a series of two-dimensional sine waves of various frequencies. The Fourier image itself cannot be easily viewed, but the magnitude of the image can be calculated, which can then be displayed either in the Viewer or in the FFT Editor. Analysts can edit the Fourier image to reduce noise or remove periodic features, such as striping. Once the Fourier image is edited, it is then transformed back into the spatial domain by using an IFFT. The result is an enhanced version of the original image.

This section focuses on the Fourier editing techniques available in the FFT Editor. Some rules and guidelines for using these tools are presented in this document. Also included are some examples of techniques that generally work for specific applications, such as striping.

NOTE: You may also want to refer to the works cited at the end of this section for more information.

The basic premise behind a Fourier transform is that any one-dimensional function, f(x) (which might be a row of pixels), can be represented by a Fourier series consisting of some combination of sine and cosine terms and their associated coefficients. For example, a line of pixels with a high spatial frequency gray scale pattern might be represented in terms of a single coefficient multiplied by a sin(x) function. High spatial frequencies are those that represent frequent gray scale changes in a short pixel distance. Low spatial frequencies represent infrequent gray scale changes that occur gradually over a relatively large number of pixel distances. A more complicated function, f(x), might have to be represented by many sine and cosine terms with their associated coefficients.


Figure 154: One-Dimensional Fourier Analysis

Figure 154 shows how a function f(x) can be represented as a linear combination of sine and cosine. In this example the function is a square wave whose cosine coefficients are zero, leaving only sine terms. The first three terms of the Fourier series (sin x, (1/3) sin 3x, and (1/5) sin 5x) are plotted in the upper right graph, and the plot of the sum is shown below it. After nine iterations, the Fourier series is approaching the original function.

A Fourier transform is a linear transformation that allows calculation of the coefficients necessary for the sine and cosine terms to adequately represent the image. This theory is used extensively in electronics and signal processing, where electrical signals are continuous and not discrete. Therefore, the discrete Fourier transform (DFT) has been developed. Because of the computational load in calculating the values for all the sine and cosine terms along with the coefficient multiplications, a highly efficient version of the DFT was developed and called the FFT.

To handle images, which consist of many one-dimensional rows of pixels, a two-dimensional FFT has been devised that incrementally uses one-dimensional FFTs in each direction and then combines the result. These images are symmetrical about the origin.

Applications

Fourier transformations are typically used for the removal of noise such as striping, spots, or vibration in imagery by identifying periodicities (areas of high spatial frequency). Fourier editing can be used to remove regular errors in data such as those caused by sensor anomalies (for example, striping). This analysis technique can also be used across bands as another form of pattern/feature recognition.



FFT

The FFT calculation is:

F(u, v) = Σ (x=0 to M-1) Σ (y=0 to N-1) f(x, y) e^(-j2πux/M - j2πvy/N)

Where:
M = the number of pixels horizontally
N = the number of pixels vertically
u, v = spatial frequency variables
e = 2.71828, the natural logarithm base
j = the imaginary component of a complex number

The number of pixels horizontally and vertically must each be a power of two. If the dimensions of the input image are not a power of two, they are padded up to the next highest power of two. There is more information about this later in this section.

Source: Modified from Oppenheim and Schafer, 1975; Press et al, 1988.

Images computed by this algorithm are saved with an .fft file extension.

You should run a Fourier Magnitude transform on an .fft file before viewing it in the Viewer. The FFT Editor automatically displays the magnitude without further processing.

Fourier Magnitude

The raster image generated by the FFT calculation is not an optimum image for viewing or editing. Each pixel of a Fourier image is a complex number (that is, it has two components: real and imaginary). For display as a single image, these components are combined in a root-sum of squares operation. Also, since the dynamic range of Fourier spectra vastly exceeds the range of a typical display device, the Fourier Magnitude calculation involves a logarithmic function. Finally, a Fourier image is symmetric about the origin (u, v = 0, 0). If the origin is plotted at the upper left corner, the symmetry is more difficult to see than if the origin is at the center of the image. Therefore, in the Fourier magnitude image, the origin is shifted to the center of the raster array.

In this transformation, each .fft layer is processed twice. First, the maximum magnitude, |x|max, is computed. Then, the following computation is performed for each FFT element magnitude x:

y(x) = 255.0 × ln[ (|x| ÷ |x|max) × (e - 1) + 1 ]



Where:
x = input FFT element
y = the normalized log magnitude of the FFT element
|x|max = the maximum magnitude
e = 2.71828, the natural logarithm base
| | = the magnitude operator

This function was chosen so that y would be proportional to the logarithm of a linear function of x, with y(0)=0 and y (|x|max) = 255.
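A minimal NumPy sketch of this magnitude display, assuming the usual fftshift convention places the origin at the array center (illustrative; not the FFT Editor code):

import numpy as np

def fourier_magnitude(image):
    """Shift the origin to the center and apply the log scaling above."""
    F = np.fft.fftshift(np.fft.fft2(image))   # origin (u, v = 0, 0) to center
    mag = np.abs(F)                           # root-sum of squares of real/imaginary
    return 255.0 * np.log((mag / mag.max()) * (np.e - 1.0) + 1.0)
    # y(0) = 0 and y(|x|max) = 255, as noted above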

In Figure 155, Image A is one band of a badly striped Landsat TM scene. Image B is the Fourier Magnitude image derived from the Landsat image.

Figure 155: Example of Fourier Magnitude

Note that, although Image A has been transformed into Image B, these raster images are very different symmetrically. The origin of Image A is at (x, y) = (0, 0) in the upper left corner. In Image B, the origin (u, v) = (0, 0) is in the center of the raster. The low frequencies are plotted near this origin while the higher frequencies are plotted further out. Generally, the majority of the information in an image is in the low frequencies. This is indicated by the bright area at the center (origin) of the Fourier image.



It is important to realize that a position in a Fourier image, designated as (u, v), does not always represent the same frequency, because it depends on the size of the input raster image. A large spatial domain image contains components of lower frequency than a small spatial domain image. As mentioned, these lower frequencies are plotted nearer to the center (u, v = 0, 0) of the Fourier image. Note that the units of spatial frequency are inverse length, for example, m⁻¹.

The sampling increments in the spatial and frequency domain are related by:

Δu = 1 ÷ (M Δx)
Δv = 1 ÷ (N Δy)

Where:
M = horizontal image size in pixels
N = vertical image size in pixels
Δx = pixel size
Δy = pixel size

For example, converting a 512 × 512 Landsat TM image (pixel size = 28.5 m) into a Fourier image:

Δu = Δv = 1 ÷ (512 × 28.5) = 6.85 × 10⁻⁵ m⁻¹

u or v    Frequency
0         0
1         6.85 × 10⁻⁵ m⁻¹
2         13.7 × 10⁻⁵ m⁻¹

If the Landsat TM image was 1024 × 1024:

Δu = Δv = 1 ÷ (1024 × 28.5) = 3.42 × 10⁻⁵ m⁻¹

u or v    Frequency
0         0
1         3.42 × 10⁻⁵ m⁻¹
2         6.85 × 10⁻⁵ m⁻¹


So, as noted above, the frequency represented by a (u, v) position depends on the size of the input image. For the above calculation, the sample images are 512 × 512 and 1024 × 1024 (powers of two). These were selected because the FFT calculation requires that the height and width of the input image be a power of two (although the image need not be square). In practice, input images usually do not meet this criterion. Three possible solutions are available in ERDAS IMAGINE:

• Subset the image.

• Pad the image—the input raster is increased in size to the next power of two by embedding it in a field of the mean value of the entire input image.

• Resample the image so that its height and width are powers of two.

Figure 156: The Padding Technique (a 400 × 300 image embedded in a 512 × 512 field of the mean value)

The padding technique is automatically performed by the FFT program. It produces a minimum of artifacts in the output Fourier image. If the image is subset using a power of two (that is, 64 × 64, 128 × 128, 64 × 128), no padding is used.
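A sketch of this padding step under the description above (NumPy; the function name is illustrative, and the FFT program performs this automatically):

import numpy as np

def pad_to_power_of_two(image):
    """Embed the image in a field of its mean value, sized to the next
    power of two in each dimension, as described above."""
    rows, cols = image.shape
    r2 = 1 << (rows - 1).bit_length()     # next power of two >= rows
    c2 = 1 << (cols - 1).bit_length()     # next power of two >= cols
    padded = np.full((r2, c2), image.mean(), dtype=np.float64)
    padded[:rows, :cols] = image
    return padded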

IFFT

The IFFT computes the inverse two-dimensional FFT of the spectrum stored.

• The input file must be in the compressed .fft format described earlier (that is, output from the FFT or FFT Editor).

• If the original image was padded by the FFT program, the padding is automatically removed by IFFT.



• This program creates (and deletes, upon normal termination) a temporary file large enough to contain one entire band of .fft data.

The specific expression calculated by this program is:

f(x, y) = [1 ÷ (MN)] × Σ (u=0 to M-1) Σ (v=0 to N-1) F(u, v) e^(j2πux/M + j2πvy/N)

for 0 ≤ x ≤ M - 1, 0 ≤ y ≤ N - 1

Where:
M = the number of pixels horizontally
N = the number of pixels vertically
u, v = spatial frequency variables
e = 2.71828, the natural logarithm base

Source: Modified from Oppenheim and Schafer, 1975; Press et al, 1988.

Images computed by this algorithm are saved with an .ifft.img file extension by default.

Filtering

Operations performed in the frequency (Fourier) domain can be visualized in the context of the familiar convolution function. The mathematical basis of this interrelationship is the convolution theorem, which states that a convolution operation in the spatial domain is equivalent to a multiplication operation in the frequency domain:

g(x, y) = h(x, y) * f(x, y) is equivalent to G(u, v) = H(u, v) × F(u, v)

Where:
f(x, y) = input image
h(x, y) = position invariant operation (convolution kernel)
g(x, y) = output image
G, F, H = Fourier transforms of g, f, h

The names high-pass, low-pass, and high-frequency indicate that these convolution functions derive from the frequency domain.



Low-Pass Filtering

The simplest example of this relationship is the low-pass kernel. The name, low-pass kernel, is derived from a filter that would pass low frequencies and block (filter out) high frequencies. In practice, this is easily achieved in the spatial domain by the M = N = 3 kernel:

1 1 1
1 1 1
1 1 1

Obviously, as the size of the image and, particularly, the size of the low-pass kernel increases, the calculation becomes more time-consuming. Depending on the size of the input image and the size of the kernel, it can be faster to generate a low-pass image via Fourier processing. Figure 157 compares Direct and Fourier domain processing for finite area convolution.

Figure 157: Comparison of Direct and Fourier Domain Processing (kernel neighborhood size versus input image size, showing the regions where direct or Fourier processing is more efficient)

Source: Pratt, 1991

In the Fourier domain, the low-pass operation is implemented by attenuating the pixels whose frequencies satisfy:

u² + v² > D0²

D0 is often called the cutoff frequency.



As mentioned, the low-pass information is concentrated toward the origin of the Fourier image. Thus, a smaller radius (r) has the same effect as a larger N (where N is the size of a kernel) in a spatial domain low-pass convolution. As was pointed out earlier, the frequency represented by a particular u, v (or r) position depends on the size of the input image. Thus, a low-pass operation of r = 20 is equivalent to a spatial low-pass of various kernel sizes, depending on the size of the input image. For example:

Image Size    Fourier Low-Pass (r)    Convolution Low-Pass (N)
64 × 64       50                      3
              30                      3.5
              20                      5
              10                      9
              5                       14
128 × 128     20                      13
              10                      22
256 × 256     20                      25
              10                      42

This table shows that using a window on a 64 × 64 Fourier image with a radius of 50 as the cutoff is the same as using a 3 × 3 low-pass kernel on a 64 × 64 spatial domain image.

High-Pass Filtering

Just as images can be smoothed (blurred) by attenuating the high-frequency components of an image using low-pass filters, images can be sharpened and edge-enhanced by attenuating the low-frequency components using high-pass filters. In the Fourier domain, the high-pass operation is implemented by attenuating the pixels whose frequencies satisfy:

u² + v² < D0²



Windows

The attenuation discussed above can be done in many different ways. In ERDAS IMAGINE Fourier processing, five window functions are provided to achieve different types of attenuation:

• Ideal

• Bartlett (triangular)

• Butterworth

• Gaussian

• Hanning (cosine)

Each of these windows must be defined when a frequency domain process is used. This application is perhaps easiest understood in the context of the high-pass and low-pass filter operations. Each window is discussed in more detail below.

Ideal

The simplest low-pass filtering is accomplished using the ideal window, so named because its cutoff point is absolute. Note that in Figure 158 the cross section is ideal.

Figure 158: An Ideal Cross Section

H(u, v) = 1 if D(u, v) ≤ D0

H(u, v) = 0 if D(u, v) > D0

All frequencies inside a circle of a radius D0 are retained completely (passed), and all frequencies outside the radius are completely attenuated. The point D0 is termed the cutoff frequency.
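A minimal NumPy sketch of the ideal window, assuming the origin is placed at the array center (array conventions and names are assumptions, not the FFT Editor implementation):

import numpy as np

def ideal_window(shape, d0, low_pass=True):
    """H(u, v) per the definition above: 1 inside radius D0, 0 outside
    (reversed for high-pass)."""
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    d = np.hypot.outer(u, v)              # D(u, v) = distance from the origin
    h = (d <= d0).astype(np.float64)
    return h if low_pass else 1.0 - h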

High-pass filtering using the ideal window looks like the following illustration:



Figure 159: High-Pass Filtering Using the Ideal Window

H(u, v) = 0 if D(u, v) ≤ D0

H(u, v) = 1 if D(u, v) > D0

All frequencies inside a circle of a radius D0 are completely attenuated, and all frequencies outside the radius are retained completely (passed). A major disadvantage of the ideal filter is that it can cause ringing artifacts, particularly if the radius (r) is small. The smoother functions (for example, Butterworth and Hanning) minimize this effect.

Bartlett

Filtering using the Bartlett window is a triangular function, as shown in the following low- and high-pass cross sections:

Figure 160: Filtering Using the Bartlett Window

Butterworth, Gaussian, and Hanning

The Butterworth, Gaussian, and Hanning windows are all smooth and greatly reduce the effect of ringing. The differences between them are minor and are of interest mainly to experts. For most normal types of Fourier image enhancement, they are essentially interchangeable.

The Butterworth window reduces the ringing effect because it does not contain abrupt changes in value or slope. The following low- and high-pass cross sections illustrate this:



Figure 161: Filtering Using the Butterworth Window

The equation for the low-pass Butterworth window is:

H(u, v) = 1 ÷ (1 + [D(u, v) ÷ D0]^(2n))

NOTE: The Butterworth window approaches its window center gain asymptotically.

The equation for the Gaussian low-pass window is:

H(u, v) = e^(-[x ÷ D0]²)

The equation for the Hanning low-pass window is:

H(u, v) = (1/2) × (1 + cos(πx ÷ 2D0)) for 0 ≤ x ≤ 2D0
H(u, v) = 0 otherwise
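As an illustration, the low-pass Butterworth window above might be generated as follows (NumPy; the centered-origin convention and default order n = 1 are assumptions):

import numpy as np

def butterworth_low_pass(shape, d0, n=1):
    """Low-pass Butterworth window H(u, v) = 1 / (1 + (D/D0)^(2n)),
    with the origin at the array center."""
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    d = np.hypot.outer(u, v)              # D(u, v)
    return 1.0 / (1.0 + (d / d0) ** (2 * n))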

Fourier Noise Removal

Occasionally, images are corrupted by noise that is periodic in nature. An example of this is the scan lines that are present in some TM images. When these images are transformed into Fourier space, the periodic line pattern becomes a radial line. The Fourier Analysis functions provide two main tools for reducing noise in images:

• editing

• automatic removal of periodic noise



Editing

In practice, it has been found that radial lines centered at the Fourier origin (u, v = 0, 0) are best removed using back-to-back wedges centered at (0, 0). It is possible to remove these lines using very narrow wedges with the Ideal window. However, the sudden transitions resulting from zeroing-out sections of a Fourier image cause ringing of the image when it is transformed back into the spatial domain. This effect can be lessened by using a less abrupt window, such as Butterworth.

Other types of noise can produce artifacts, such as lines not centered at u, v = 0, 0 or circular spots in the Fourier image. These can be removed using the tools provided in the FFT Editor. As these artifacts are always symmetrical in the Fourier magnitude image, editing tools operate on both components simultaneously. The FFT Editor contains tools that enable you to attenuate a circular or rectangular region anywhere on the image.

Automatic Periodic Noise Removal

The use of the FFT Editor, as described above, enables you to selectively and accurately remove periodic noise from any image. However, operator interaction and a bit of trial and error are required. The automatic periodic noise removal algorithm has been devised to address images degraded uniformly by striping or other periodic anomalies. Use of this algorithm requires a minimum of input from you.

The image is first divided into 128 × 128 pixel blocks. The Fourier Transform of each block is calculated and the log-magnitudes of each FFT block are averaged. The averaging removes all frequency domain quantities except those that are present in each block (that is, some sort of periodic interference). The average power spectrum is then used as a filter to adjust the FFT of the entire image. When the IFFT is performed, the result is an image that should have any periodic noise eliminated or significantly reduced. This method is partially based on the algorithms outlined in Cannon (1983) and Srinivasan et al (1988).

Select the Periodic Noise Removal option from Image Interpreter to use this function.

Homomorphic Filtering

Homomorphic filtering is based upon the principle that an image may be modeled as the product of illumination and reflectance components:

I(x, y) = i(x, y) × r(x, y)


Where:
I(x, y) = image intensity (DN) at pixel x, y
i(x, y) = illumination of pixel x, y
r(x, y) = reflectance at pixel x, y

The illumination image is a function of lighting conditions and shadows. The reflectance image is a function of the object being imaged. A log function can be used to separate the two components (i and r) of the image:

ln I(x, y) = ln i(x, y) + ln r(x, y)

This transforms the image from multiplicative to additive superposition. With the two component images separated, any linear operation can be performed. In this application, the image is now transformed into Fourier space. Because the illumination component usually dominates the low frequencies, while the reflectance component dominates the higher frequencies, the image may be effectively manipulated in the Fourier domain.

By using a filter on the Fourier image, which increases the high-frequency components, the reflectance image (related to the target material) may be enhanced, while the illumination image (related to the scene illumination) is de-emphasized.

Select the Homomorphic Filter option from Image Interpreter to use this function.

By applying an IFFT followed by an exponential function, the enhanced image is returned to the normal spatial domain. The flow chart in Figure 162 summarizes the homomorphic filtering process in ERDAS IMAGINE.

Figure 162: Homomorphic Filtering Process

[Figure 162 flow: Input Image (i × r) → Log → Log Image (ln i + ln r) → FFT → Fourier Image (i = low freq., r = high freq.) → Butterworth Filter → Filtered Fourier Image (i decreased, r increased) → IFFT → Exponential → Enhanced Image]
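A hedged end-to-end sketch of this flow (the Butterworth cutoff and the low/high gains below are illustrative assumptions, not ERDAS IMAGINE defaults):

import numpy as np

def homomorphic(image, d0=30.0, low_gain=0.5, high_gain=2.0):
    """Log -> FFT -> high-emphasis Butterworth filter -> IFFT -> exponential."""
    log_img = np.log1p(image.astype(np.float64))      # ln(i x r) = ln i + ln r
    F = np.fft.fftshift(np.fft.fft2(log_img))
    rows, cols = image.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    d = np.hypot.outer(u, v)
    h = 1.0 / (1.0 + (d / d0) ** 2)                   # Butterworth low-pass
    gain = low_gain * h + high_gain * (1.0 - h)       # de-emphasize i, boost r
    out = np.fft.ifft2(np.fft.ifftshift(F * gain)).real
    return np.expm1(out)                              # back from log space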


As mentioned earlier, if an input image is not a power of two, the ERDAS IMAGINE Fourier analysis software automatically pads the image to the next largest size to make it a power of two. For manual editing, this causes no problems. However, in automatic processing, such as the homomorphic filter, the artifacts induced by the padding may have a deleterious effect on the output image. For this reason, it is recommended that images that are not a power of two be subset before being used in an automatic process.

A detailed description of the theory behind Fourier series and Fourier transforms is given in Gonzalez and Wintz (1977). See also Oppenheim and Schafer (1975) and Press et al (1988).

Radar Imagery Enhancement

The nature of the surface phenomena involved in radar imaging is inherently different from that of visible/infrared (VIS/IR) images. When VIS/IR radiation strikes a surface it is either absorbed, reflected, or transmitted. The absorption is based on the molecular bonds in the (surface) material. Thus, this imagery provides information on the chemical composition of the target. When radar microwaves strike a surface, they are reflected according to the physical and electrical properties of the surface, rather than the chemical composition. The strength of radar return is affected by slope, roughness, and vegetation cover. The conductivity of a target area is related to the porosity of the soil and its water content. Consequently, radar and VIS/IR data are complementary; they provide different information about the target area. An image in which these two data types are intelligently combined can present much more information than either image by itself.

See "Raster Data" on page 1 and "Raster and Vector Data Sources" on page 55 for more information on radar data.

This section describes enhancement techniques that are particularly useful for radar imagery. While these techniques can be applied to other types of image data, this discussion focuses on the special requirements of radar imagery enhancement. The ERDAS IMAGINE Radar Interpreter provides a sophisticated set of image processing tools designed specifically for use with radar imagery. This section describes the functions of the ERDAS IMAGINE Radar Interpreter.

For information on the Radar Image Enhancement function, see the section on Radiometric Enhancement on page 463.


Speckle Noise

Speckle noise is commonly observed in radar (microwave or millimeter wave) sensing systems, although it may appear in any type of remotely sensed image utilizing coherent radiation. An active radar sensor gives off a burst of coherent radiation that reflects from the target, unlike a passive microwave sensor that simply receives the low-level radiation naturally emitted by targets. Like the light from a laser, the waves emitted by active sensors travel in phase and interact minimally on their way to the target area. After interaction with the target area, these waves are no longer in phase. This is because of the different distances they travel from targets, or single versus multiple bounce scattering.

Once out of phase, radar waves can interact to produce light and dark pixels known as speckle noise. Speckle noise must be reduced before the data can be effectively utilized. However, the image processing programs used to reduce speckle noise produce changes in the image.

Because any image processing done before removal of the speckle results in the noise being incorporated into and degrading the image, you should not rectify, correct to ground range, or in any way resample, enhance, or classify the pixel values before removing speckle noise. Functions using Nearest Neighbor are technically permissible, but not advisable.

Since different applications and different sensors necessitate different speckle removal models, ERDAS IMAGINE Radar Interpreter includes several speckle reduction algorithms:

• Mean filter

• Median filter

• Lee-Sigma filter

• Local Region filter

• Lee filter

• Frost filter

• Gamma-MAP filter

NOTE: Speckle noise in radar images cannot be completely removed. However, it can be reduced significantly.

These filters are described in the following sections:


Mean Filter

The Mean filter is a simple calculation. The pixel of interest (center of window) is replaced by the arithmetic average of all values within the window. This filter does not remove the aberrant (speckle) value; it averages it into the data.

In theory, a bright and a dark pixel within the same window would cancel each other out. This consideration would argue in favor of a large window size (for example, 7 × 7). However, averaging results in a loss of detail, which argues for a small window size.

In general, this is the least satisfactory method of speckle reduction. It is useful for applications where loss of resolution is not a problem.

Median Filter

A better way to reduce speckle, but still simplistic, is the Median filter. This filter operates by arranging all DN values in sequential order within the window that you define. The pixel of interest is replaced by the value in the center of this distribution. A Median filter is useful for removing pulse or spike noise. Pulse functions of less than one-half of the moving window width are suppressed or eliminated. In addition, step functions or ramp functions are retained. The effect of Mean and Median filters on various signals is shown (for one dimension) in Figure 163.

Figure 163: Effects of Mean and Median Filters

[Figure 163 compares the original, Mean filtered, and Median filtered versions of step, ramp, single pulse, and double pulse signals.]


The Median filter is useful for noise suppression in any image. It does not affect step or ramp functions; it is an edge preserving filter (Pratt, 1991). It is also applicable in removing pulse function noise, which results from the inherent pulsing of microwaves. An example of the application of the Median filter is the removal of dead-detector striping, as found in Landsat 4 TM data (Crippen, 1989a).
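For experimentation, both filters are available as standard moving-window operations, for example via SciPy (the window size is user-chosen, as discussed above; this is not the Radar Interpreter implementation):

import numpy as np
from scipy import ndimage

def speckle_filters(radar_image, size=3):
    """Mean and Median filters over a size x size moving window."""
    mean_out = ndimage.uniform_filter(radar_image.astype(np.float64), size)
    median_out = ndimage.median_filter(radar_image, size)
    return mean_out, median_out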

Local Region Filter

The Local Region filter divides the moving window into eight regions based on angular position (North, South, East, West, NW, NE, SW, and SE). Figure 164 shows a 5 × 5 moving window and the regions of the Local Region filter.

Figure 164: Regions of Local Region Filter

For each region, the variance is calculated as follows:

Variance = [ Σ (DNx,y - Mean)² ] ÷ (n - 1)

Source: Nagao and Matsuyama, 1978

The algorithm compares the variance values of the regions surrounding the pixel of interest. The pixel of interest is replaced by the mean of all DN values within the region with the lowest variance (that is, the most uniform region). A region with low variance is assumed to have pixels minimally affected by wave interference, yet very similar to the pixel of interest. A region of low variance is probably such for several surrounding pixels.


The result is that the output image is composed of numerous uniform areas, the size of which is determined by the moving window size. In practice, this filter can be utilized sequentially 2 or 3 times, increasing the window size. The resultant output image is an appropriate input to a classification application.

Sigma and Lee Filters

The Sigma and Lee filters utilize the statistical distribution of the DN values within the moving window to estimate what the pixel of interest should be. Speckle in imaging radar can be mathematically modeled as multiplicative noise with a mean of 1. The standard deviation of the noise can be mathematically defined as:

Standard Deviation = sqrt(VARIANCE) ÷ MEAN = Coefficient of Variation = sigma (σ)

The coefficient of variation, as a scene-derived parameter, is used as an input parameter in the Sigma and Lee filters. It is also useful in evaluating and modifying VIS/IR data for input to a 4-band composite image, or in preparing a 3-band ratio color composite (Crippen, 1989a).

It can be assumed that imaging radar data noise follows a Gaussian distribution. This would yield a theoretical value for Standard Deviation (SD) of .52 for 1-look radar data and SD = .26 for 4-look radar data. Table 50 gives theoretical coefficient of variation values for various look-average radar scenes:

Table 50: Theoretical Coefficient of Variation Values

# of Looks (scenes)    Coef. of Variation Value
1                      .52
2                      .37
3                      .30
4                      .26
6                      .21
8                      .18

The Lee filters are based on the assumption that the mean and variance of the pixel of interest are equal to the local mean and variance of all pixels within the moving window you select. The actual calculation used for the Lee filter is:

DNout = [Mean] + K [DNin - Mean]

Where:
Mean = average of pixels in a moving window
K = Var(x) ÷ ( [Mean]² σ² + Var(x) )


The variance of x [Var(x)] is defined as:

Var(x) = ( ([Variance within window] + [Mean within window]²) ÷ ([Sigma]² + 1) ) - [Mean within window]²

Source: Lee, 1981
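A minimal NumPy/SciPy sketch of the Lee calculation above (the clipping of negative variance estimates and the small epsilon are assumptions added for numerical safety; this is not the Radar Interpreter code):

import numpy as np
from scipy import ndimage

def lee_filter(dn, size=3, sigma=0.26):
    """Lee filter per the equations above; sigma is the noise coefficient
    of variation (for example, .26 for 4-look data)."""
    dn = dn.astype(np.float64)
    mean = ndimage.uniform_filter(dn, size)           # mean within window
    mean_sq = ndimage.uniform_filter(dn**2, size)
    var_win = mean_sq - mean**2                       # variance within window
    var_x = (var_win + mean**2) / (sigma**2 + 1.0) - mean**2
    var_x = np.clip(var_x, 0.0, None)                 # guard against negatives
    k = var_x / (mean**2 * sigma**2 + var_x + 1e-12)
    return mean + k * (dn - mean)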

The Sigma filter is based on the probability of a Gaussian distribution. It is assumed that 95.5% of random samples are within a 2 standard deviation (2 sigma) range. This noise suppression filter replaces the pixel of interest with the average of all DN values within the moving window that fall within the designated range.

As with all the radar speckle filters, you must specify a moving window size. The center pixel of the moving window is the pixel of interest. As with the Statistics filter, a coefficient of variation specific to the data set must be input. Finally, you must specify how many standard deviations to use (2, 1, or 0.5) to define the accepted range.

The statistical filters (Sigma and Statistics) are logically applicable to any data set for preprocessing. Any sensor system has various sources of noise, resulting in a few erratic pixels. In VIS/IR imagery, most natural scenes are found to follow a normal distribution of DN values, thus filtering at 2 standard deviations should remove this noise. This is particularly true of experimental sensor systems that frequently have significant noise problems.

These speckle filters can be used iteratively. You must view and evaluate the resultant image after each pass (the data histogram is useful for this), and then decide if another pass is appropriate and what parameters to use on the next pass. For example, three passes of the Sigma filter with the following parameters are very effective when used with any type of data:

Pass    Sigma Value    Sigma Multiplier    Window Size
1       0.26           0.5                 3 × 3
2       0.26           1                   5 × 5
3       0.26           2                   7 × 7

Similarly, there is no reason why successive passes must be of the same filter. The following sequence is useful prior to a classification:

Filter          Pass    Sigma Value    Sigma Multiplier    Window Size
Lee             1       0.26           NA                  3 × 3
Lee             2       0.26           NA                  5 × 5
Local Region    3       NA             NA                  5 × 5 or 7 × 7



With all speckle reduction filters there is a trade-off between noise reduction and loss of resolution. Each data set and each application have a different acceptable balance between these two factors. The ERDAS IMAGINE filters have been designed to be versatile and gentle in reducing noise (and resolution).

Frost Filter

The Frost filter is a minimum mean square error algorithm that adapts to the local statistics of the image. The local statistics serve as weighting parameters for the impulse response of the filter (moving window). This algorithm assumes that noise is multiplicative with stationary statistics. The formula used is:

DN = Σ (over the n × n window) K α e^(-α |t|)

Where:

α = (4 ÷ (n σ̄²)) × (σ² ÷ Ī²)

and
K = normalization constant
Ī = local mean
σ = local variance
σ̄ = image coefficient of variation value
|t| = |X - X0| + |Y - Y0|
n = moving window size

Source: Lopes et al, 1990



Gamma-MAP Filter

The Maximum A Posteriori (MAP) filter attempts to estimate the original pixel DN, which is assumed to lie between the local average and the degraded (actual) pixel DN. MAP logic maximizes the a posteriori probability density function with respect to the original image.

Many speckle reduction filters (for example, Lee, Lee-Sigma, Frost) assume a Gaussian distribution for the speckle noise. Recent work has shown this to be an invalid assumption. Natural vegetated areas have been shown to be more properly modeled as having a Gamma distributed cross section. This algorithm incorporates this assumption. The exact formula used is the cubic equation:

Î³ - Ī Î² + σ (Î - DN) = 0

Where:
Î = sought value
Ī = local mean
DN = input value
σ = original image variance

Source: Frost et al, 1982

Edge Detection

Edge and line detection are important operations in digital image processing. For example, geologists are often interested in mapping lineaments, which may be fault lines or bedding structures. For this purpose, edge and line detection are major enhancement techniques.

In selecting an algorithm, it is first necessary to understand the nature of what is being enhanced. Edge detection could imply amplifying an edge, a line, or a spot (see Figure 165).



Figure 165: One-dimensional, Continuous Edge, and Line Models

• Ramp edge—an edge modeled as a ramp, increasing in DN value from a low to a high level, or vice versa. Distinguished by DN change, slope, and slope midpoint.

• Step edge—a ramp edge with a slope angle of 90 degrees.

• Line—a region bounded on each end by an edge; width must be less than the moving window size.

• Roof edge—a line with a width near zero.

The models in Figure 165 represent ideal theoretical edges. However, real data values vary to produce a more distorted edge due to sensor noise or vibration (see Figure 166). There are no perfect edges in raster data, hence the need for edge detection algorithms.

Figure 166: A Noisy Edge Superimposed on an Ideal Edge

Edge detection algorithms can be broken down into 1st-order derivative and 2nd-order derivative operations. Figure 167 shows ideal one-dimensional edge and line intensity curves with the associated 1st-order and 2nd-order derivatives.



Figure 167: Edge and Line Derivatives

The 1st-order derivative kernel(s) derives from the simple Prewitt kernel:

         1  1  1                  1  0  -1
∂/∂x =   0  0  0    and  ∂/∂y =   1  0  -1
        -1 -1 -1                  1  0  -1

The 2nd-order derivative kernel(s) derives from Laplacian operators:

           -1  2  -1                    -1  -1  -1
∂²/∂x² =   -1  2  -1    and  ∂²/∂y² =    2   2   2
           -1  2  -1                    -1  -1  -1

1st-Order Derivatives (Prewitt)

The ERDAS IMAGINE Radar Interpreter utilizes sets of template matching operators. These operators approximate to the eight possible compass orientations (North, South, East, West, Northeast, Northwest, Southeast, Southwest). The compass names indicate the slope direction creating maximum response. (Gradient kernels with zero weighting, that is, the sum of the kernel coefficients is zero, have no output in uniform regions.) The detected edge is orthogonal to the gradient direction.

To avoid positional shift, all operating windows are odd number arrays, with the center pixel being the pixel of interest. Extension of the 3 × 3 impulse response arrays to a larger size is not clear cut—different authors suggest different lines of rationale. For example, it may be advantageous to extend the 3-level (Prewitt, 1970) to:

1  1  0  -1  -1
1  1  0  -1  -1
1  1  0  -1  -1
1  1  0  -1  -1
1  1  0  -1  -1



or the following might be beneficial:

2  1  0  -1  -2          4  2  0  -2  -4
2  1  0  -1  -2          4  2  0  -2  -4
2  1  0  -1  -2    or    4  2  0  -2  -4
2  1  0  -1  -2          4  2  0  -2  -4
2  1  0  -1  -2          4  2  0  -2  -4

Larger template arrays provide a greater noise immunity, but are computationally more demanding.

Zero-Sum Filters

A common type of edge detection kernel is a zero-sum filter. For this type of filter, the coefficients are designed to add up to zero. Following are examples of two zero-sum filters:

          1  0  -1               -1  -2  -1
Sobel =   2  0  -2                0   0   0
          1  0  -1                1   2   1
         (vertical)             (horizontal)

            1  0  -1             -1  -1  -1
Prewitt =   1  0  -1              0   0   0
            1  0  -1              1   1   1
           (vertical)           (horizontal)

Prior to edge enhancement, you should reduce speckle noise by using the ERDAS IMAGINE Radar Interpreter Speckle Suppression function.
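A minimal sketch applying the two Sobel kernels above and combining them into a gradient magnitude (SciPy convolution; illustrative only, not the Radar Interpreter code):

import numpy as np
from scipy import ndimage

SOBEL_V = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]])
SOBEL_H = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])

def sobel_edges(image):
    """Gradient magnitude from the vertical and horizontal Sobel responses."""
    img = image.astype(np.float64)
    gv = ndimage.convolve(img, SOBEL_V)
    gh = ndimage.convolve(img, SOBEL_H)
    return np.hypot(gv, gh)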



2nd-Order Derivatives (Laplacian Operators)

The second category of edge enhancers is 2nd-order derivative or Laplacian operators. These are best for line (or spot) detection as distinct from ramp edges. ERDAS IMAGINE Radar Interpreter offers two such arrays:

Unweighted line:

-1  2  -1
-1  2  -1
-1  2  -1

Weighted line:

-1  2  -1
-2  4  -2
-1  2  -1

Source: Pratt, 1991

Some researchers have found that a combination of 1st- and 2nd-order derivative images produces the best output. See Eberlein and Weszka (1975) for information about subtracting the 2nd-order derivative (Laplacian) image from the 1st-order derivative image (gradient).

Texture

According to Pratt (1991), "Many portions of images of natural scenes are devoid of sharp edges over large areas. In these areas the scene can often be characterized as exhibiting a consistent structure analogous to the texture of cloth. Image texture measurements can be used to segment an image and classify its segments."

As an enhancement, texture is particularly applicable to radar data, although it may be applied to any type of data with varying results. For example, it has been shown (Blom and Daily, 1982) that a three-layer variance image using 15 × 15, 31 × 31, and 61 × 61 windows can be combined into a three-color RGB (red, green, blue) image that is useful for geologic discrimination. The same could apply to a vegetation classification. You could also prepare a three-color image using three different functions operating through the same (or different) size moving window(s). However, each data set and application would need different moving window sizes and/or texture measures to maximize the discrimination.

1– 2 1–1– 2 1–1– 2 1–

1– 2 1–2– 4 2–1– 2 1–


Radar Texture Analysis

While texture analysis has been useful in the enhancement of VIS/IR image data, it is showing even greater applicability to radar imagery. In part, this stems from the nature of the imaging process itself.

The interaction of radar waves with the surface of interest is dominated by reflection involving the surface roughness at the wavelength scale. In VIS/IR imaging, the phenomenon involved is absorption at the molecular level. Also, as we know from array-type antennae, radar is especially sensitive to regularity that is a multiple of its wavelength. This provides a more precise method for quantifying the character of texture in a radar return. The ability to use radar data to detect texture and provide topographic information about an image is a major advantage over other types of imagery, where texture is not a quantitative characteristic.

The texture transforms can be used in several ways to enhance the use of radar imagery. Adding the radar intensity image as an additional layer in a (vegetation) classification is fairly straightforward and may be useful. However, the proper texture image (function and window size) can greatly increase the discrimination. Using known test sites, you can experiment to discern which texture image best aids the classification; the chosen texture image could then be added as an additional layer to the TM bands.

As radar data come into wider use, other mathematical texture definitions may prove useful and will be added to the ERDAS IMAGINE Radar Interpreter. In practice, you interactively decide which algorithm and window size are best for your data and application.

Texture Analysis Algorithms

While texture has typically been a qualitative measure, it can be enhanced with mathematical algorithms. Many algorithms appear in the literature for specific applications (Haralick, 1979; Irons and Petersen, 1981). The algorithms incorporated into ERDAS IMAGINE are those that are applicable in a wide variety of situations and are not computationally over-demanding. This latter point becomes critical as the moving window size increases; research has shown that very large moving windows are often needed for proper enhancement. For example, Blom and Daily (1982) use up to a 61 × 61 window. Four algorithms are currently utilized for texture enhancement in ERDAS IMAGINE:

• mean Euclidean distance (1st-order)


• variance (2nd-order)

• skewness (3rd-order)

• kurtosis (4th-order)

Mean Euclidean Distance

These algorithms are shown below (Irons and Petersen, 1981):

$$\text{Mean Euclidean Distance} = \frac{\left[\sum_{ij}\sum_{\lambda}\left(x_{c\lambda} - x_{ij\lambda}\right)^2\right]^{\frac{1}{2}}}{n-1}$$

Where:
x_ijλ = DN value for spectral band λ and pixel (i,j) of a multispectral image
x_cλ = DN value for spectral band λ of a window’s center pixel
n = number of pixels in a window

Variance

$$\text{Variance} = \frac{\sum\left(x_{ij} - M\right)^2}{n-1}$$

Where:
x_ij = DN value of pixel (i,j)
n = number of pixels in a window
M = Mean of the moving window, where:

$$M = \frac{\sum x_{ij}}{n}$$

Skewness

$$\text{Skew} = \frac{\sum\left(x_{ij} - M\right)^3}{\left(n-1\right)V^{\frac{3}{2}}}$$


Where:
x_ij = DN value of pixel (i,j)
n = number of pixels in a window
M = Mean of the moving window (see above)
V = Variance (see above)

Kurtosis

$$\text{Kurtosis} = \frac{\sum\left(x_{ij} - M\right)^4}{\left(n-1\right)V^2}$$

Where:
x_ij = DN value of pixel (i,j)
n = number of pixels in a window
M = Mean of the moving window (see above)
V = Variance (see above)
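As a rough illustration of how such a measure is computed (a minimal NumPy sketch, not the ERDAS IMAGINE implementation; the window size and brute-force loop are assumptions for clarity), the variance measure above can be evaluated over a moving window like this:

```python
import numpy as np

def variance_texture(image, window=7):
    """Moving-window variance texture: sum((x - M)^2) / (n - 1),
    where M is the window mean and n the number of window pixels."""
    pad = window // 2
    padded = np.pad(image.astype(float), pad, mode="reflect")
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            w = padded[i:i + window, j:j + window]
            out[i, j] = ((w - w.mean()) ** 2).sum() / (w.size - 1)
    return out

# Example: texture is high along a step edge, low in uniform areas
img = np.zeros((20, 20))
img[:, 10:] = 100.0
tex = variance_texture(img, window=7)
print(tex[10, 2], tex[10, 10])   # ~0 in the flat region, large near the edge
```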

Texture analysis is available from the Texture function in Image Interpreter and from the ERDAS IMAGINE Radar Interpreter Texture Analysis function.

Radiometric Correction: Radar Imagery

The raw radar image frequently contains radiometric errors due to:

• imperfections in the transmit and receive pattern of the radar antenna

• errors due to the coherent pulse (that is, speckle)

• the inherently stronger signal from a near range (closest to the sensor flight path) than a far range (farthest from the sensor flight path) target

Many imaging radar systems use a single antenna that transmits the coherent radar burst and receives the return echo. However, no antenna is perfect; it may have various lobes, dead spots, and imperfections. This causes the received signal to be slightly distorted radiometrically. In addition, range fall-off causes far range targets to be darker (less return signal).


These two problems can be addressed by adjusting the average brightness of each range line to a constant—usually the average overall scene brightness (Chavez and Berlin, 1986). This requires that each line of constant range be long enough to reasonably approximate the overall scene brightness (see Figure 168). This approach is generic; it is not specific to any particular radar sensor.

The Adjust Brightness function in ERDAS IMAGINE works by correcting each range line average. For this to be a valid approach, the number of data values must be large enough to provide good average values. Be careful not to use too small an image. This depends upon the character of the scene itself.

Figure 168: Adjust Brightness Function

[Figure 168 illustrates rows of image data, where a_x is the average data value of row x. The row averages are combined, (a1 + a2 + a3 + a4 + ...)/x = overall average, and each line is scaled by the calibration coefficient = overall average / a_x. A subset that is too small would not give an accurate average for correcting the entire scene.]
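A minimal sketch of this per-line correction (assuming each image row is a line of constant range; an illustration, not the ERDAS IMAGINE Adjust Brightness code):

```python
import numpy as np

def adjust_brightness(image):
    """Scale each line so its average equals the overall scene average
    (multiplicative calibration coefficient per line)."""
    image = image.astype(float)
    overall = image.mean()                 # overall scene brightness
    line_avg = image.mean(axis=1)          # a_x: average of each line
    coeff = overall / line_avg             # calibration coefficient per line
    return image * coeff[:, np.newaxis]

# Simulated range fall-off: far-range lines darker than near-range lines
rng = np.random.default_rng(0)
scene = rng.uniform(80, 120, size=(100, 200))
scene *= np.linspace(1.0, 0.6, 100)[:, np.newaxis]
corrected = adjust_brightness(scene)
print(corrected.mean(axis=1)[:3])   # each line average equals the overall mean
```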

Range Lines/Lines of Constant Range

Lines of constant range are not the same thing as range lines:

• Range lines—lines that are perpendicular to the flight of the sensor

• Lines of constant range—lines that are parallel to the flight of the sensor

• Range direction—same as range lines

Because radiometric errors are a function of the imaging geometry, the image must be correctly oriented during the correction process. For the algorithm to correctly address the data set, you must tell ERDAS IMAGINE whether the lines of constant range are in columns or rows in the displayed image. Figure 169 shows the lines of constant range in columns, parallel to the sides of the display screen:


Figure 169: Range Lines vs. Lines of Constant Range

[Figure 169 depicts the display screen with the flight (azimuth) direction running parallel to the sides of the screen: lines of constant range appear as columns, and range lines (the range direction) run perpendicular to them.]

Merging Radar with VIS/IR Imagery

As previously discussed, the phenomena involved in radar imaging are quite different from those in VIS/IR imaging. Because these two sensor types give different information about the same target (chemical vs. physical), they are complementary data sets. If the two images are correctly combined, the resultant image conveys both chemical and physical information and could prove more useful than either image alone.

The methods for merging radar and VIS/IR data are still experimental and open for exploration. The following methods are suggested for experimentation:

• Codisplaying in a View

• RGB to IHS transforms

• Principal components transform

• Multiplicative

The ultimate goal of enhancement is not mathematical or logical purity; it is feature extraction. There are currently no rules to suggest which options yield the best results for a particular application; you must experiment. The option that proves to be most useful depends upon the data sets (both radar and VIS/IR), your experience, and your final objective.


Codisplaying

The simplest and most frequently used method of combining radar with VIS/IR imagery is codisplaying on an RGB color monitor. In this technique, the radar image is displayed with one gun (typically the red one), while the green and blue guns display VIS/IR bands or band ratios. This technique follows from no logical model and does not truly merge the two data sets.

Use the Viewer with the Clear Display option disabled for this type of merge. Select the color guns to display the different layers.

RGB to IHS Transforms

Another common technique uses the RGB to IHS transforms. In this technique, an RGB color composite of bands (or band derivatives, such as ratios) is transformed into IHS color space. The intensity component is replaced by the radar image, and the scene is reverse transformed. This technique integrally merges the two data types.

For more information, see RGB to IHS on page 498.
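As a rough sketch of the substitution step (here HSV value stands in for IHS intensity via matplotlib's color routines; the real RGB to IHS transform and any rescaling depend on your software):

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def ihs_merge(vis_rgb, radar):
    """Replace the intensity of a VIS/IR composite with the radar image.
    vis_rgb: (rows, cols, 3) array scaled to [0, 1]
    radar:   (rows, cols) array scaled to [0, 1]
    HSV 'value' is used here as a stand-in for IHS intensity."""
    hsv = rgb_to_hsv(vis_rgb)
    hsv[..., 2] = radar          # swap in the radar image as intensity
    return hsv_to_rgb(hsv)       # reverse transform back to RGB

# Example with random stand-in data
rng = np.random.default_rng(1)
vis = rng.uniform(size=(64, 64, 3))
radar = rng.uniform(size=(64, 64))
merged = ihs_merge(vis, radar)
```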

Principal Components Transform

A similar image merge involves utilizing the PC transformation of the VIS/IR image. With this transform, more than three components can be used. These are converted to a series of principal components. The first PC, PC-1, is generally accepted to correlate with overall scene brightness. This value is replaced by the radar image and the reverse transform is applied.

For more information, see Principal Components Analysis on page 492.
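A minimal sketch of the PC-1 substitution (NumPy only; rescaling the radar image to PC-1's statistics before substitution is an assumption, not a prescribed step):

```python
import numpy as np

def pc_merge(vis_stack, radar):
    """PC merge: forward PCA on the VIS/IR bands, replace PC-1 with the
    radar image, then apply the inverse transform.
    vis_stack: (bands, rows, cols); radar: (rows, cols)."""
    b, r, c = vis_stack.shape
    flat = vis_stack.reshape(b, -1).astype(float)
    mean = flat.mean(axis=1, keepdims=True)
    eigvals, eigvecs = np.linalg.eigh(np.cov(flat))
    eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]   # PC-1 first
    pcs = eigvecs.T @ (flat - mean)                   # forward transform
    # Scale radar to PC-1's mean and standard deviation (an assumption)
    rad = radar.ravel().astype(float)
    rad = (rad - rad.mean()) / rad.std() * pcs[0].std() + pcs[0].mean()
    pcs[0] = rad                                      # substitute PC-1
    merged = eigvecs @ pcs + mean                     # inverse transform
    return merged.reshape(b, r, c)

rng = np.random.default_rng(6)
vis = rng.normal(100, 25, size=(4, 64, 64))
radar = rng.normal(80, 15, size=(64, 64))
merged = pc_merge(vis, radar)
```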

Multiplicative

A final method to consider is the multiplicative technique. This requires several chromatic components and a multiplicative component, which is assigned to the image intensity. In practice, the chromatic components are usually band ratios or PCs; the radar image is input multiplicatively as intensity (Croft (Holcomb), 1993).


The two sensor merge models that use transforms to integrate the two data sets (PC and RGB to IHS) are based on the assumption that the radar intensity correlates with the intensity that the transform derives from the data inputs. However, the logic of mathematically merging radar with VIS/IR data sets is inherently different from the logic of the SPOT/TM merges (as discussed in Resolution Merge on page 480). It cannot be assumed that the radar intensity is a surrogate for, or equivalent to, the VIS/IR intensity. The acceptability of this assumption depends on the specific case.

For example, Landsat TM imagery is often used to aid in mineral exploration. A common display for this purpose is RGB = TM5/TM7, TM5/TM4, TM3/TM1; the logic being that if all three ratios are high, the sites suited for mineral exploration are bright overall. If the target area is accompanied by silicification, which results in an area of dense angular rock, this should be the case. However, if the alteration zone is basaltic rock to kaolinite/alunite, then the radar return could be weaker than the surrounding rock. In this case, radar would not correlate with high 5/7, 5/4, 3/1 intensity, and the substitution would not produce the desired results (Croft (Holcomb), 1993).


Classification

Introduction

Multispectral classification is the process of sorting pixels into a finite number of individual classes, or categories of data, based on their data file values. If a pixel satisfies a certain set of criteria, the pixel is assigned to the class that corresponds to those criteria. This process is also referred to as image segmentation.

Depending on the type of information you want to extract from the original data, classes may be associated with known features on the ground or may simply represent areas that look different to the computer. An example of a classified image is a land cover map showing vegetation, bare land, pasture, and urban areas.

The Classification Process

Pattern Recognition

Pattern recognition is the science—and art—of finding meaningful patterns in data, which can be extracted through classification. By spatially and spectrally enhancing an image, pattern recognition can be performed with the human eye; the human brain automatically sorts certain textures and colors into categories.

In a computer system, spectral pattern recognition can be more scientific. Statistics are derived from the spectral characteristics of all pixels in an image. Then, the pixels are sorted based on mathematical criteria. The classification process breaks down into two parts: training and classifying (using a decision rule).

Training

First, the computer system must be trained to recognize patterns in the data. Training is the process of defining the criteria by which these patterns are recognized (Hord, 1982). Training can be performed with either a supervised or an unsupervised method, as explained below.

Supervised Training

Supervised training is closely controlled by the analyst. In this process, you select pixels that represent patterns or land cover features that you recognize, or that you can identify with help from other sources, such as aerial photos, ground truth data, or maps. Knowledge of the data, and of the classes desired, is required before classification. By identifying patterns, you can instruct the computer system to identify pixels with similar characteristics. If the classification is accurate, the resulting classes represent the categories within the data that you originally identified.


Unsupervised Training

Unsupervised training is more computer-automated. It enables you to specify some parameters that the computer uses to uncover statistical patterns that are inherent in the data. These patterns do not necessarily correspond to directly meaningful characteristics of the scene, such as contiguous, easily recognized areas of a particular soil type or land use. They are simply clusters of pixels with similar spectral characteristics. In some cases, it may be more important to identify groups of pixels with similar spectral characteristics than it is to sort pixels into recognizable categories.

Unsupervised training is dependent upon the data itself for the definition of classes. This method is usually used when less is known about the data before classification. It is then the analyst’s responsibility, after classification, to attach meaning to the resulting classes (Jensen, 1996). Unsupervised classification is useful only if the classes can be appropriately interpreted.

Signatures

The result of training is a set of signatures that defines a training sample or cluster. Each signature corresponds to a class, and is used with a decision rule (explained below) to assign the pixels in the image file to a class. Signatures in ERDAS IMAGINE can be parametric or nonparametric.

A parametric signature is based on statistical parameters (e.g., mean and covariance matrix) of the pixels that are in the training sample or cluster. Supervised and unsupervised training can generate parametric signatures. A set of parametric signatures can be used to train a statistically-based classifier (e.g., maximum likelihood) to define the classes.

A nonparametric signature is not based on statistics, but on discrete objects (polygons or rectangles) in a feature space image. These feature space objects are used to define the boundaries for the classes. A nonparametric classifier uses a set of nonparametric signatures to assign pixels to a class based on their location, either inside or outside the area in the feature space image. Supervised training is used to generate nonparametric signatures (Kloer, 1994).

ERDAS IMAGINE enables you to generate statistics for a nonparametric signature. This function allows a feature space object to be used to create a parametric signature from the image being classified. However, since a parametric classifier requires a normal distribution of data, the only feature space object for which this would be mathematically valid would be an ellipse (Kloer, 1994).

When both parametric and nonparametric signatures are used to classify an image, you are more able to analyze and visualize the class definitions than either type of signature provides independently (Kloer, 1994).


See "Math Topics" on page 697 for information on feature space images and how they are created.

Decision Rule

After the signatures are defined, the pixels of the image are sorted into classes based on the signatures by use of a classification decision rule. The decision rule is a mathematical algorithm that, using data contained in the signature, performs the actual sorting of pixels into distinct class values.

Parametric Decision Rule

A parametric decision rule is trained by the parametric signatures. These signatures are defined by the mean vector and covariance matrix for the data file values of the pixels in the signatures. When a parametric decision rule is used, every pixel is assigned to a class, since the parametric decision space is continuous (Kloer, 1994).

Nonparametric Decision Rule

A nonparametric decision rule is not based on statistics; therefore, it is independent of the properties of the data. If a pixel is located within the boundary of a nonparametric signature, then this decision rule assigns the pixel to the signature’s class. Basically, a nonparametric decision rule determines whether the pixel is located inside or outside a nonparametric signature’s boundary.

Output File

When classifying an image file, the output file is an image file with a thematic raster layer. This file automatically contains the following data:

• class values

• class names

• color table

• statistics

• histogram

The image file also contains any signature attributes that were selected in the ERDAS IMAGINE Supervised Classification utility.

The class names, values, and colors can be set with the Signature Editor or the Raster Attribute Editor.


Classification Tips

Classification Scheme

Usually, classification is performed with a set of target classes in mind. Such a set is called a classification scheme (or classification system). The purpose of such a scheme is to provide a framework for organizing and categorizing the information that can be extracted from the data (Jensen et al, 1983). The proper classification scheme includes classes that are both important to the study and discernible from the data on hand. Most schemes have a hierarchical structure, which can describe a study area in several levels of detail.

A number of classification schemes have been developed by specialists who have inventoried a geographic region. Some references for professionally developed schemes are listed below:

• Anderson, J.R., et al. 1976. “A Land Use and Land Cover Classification System for Use with Remote Sensor Data.” U.S. Geological Survey Professional Paper 964.

• Cowardin, Lewis M., et al. 1979. Classification of Wetlands and Deepwater Habitats of the United States. Washington, D.C.: U.S. Fish and Wildlife Service.

• Florida Topographic Bureau, Thematic Mapping Section. 1985. Florida Land Use, Cover and Forms Classification System. Florida Department of Transportation, Procedure No. 550-010-001-a.

• Michigan Land Use Classification and Reference Committee. 1975. Michigan Land Cover/Use Classification System. Lansing, Michigan: State of Michigan Office of Land Use.

Other states or government agencies may also have specialized land use/cover studies. It is recommended that you begin the classification process by defining a classification scheme for the application, using previously developed schemes, like those above, as a general framework.

Iterative Classification

A process is iterative when it repeats an action. The objective of the ERDAS IMAGINE system is to enable you to iteratively create and refine signatures and classified image files to arrive at a desired final classification. The ERDAS IMAGINE classification utilities are tools to be used as needed, not a numbered list of steps that must always be followed in order.

The total classification can be achieved with either the supervised or unsupervised methods, or a combination of both. Some examples are below:

• Signatures created from both supervised and unsupervised training can be merged and appended together.


• Signature evaluation tools can be used to indicate which signatures are spectrally similar. This helps to determine which signatures should be merged or deleted. These tools also help define optimum band combinations for classification. Using the optimum band combination may reduce the time required to run a classification process.

• Since classifications (supervised or unsupervised) can be based on a particular area of interest (either defined in a raster layer or an .aoi layer), signatures and classifications can be generated from previous classification results.

Supervised vs. Unsupervised Training

In supervised training, it is important to have a set of desired classes in mind, and then create the appropriate signatures from the data. You must also have some way of recognizing pixels that represent the classes that you want to extract. Supervised classification is usually appropriate when you want to identify relatively few classes, when you have selected training sites that can be verified with ground truth data, or when you can identify distinct, homogeneous regions that represent each class. On the other hand, if you want the classes to be determined by spectral distinctions that are inherent in the data so that you can define the classes later, then the application is better suited to unsupervised training. Unsupervised training enables you to define many classes easily, and identify classes that are not in contiguous, easily recognized regions.

NOTE: Supervised classification also includes using a set of classes that is generated from an unsupervised classification. Using a combination of supervised and unsupervised classification may yield optimum results, especially with large data sets (e.g., multiple Landsat scenes). For example, unsupervised classification may be useful for generating a basic set of classes, then supervised classification can be used for further definition of the classes.

Classifying Enhanced Data

For many specialized applications, classifying data that have been merged or spectrally enhanced—with principal components, image algebra, or other transformations—can produce very specific and meaningful results. However, unless you understand the data and the enhancements used, it is recommended that only the original, remotely sensed data be classified.

Dimensionality

Dimensionality refers to the number of layers being classified. For example, a data file with 3 layers is said to be 3-dimensional, since 3-dimensional feature space is plotted to analyze the data.

Feature space and dimensionality are discussed in "Math Topics" on page 697.


Adding Dimensions

Using programs in ERDAS IMAGINE, you can add layers to existing image files. Therefore, you can incorporate data (called ancillary data) other than remotely sensed data into the classification. Using ancillary data enables you to incorporate variables into the classification from, for example, vector layers, previously classified data, or elevation data. The data file values of the ancillary data become an additional feature of each pixel, thus influencing the classification (Jensen, 1996).

Limiting Dimensions

Although ERDAS IMAGINE allows an unlimited number of layers of data to be used for one classification, it is usually wise to reduce the dimensionality of the data as much as possible. Often, certain layers of data are redundant or extraneous to the task at hand. Unnecessary data take up valuable disk space and cause the computer system to perform more arduous calculations, which slows down processing.

Use the Signature Editor to evaluate separability to calculate the best subset of layer combinations. Use the Image Interpreter functions to merge or subset layers. Use the Image Information tool (on the Viewer’s tool bar) to delete a layer(s).

Supervised Training

Supervised training requires a priori (already known) information about the data, such as:

• What type of classes need to be extracted? Soil type? Land use? Vegetation?

• What classes are most likely to be present in the data? That is, which types of land cover, soil, or vegetation (or whatever) are represented by the data?

In supervised training, you rely on your own pattern recognition skills and a priori knowledge of the data to help the system determine the statistical criteria (signatures) for data classification. To select reliable samples, you should know some information—either spatial or spectral—about the pixels that you want to classify.


The location of a specific characteristic, such as a land cover type, may be known through ground truthing. Ground truthing refers to the acquisition of knowledge about the study area from field work, analysis of aerial photography, personal experience, etc. Ground truth data are considered to be the most accurate (true) data available about the area of study. They should be collected at the same time as the remotely sensed data, so that the data correspond as much as possible (Star and Estes, 1990). However, some ground data may not be very accurate due to a number of errors and inaccuracies.

Training Samples and Feature Space Objects

Training samples (also called samples) are sets of pixels that represent what is recognized as a discernible pattern, or potential class. The system calculates statistics from the sample pixels to create a parametric signature for the class. The following terms are sometimes used interchangeably in reference to training samples. For clarity, they are used in this documentation as follows:

• Training sample, or sample, is a set of pixels selected to represent a potential class. The data file values for these pixels are used to generate a parametric signature.

• Training field, or training site, is the geographical AOI in the image represented by the pixels in a sample. Usually, it is previously identified with the use of ground truth data.

Feature space objects are user-defined AOIs in a feature space image. The feature space signature is based on these objects.

Selecting Training Samples

It is important that training samples be representative of the class that you are trying to identify. This does not necessarily mean that they must contain a large number of pixels or be dispersed across a wide region of the data. The selection of training samples depends largely upon your knowledge of the data, of the study area, and of the classes that you want to extract. ERDAS IMAGINE enables you to identify training samples using one or more of the following methods:

• using a vector layer

• defining a polygon in the image

• identifying a training sample of contiguous pixels with similar spectral characteristics

• identifying a training sample of contiguous pixels within a certain area, with or without similar spectral characteristics


• using a class from a thematic raster layer from an image file of the same area (i.e., the result of an unsupervised classification)

Digitized Polygon

Training samples can be identified by their geographical location (training sites, using maps, ground truth data). The locations of the training sites can be digitized from maps with the ERDAS IMAGINE Vector or AOI tools. Polygons representing these areas are then stored as vector layers. The vector layers can then be used as input to the AOI tools and used as training samples to create signatures.

Use the Vector and AOI tools to digitize training samples from a map. Use the Signature Editor to create signatures from training samples that are identified with digitized polygons.

User-defined Polygon

Using your pattern recognition skills (with or without supplemental ground truth information), you can identify samples by examining a displayed image of the data and drawing a polygon around the training site(s) of interest. For example, if it is known that oak trees reflect certain frequencies of green and infrared light according to ground truth data, you may be able to base your sample selections on the data (taking atmospheric conditions, sun angle, time, date, and other variations into account). The area within the polygon(s) would be used to create a signature.

Use the AOI tools to define the polygon(s) to be used as the training sample. Use the Signature Editor to create signatures from training samples that are identified with the polygons.

Identify Seed Pixel

With the Seed Properties dialog and AOI tools, the cursor (crosshair) can be used to identify a single pixel (seed pixel) that is representative of the training sample. This seed pixel is used as a model pixel, against which the pixels that are contiguous to it are compared based on parameters specified by you. When one or more of the contiguous pixels is accepted, the mean of the sample is calculated from the accepted pixels. Then, the pixels contiguous to the sample are compared in the same way. This process repeats until no pixels that are contiguous to the sample satisfy the spectral parameters. In effect, the sample grows outward from the model pixel with each iteration. These homogeneous pixels are converted from individual raster pixels to a polygon and used as an AOI layer.
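A sketch of this region-growing behavior (a simple breadth-first version in NumPy; the spectral distance threshold is an assumed parameter, and the real tool also supports geographic constraints, as described below):

```python
import numpy as np
from collections import deque

def grow_from_seed(image, seed, max_dist=10.0):
    """Grow a training sample outward from a seed pixel: a neighbor is
    accepted while its spectral distance to the current sample mean is
    within max_dist; the mean is updated as pixels are accepted.
    image: (bands, rows, cols); seed: (row, col)."""
    _, rows, cols = image.shape
    accepted = np.zeros((rows, cols), dtype=bool)
    accepted[seed] = True
    total = image[:, seed[0], seed[1]].astype(float).copy()
    count = 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols and not accepted[rr, cc]:
                pixel = image[:, rr, cc].astype(float)
                if np.linalg.norm(pixel - total / count) <= max_dist:
                    accepted[rr, cc] = True
                    total += pixel
                    count += 1
                    queue.append((rr, cc))
    return accepted   # boolean AOI mask of the grown sample

rng = np.random.default_rng(7)
img = rng.normal(100, 2, size=(3, 40, 40))
mask = grow_from_seed(img, seed=(20, 20), max_dist=8.0)
print(mask.sum(), "pixels accepted")
```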


Select the Seed Properties option in the Viewer to identify training samples with a seed pixel.

Seed Pixel Method with Spatial Limits

The training sample identified with the seed pixel method can be limited to a particular region by defining the geographic distance and area.

Vector layers (polygons or lines) can be displayed as the top layer in the Viewer, and the boundaries can then be used as an AOI for training samples defined under Seed Properties.

Thematic Raster Layer

A training sample can be defined by using class values from a thematic raster layer (see Table 51). The data file values in the training sample are used to create a signature. The training sample can be defined by as many class values as desired.

NOTE: The thematic raster layer must have the same coordinate system as the image file being classified.

Table 51: Training Sample Comparison

Method: Digitized Polygon
Advantages: precise map coordinates, represents known ground information
Disadvantages: may overestimate class variance, time-consuming

Method: User-defined Polygon
Advantages: high degree of user control
Disadvantages: may overestimate class variance, time-consuming

Method: Seed Pixel
Advantages: auto-assisted, less time-consuming
Disadvantages: may underestimate class variance

Method: Thematic Raster Layer
Advantages: allows iterative classifying
Disadvantages: must have previously defined thematic layer


Evaluating Training Samples

Selecting training samples is often an iterative process. To generate signatures that accurately represent the classes to be identified, you may have to repeatedly select training samples, evaluate the signatures that are generated from the samples, and then either take new samples or manipulate the signatures as necessary. Signature manipulation may involve merging, deleting, or appending from one file to another. It is also possible to perform a classification using the known signatures, then mask out areas that are not classified, and use those areas to gather more signatures.

See Evaluating Signatures on page 565 for methods of determining the accuracy of the signatures created from your training samples.

Selecting Feature Space Objects

The ERDAS IMAGINE Feature Space tools enable you to interactively define feature space objects (AOIs) in the feature space image(s). A feature space image is simply a graph of the data file values of one band of data against the values of another band (often called a scatterplot). In ERDAS IMAGINE, a feature space image has the same data structure as a raster image; therefore, feature space images can be used with other ERDAS IMAGINE utilities, including zoom, color level slicing, virtual roam, Spatial Modeler, and Map Composer.

Figure 170: Example of a Feature Space Image

The transformation of a multilayer raster image into a feature space image is done by mapping the input pixel values to a position in the feature space image. This transformation defines only the pixel position in the feature space image. It does not define the pixel’s value. The pixel values in the feature space image can be the accumulated frequency, which is calculated when the feature space image is defined. The pixel values can also be provided by a thematic raster layer of the same geometry as the source multilayer image. Mapping a thematic layer into a feature space image can be useful for evaluating the validity of the parametric and nonparametric decision boundaries of a classification (Kloer, 1994).
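The mapping described above amounts to a two-dimensional histogram; a minimal sketch (assuming 8-bit data and accumulated frequency as the cell value):

```python
import numpy as np

def feature_space_image(band1, band2, bins=256):
    """Build a feature space image: each input pixel's (band1, band2)
    value pair selects a position; the cell value is the accumulated
    frequency of pixels mapping there."""
    hist, _, _ = np.histogram2d(band1.ravel(), band2.ravel(),
                                bins=bins, range=[[0, 255], [0, 255]])
    return hist   # bright cells = high density of pixels

rng = np.random.default_rng(2)
b1 = rng.normal(120, 20, size=(200, 200)).clip(0, 255)
b2 = (0.7 * b1 + rng.normal(0, 10, b1.shape)).clip(0, 255)
fsp = feature_space_image(b1, b2)
print(fsp.shape, fsp.max())   # 256 x 256 grid of frequencies
```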


When you display a feature space image file (.fsp.img) in a Viewer, the colors reflect the density of points for both bands. The bright tones represent a high density and the dark tones represent a low density.

Create Nonparametric Signature

You can define a feature space object (AOI) in the feature space image and use it directly as a nonparametric signature. Since the Viewers for the feature space image and the image being classified are both linked to the ERDAS IMAGINE Signature Editor, it is possible to mask AOIs from the image being classified to the feature space image, and vice versa. You can also directly link a cursor in the image Viewer to the feature space Viewer. These functions help determine a location for the AOI in the feature space image.

A single feature space image, but multiple AOIs, can be used to define the signature. This signature is taken within the feature space image, not the image being classified. The pixels in the image that correspond to the data file values in the signature (that is, the feature space object) are assigned to that class.

One fundamental difference between using the feature space image to define a training sample and the other traditional methods is that the result is a nonparametric signature. The decisions made in the classification process have no dependency on the statistics of the pixels. This helps improve classification accuracies for specific nonnormal classes, such as urban and exposed rock (Faust et al, 1991).

See Feature Space Images on page 708 for more information.


Figure 171: Process for Defining a Feature Space Object

[Figure 171 outlines the process: display the image file to be classified in a Viewer (layers 3, 2, 1); create a feature space image from the image file being classified (layer 1 vs. layer 2); draw an AOI (feature space object) around the desired area in the feature space image. Once you have a desired AOI, it can be used as a signature; a decision rule then analyzes each pixel in the image file being classified, and the pixels with the corresponding data file values are assigned to that class.]

Evaluate Feature Space Signatures

Using the Feature Space tools, it is also possible to use a feature space signature to generate a mask. Once it is defined as a mask, the pixels under the mask are identified in the image file and highlighted in the Viewer. The image displayed in the Viewer must be the image from which the feature space image was created. This process helps you to visually analyze the correlations between various spectral bands to determine which combination of bands brings out the desired features in the image.

You can have as many feature space images with different band combinations as desired. Any polygon or rectangle in these feature space images can be used as a nonparametric signature. However, only one feature space image can be used per signature. The polygons in the feature space image can be easily modified and/or masked until the desired regions of the image have been identified.

Advantages and disadvantages of feature space signatures:

Advantages:
• Provide an accurate way to classify a class with a nonnormal distribution (e.g., residential and urban).
• Certain features may be more visually identifiable in a feature space image.
• The classification decision process is fast.

Disadvantages:
• The classification decision process allows overlap and unclassified pixels.
• The feature space image may be difficult to interpret.

Use the Feature Space tools in the Signature Editor to create a feature space image and mask the signature. Use the AOI tools to draw polygons.


Unsupervised Training

Unsupervised training requires only minimal initial input from you. However, you have the task of interpreting the classes that are created by the unsupervised training algorithm. Unsupervised training is also called clustering, because it is based on the natural groupings of pixels in image data when they are plotted in feature space. According to the specified parameters, these groups can later be merged, disregarded, otherwise manipulated, or used as the basis of a signature.

See Feature Space on page 708 for more information.

Clusters

Clusters are defined with a clustering algorithm, which often uses all or many of the pixels in the input data file for its analysis. The clustering algorithm has no regard for the contiguity of the pixels that define each cluster.

• The Iterative Self-Organizing Data Analysis Technique (ISODATA) (Tou and Gonzalez, 1974) clustering method uses spectral distance as in the sequential method, but iteratively classifies the pixels, redefines the criteria for each class, and classifies again, so that the spectral distance patterns in the data gradually emerge.

• The RGB clustering method is more specialized than the ISODATA method. It applies to three-band, 8-bit data. RGB clustering plots pixels in three-dimensional feature space, and divides that space into sections that are used to define clusters.

Each of these methods is explained below, along with its advantages and disadvantages.


Some of the statistics terms used in this section are explained in "Math Topics" on page 697.

ISODATA Clustering

ISODATA is iterative in that it repeatedly performs an entire classification (outputting a thematic raster layer) and recalculates statistics. Self-Organizing refers to the way in which it locates clusters with minimum user input. The ISODATA method uses minimum spectral distance to assign a cluster for each candidate pixel. The process begins with a specified number of arbitrary cluster means or the means of existing signatures, and then it processes repetitively, so that those means shift to the means of the clusters in the data. Because the ISODATA method is iterative, it is not biased to the top of the data file, as are the one-pass clustering algorithms.

Use the Unsupervised Classification utility in the Signature Editor to perform ISODATA clustering.

ISODATA Clustering Parameters

To perform ISODATA clustering, you specify:

• N - the maximum number of clusters to be considered. Since each cluster is the basis for a class, this number becomes the maximum number of classes to be formed. The ISODATA process begins by determining N arbitrary cluster means. Some clusters with too few pixels can be eliminated, leaving fewer than N clusters.

• T - a convergence threshold, which is the percentage of pixels whose class values must remain unchanged between iterations for the process to stop.

• M - the maximum number of iterations to be performed.

Initial Cluster Means

On the first iteration of the ISODATA algorithm, the means of N clusters can be arbitrarily determined. After each iteration, a new mean for each cluster is calculated, based on the actual spectral locations of the pixels in the cluster, instead of the initial arbitrary calculation. Then, these new means are used for defining clusters in the next iteration. The process continues until there is little change between iterations (Swain, 1973).

The initial cluster means are distributed in feature space along a vector that runs between the point at spectral coordinates

(μ1-σ1, μ2-σ2, μ3-σ3, ... μn-σn)


and the coordinates (μ1+σ1, μ2+σ2, μ3+σ3, ... μn+σn)

Such a vector in two dimensions is illustrated in Figure 172. The initial cluster means are evenly distributed between

(μA-σA, μB-σB) and (μA+σA, μB+σB)

Figure 172: ISODATA Arbitrary Clusters

[Figure 172 shows five arbitrary cluster means distributed in two-dimensional spectral space (Band A vs. Band B data file values) along the vector from (μA−σA, μB−σB) to (μA+σA, μB+σB).]

Pixel Analysis

Pixels are analyzed beginning with the upper left corner of the image and going left to right, block by block. The spectral distance between the candidate pixel and each cluster mean is calculated. The pixel is assigned to the cluster whose mean is the closest. The ISODATA function creates an output image file with a thematic raster layer and/or a signature file (.sig) as a result of the clustering. At the end of each iteration, an image file exists that shows the assignments of the pixels to the clusters. Considering the regular, arbitrary assignment of the initial cluster means, the first iteration of the ISODATA algorithm always gives results similar to those in Figure 173.


Figure 173: ISODATA First Pass

For the second iteration, the means of all clusters are recalculated, causing them to shift in feature space. The entire process is repeated—each candidate pixel is compared to the new cluster means and assigned to the closest cluster mean.

Figure 174: ISODATA Second Pass

Percentage Unchanged

After each iteration, the normalized percentage of pixels whose assignments are unchanged since the last iteration is displayed in the dialog. When this number reaches T (the convergence threshold), the program terminates. It is possible for the percentage of unchanged pixels to never converge or reach T. Therefore, it may be beneficial to monitor the percentage, or to specify a reasonable maximum number of iterations, M, so that the program does not run indefinitely.
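Putting the pieces together, a compact sketch of the ISODATA loop described above (pure NumPy; cluster elimination and other refinements are omitted, and the parameter values are illustrative assumptions):

```python
import numpy as np

def isodata(pixels, n_clusters=5, max_iter=20, conv_threshold=0.95):
    """pixels: (num_pixels, num_bands) array of data file values.
    Returns final cluster assignments and cluster means."""
    mu, sigma = pixels.mean(axis=0), pixels.std(axis=0)
    # Initial means evenly spaced along the vector (mu - sigma) .. (mu + sigma)
    t = np.linspace(-1.0, 1.0, n_clusters)[:, None]
    means = mu + t * sigma
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(max_iter):
        # Assign each pixel to the cluster with minimum spectral distance
        dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        # Convergence: fraction of pixels whose assignment is unchanged
        if (new_labels == labels).mean() >= conv_threshold:
            labels = new_labels
            break
        labels = new_labels
        # Recompute each cluster mean from its assigned pixels
        for k in range(n_clusters):
            if np.any(labels == k):
                means[k] = pixels[labels == k].mean(axis=0)
    return labels, means

rng = np.random.default_rng(8)
data = np.vstack([rng.normal(40, 5, (300, 3)), rng.normal(120, 5, (300, 3))])
labels, means = isodata(data, n_clusters=4)
```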


Principal Component Method

Whereas clustering creates signatures depending on pixels’ spectral reflectance by adding pixels together, the principal component method actually subtracts pixels. Principal Components Analysis (PCA) is a method of data compression. With it, you can eliminate data that is redundant by compacting it into fewer bands. The resulting bands are noncorrelated and independent. You may find these bands more interpretable than the source data. PCA can be performed on up to 256 bands with ERDAS IMAGINE. Because it is a type of spectral enhancement, you are required to specify the number of components you want output from the original data.

Recommended Decision Rule

Although the ISODATA algorithm is the most similar to the minimum distance decision rule, the signatures can produce good results with any type of classification. Therefore, no particular decision rule is recommended over others. In most cases, the signatures created by ISODATA are merged, deleted, or appended to other signature sets. The image file created by ISODATA is the same as the image file that is created by a minimum distance classification, except for the nonconvergent pixels (100−T% of the pixels).

Advantages and disadvantages of ISODATA clustering:

Advantages:
• Because it is iterative, clustering is not geographically biased to the top or bottom pixels of the data file.
• This algorithm is highly successful at finding the spectral clusters that are inherent in the data. It does not matter where the initial arbitrary cluster means are located, as long as enough iterations are allowed.
• A preliminary thematic raster layer is created, which gives results similar to using a minimum distance classifier (as explained below) on the signatures that are created. This thematic raster layer can be used for analyzing and manipulating the signatures before actual classification takes place.

Disadvantages:
• The clustering process is time-consuming, because it can repeat many times.
• Does not account for pixel spatial homogeneity.


Use the Merge and Delete options in the Signature Editor to manipulate signatures.

Use the Unsupervised Classification utility in the Signature Editor to perform ISODATA clustering, generate signatures, and classify the resulting signatures.

RGB Clustering

The RGB Clustering and Advanced RGB Clustering functions in Image Interpreter create a thematic raster layer. However, no signature file is created and no other classification decision rule is used. In practice, RGB Clustering differs greatly from the other clustering methods, but it does employ a clustering algorithm.

RGB clustering is a simple classification and data compression technique for three bands of data. It is a fast and simple algorithm that quickly compresses a three-band image into a single-band pseudocolor image, without necessarily classifying any particular features.

The algorithm plots all pixels in 3-dimensional feature space and then partitions this space into clusters on a grid. In the simpler version of this function, each of these clusters becomes a class in the output thematic raster layer. The advanced version requires that a minimum threshold on the clusters be set, so that only clusters at least as large as the threshold become output classes. This allows for more color variation in the output file. Pixels that do not fall into any of the remaining clusters are assigned to the cluster with the smallest city-block distance from the pixel. In this case, the city-block distance is calculated as the sum of the distances in the red, green, and blue directions in 3-dimensional space.

Along each axis of the three-dimensional scatterplot, each input histogram is scaled so that the partitions divide the histograms between specified limits—either a specified number of standard deviations above and below the mean, or between the minimum and maximum data values for each band. The default number of divisions per band is listed below:

• Red is divided into 7 sections (32 for advanced version)

• Green is divided into 6 sections (32 for advanced version)

• Blue is divided into 6 sections (32 for advanced version)


Figure 175: RGB Clustering

[Figure 175 shows the input histograms (frequency vs. data file value) for the R, G, and B bands partitioned into sections; for example, the highlighted cluster contains pixels between 16 and 34 in red, between 35 and 55 in green, and between 0 and 16 in blue.]

Partitioning Parameters

It is necessary to specify the number of R, G, and B sections in each dimension of the 3-dimensional scatterplot. The number of sections should vary according to the histograms of each band. Broad histograms should be divided into more sections, and narrow histograms should be divided into fewer sections (see Figure 175).

It is possible to interactively change these parameters in the RGB Clustering function in the Image Interpreter. The number of classes is calculated based on the current parameters, and it displays on the command screen.

G

B

R

16

195

35

255

98

B

R

G

16

1634

55

35

016

35 195 25598

R

G

B

This cluster contains pixelsbetween 16 and 34 in RED,and between 35 and 55 inGREEN, and between 0 and16 in BLUE.

freq

uenc

y

input histograms

0

0

Advantages and disadvantages of RGB clustering:

Advantages:
• The fastest classification method. It is designed to provide a fast, simple classification for applications that do not require specific classes.
• Not biased to the top or bottom of the data file. The order in which the pixels are examined does not influence the outcome.
• (Advanced version only) A highly interactive function, allowing an iterative adjustment of the parameters until the number of clusters and the thresholds are satisfactory for analysis.

Disadvantages:
• Exactly three bands must be input, which is not suitable for all applications.
• Does not always create thematic classes that can be analyzed for informational purposes.


Tips

Some starting values that usually produce good results with the simple RGB clustering are:

R = 7, G = 6, B = 6

which results in 7 × 6 × 6 = 252 classes. To decrease the number of output colors/classes or to darken the output, decrease these values.

For the Advanced RGB clustering function, start with higher values for R, G, and B. Adjust by raising the threshold parameter and/or decreasing the R, G, and B parameter values until the desired number of output classes is obtained.
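A minimal sketch of the simple partitioning step (pure NumPy; scaling each band between its minimum and maximum is one of the two options described above, and the section counts are the defaults):

```python
import numpy as np

def rgb_cluster(red, green, blue, sections=(7, 6, 6)):
    """Simple RGB clustering: scale each band between its min and max,
    partition it into sections, and combine the three section indices
    into a single class value (up to 7 x 6 x 6 = 252 classes)."""
    def section_index(band, n):
        lo, hi = band.min(), band.max()
        idx = (band.astype(float) - lo) / (hi - lo) * n
        return np.clip(idx.astype(int), 0, n - 1)
    r = section_index(red, sections[0])
    g = section_index(green, sections[1])
    b = section_index(blue, sections[2])
    # One output class per occupied cell of the 3-D grid
    return r * sections[1] * sections[2] + g * sections[2] + b

rng = np.random.default_rng(3)
bands = rng.integers(0, 256, size=(3, 100, 100))
classes = rgb_cluster(*bands)
print(classes.min(), classes.max())   # class values in [0, 251]
```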

Signature Files

A signature is a set of data that defines a training sample, feature space object (AOI), or cluster. The signature is used in a classification process. Each classification decision rule (algorithm) requires some signature attributes as input—these are stored in the signature file (.sig). Signatures in ERDAS IMAGINE can be parametric or nonparametric. The following attributes are standard for all signatures (parametric and nonparametric):

• name—identifies the signature and is used as the class name in the output thematic raster layer. The default signature name is Class <number>.

• color—the color for the signature and the color for the class in the output thematic raster layer. This color is also used with other signature visualization functions, such as alarms, masking, ellipses, etc.

• value—the output class value for the signature. The output class value does not necessarily need to be the class number of the signature. This value should be a positive integer.


• order—the order to process the signatures for order-dependent processes, such as signature alarms and parallelepiped classifications.

• parallelepiped limits—the limits used in the parallelepiped classification.

Parametric Signature

A parametric signature is based on statistical parameters (e.g., mean and covariance matrix) of the pixels that are in the training sample or cluster. A parametric signature includes the following attributes in addition to the standard attributes for signatures:

• the number of bands in the input image (as processed by the training program)

• the minimum and maximum data file value in each band for each sample or cluster (minimum vector and maximum vector)

• the mean data file value in each band for each sample or cluster (mean vector)

• the covariance matrix for each sample or cluster

• the number of pixels in the sample or cluster

Nonparametric Signature

A nonparametric signature is based on an AOI that you define in the feature space image for the image file being classified. A nonparametric classifier uses a set of nonparametric signatures to assign pixels to a class based on their location, either inside or outside the area in the feature space image.

The format of the .sig file is described in the On-Line Help. Information on these statistics can be found in "Math Topics" on page 697.

Evaluating Signatures

Once signatures are created, they can be evaluated, deleted, renamed, and merged with signatures from other files. Merging signatures enables you to perform complex classifications with signatures that are derived from more than one training method (supervised and/or unsupervised, parametric and/or nonparametric).


Use the Signature Editor to view the contents of each signature, manipulate signatures, and perform your own mathematical tests on the statistics.

Using Signature Data

There are tests that can help determine whether the signature data are a true representation of the pixels to be classified for each class. You can evaluate signatures that were created either from supervised or unsupervised training. The evaluation methods in ERDAS IMAGINE include:

• Alarm—using your own pattern recognition ability, you view the estimated classified area for a signature (using the parallelepiped decision rule) against a display of the original image.

• Ellipse—view ellipse diagrams and scatterplots of data file values for every pair of bands.

• Contingency matrix—do a quick classification of the pixels in a set of training samples to see what percentage of the sample pixels are actually classified as expected. These percentages are presented in a contingency matrix. This method is for supervised training only, for which polygons of the training samples exist.

• Divergence—measure the divergence (statistical distance) between signatures and determine band subsets that maximize the classification.

• Statistics and histograms—analyze statistics and histograms of the signatures to make evaluations and comparisons.

NOTE: If the signature is nonparametric (i.e., a feature space signature), you can use only the alarm evaluation method.

After analyzing the signatures, it would be beneficial to merge or delete them, eliminate redundant bands from the data, add new bands of data, or perform any other operations to improve the classification.

Alarm

The alarm evaluation enables you to compare an estimated classification of one or more signatures against the original data, as it appears in the Viewer. According to the parallelepiped decision rule, the pixels that fit the classification criteria are highlighted in the displayed image. You also have the option to indicate an overlap by having it appear in a different color.

With this test, you can use your own pattern recognition skills, or some ground truth data, to determine the accuracy of a signature.
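A sketch of the parallelepiped test behind the alarm (the ±2 standard deviation limits are an assumed choice of parallelepiped limits, not the tool's fixed behavior):

```python
import numpy as np

def parallelepiped_alarm(image, mean, std, k=2.0):
    """Highlight pixels whose value in every band falls within the
    signature's parallelepiped limits (here mean +/- k standard deviations).
    image: (bands, rows, cols); mean, std: (bands,) signature statistics."""
    lo = (mean - k * std)[:, None, None]
    hi = (mean + k * std)[:, None, None]
    inside = (image >= lo) & (image <= hi)
    return inside.all(axis=0)   # True where the pixel would be alarmed

rng = np.random.default_rng(4)
img = rng.normal(100, 30, size=(4, 50, 50))
mask = parallelepiped_alarm(img, mean=np.full(4, 100.0), std=np.full(4, 10.0))
print(mask.mean())   # fraction of pixels satisfying the signature
```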


Use the Signature Alarm utility in the Signature Editor to perform n-dimensional alarms on the image in the Viewer, using the parallelepiped decision rule. The alarm utility creates a functional layer, and the Viewer allows you to toggle between the image layer and the functional layer.

Ellipse

In this evaluation, ellipses of concentration are calculated with the means and standard deviations stored in the signature file. The mean and the standard deviation of every signature are used to represent the ellipse in 2-dimensional feature space, and the ellipse is displayed in a feature space image. It is also possible to generate parallelepiped rectangles, means, and labels.

Ellipses are explained and illustrated in Feature Space Images on page 708 under the discussion of Scatterplots.

When the ellipses in the feature space image show extensive overlap, then the spectral characteristics of the pixels represented by the signatures cannot be distinguished in the two bands that are graphed. In the best case, there is no overlap. Some overlap, however, is expected. Figure 176 shows how ellipses are plotted and how they can overlap. The first graph shows how the ellipses are plotted based on the range of 2 standard deviations from the mean. This range can be altered, changing the ellipse plots. Analyzing the plots with differing numbers of standard deviations is useful for determining the limits of a parallelepiped classification.


Figure 176: Ellipse Evaluation of Signatures

[Figure 176 shows two band-pair plots of signature ellipses at ±2 standard deviations from the mean: in Bands A and B the ellipses of signature 1 and signature 2 overlap, while in Bands C and D they are distinct. μA1 = mean in Band A for signature 1, μA2 = mean in Band A for signature 2, etc.]

By analyzing the ellipse graphs for all band pairs, you can determine which signatures and which bands provide accurate classification results.

Use the Signature Editor to create a feature space image and to view an ellipse(s) of signature data.

Contingency Matrix

NOTE: This evaluation classifies all of the pixels in the selected AOIs and compares the results to the pixels of a training sample.

The pixels of each training sample are not always so homogeneous that every pixel in a sample is actually classified to its corresponding class. Each sample pixel only weights the statistics that determine the classes. However, if the signature statistics for each sample are distinct from those of the other samples, then a high percentage of each sample’s pixels is classified as expected. In this evaluation, a quick classification of the sample pixels is performed using the minimum distance, maximum likelihood, or Mahalanobis distance decision rule. Then, a contingency matrix is presented, which contains the number and percentages of pixels that are classified as expected.

Use the Signature Editor to perform the contingency matrix evaluation.
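A sketch of the underlying idea using a minimum distance rule (the sample data are assumed stand-ins):

```python
import numpy as np

def contingency_matrix(samples, labels, means):
    """Quick minimum distance classification of training sample pixels,
    tabulated against their expected classes.
    samples: (num_pixels, bands); labels: (num_pixels,) expected class ids;
    means: (num_classes, bands) signature mean vectors."""
    dists = np.linalg.norm(samples[:, None, :] - means[None, :, :], axis=2)
    predicted = dists.argmin(axis=1)
    n = len(means)
    matrix = np.zeros((n, n), dtype=int)
    for expected, got in zip(labels, predicted):
        matrix[expected, got] += 1
    return matrix   # row = expected class, column = assigned class

means = np.array([[50.0, 60.0], [120.0, 130.0]])
rng = np.random.default_rng(5)
samples = np.vstack([rng.normal(means[0], 10, (100, 2)),
                     rng.normal(means[1], 10, (100, 2))])
labels = np.repeat([0, 1], 100)
print(contingency_matrix(samples, labels, means))  # diagonal-heavy = good
```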


Separability

Signature separability is a statistical measure of distance between two signatures. Separability can be calculated for any combination of bands that is used in the classification, enabling you to rule out any bands that are not useful in the results of the classification.

For the distance (Euclidean) evaluation, the spectral distance between the mean vectors of each pair of signatures is computed. If the spectral distance between two samples is not significant for any pair of bands, then they may not be distinct enough to produce a successful classification. The spectral distance is also the basis of the minimum distance classification (as explained below). Therefore, computing the distances between signatures helps you predict the results of a minimum distance classification.

Use the Signature Editor to compute signature separability and distance and automatically generate the report.

The formulas used to calculate separability are related to the maximum likelihood decision rule. Therefore, evaluating signature separability helps you predict the results of a maximum likelihood classification. The maximum likelihood decision rule is explained below. There are three options for calculating the separability. All of these formulas take into account the covariances of the signatures in the bands being compared, as well as the mean vectors of the signatures.

Refer to "Math Topics" on page 697 for information on the mean vector and covariance matrix.

Divergence
The formula for computing Divergence (Dij) is as follows:

    Dij = 0.5 tr((Ci - Cj)(Cj^-1 - Ci^-1)) + 0.5 tr((Ci^-1 + Cj^-1)(μi - μj)(μi - μj)^T)

Where:
i and j = the two signatures (classes) being compared
Ci = the covariance matrix of signature i
μi = the mean vector of signature i
tr = the trace function (matrix algebra)
T = the transposition function

Source: Swain and Davis, 1978


Transformed Divergence
The formula for computing Transformed Divergence (TD) is as follows:

    Dij = 0.5 tr((Ci - Cj)(Cj^-1 - Ci^-1)) + 0.5 tr((Ci^-1 + Cj^-1)(μi - μj)(μi - μj)^T)

    TDij = 2000 (1 - exp(-Dij / 8))

Where:
i and j = the two signatures (classes) being compared
Ci = the covariance matrix of signature i
μi = the mean vector of signature i
tr = the trace function (matrix algebra)
T = the transposition function

Source: Swain and Davis, 1978

According to Jensen, the transformed divergence “gives an exponentially decreasing weight to increasing distances between the classes.” The scale of the divergence values can range from 0 to 2,000. Interpreting your results after applying transformed divergence requires you to analyze those numerical divergence values. As a general rule, if the result is greater than 1,900, then the classes can be separated. Between 1,700 and 1,900, the separation is fairly good. Below 1,700, the separation is poor (Jensen, 1996).

Jeffries-Matusita Distance
The formula for computing Jeffries-Matusita Distance (JM) is as follows:

    α = (1/8)(μi - μj)^T ((Ci + Cj)/2)^-1 (μi - μj) + (1/2) ln( |(Ci + Cj)/2| / sqrt(|Ci| × |Cj|) )

    JMij = sqrt(2 (1 - e^-α))

Where:
i and j = the two signatures (classes) being compared
Ci = the covariance matrix of signature i
μi = the mean vector of signature i
ln = the natural logarithm function
|Ci| = the determinant of Ci (matrix algebra)

Source: Swain and Davis, 1978


According to Jensen, “The JM distance has a saturating behavior with increasing class separation like transformed divergence. However, it is not as computationally efficient as transformed divergence” (Jensen, 1996).

Separability Both transformed divergence and Jeffries-Matusita distance have upper and lower bounds. If the calculated divergence is equal to the appropriate upper bound, then the signatures can be said to be totally separable in the bands being studied. A calculated divergence of zero means that the signatures are inseparable.

• TD is between 0 and 2000.

• JM is between 0 and 1414. That is, the JM values that IMAGINE reports are the values from the formula multiplied by 1,000.
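To make the three measures concrete, the following is a minimal NumPy sketch computing them for one pair of signatures, following the formulas above; the function names are illustrative, not part of ERDAS IMAGINE, and the JM value is scaled by 1,000 as in the IMAGINE report.

    import numpy as np

    def divergence(mu_i, mu_j, ci, cj):
        # Dij = 0.5 tr((Ci - Cj)(Cj^-1 - Ci^-1))
        #     + 0.5 tr((Ci^-1 + Cj^-1)(mui - muj)(mui - muj)^T)
        ci_inv, cj_inv = np.linalg.inv(ci), np.linalg.inv(cj)
        d = (mu_i - mu_j).reshape(-1, 1)
        return (0.5 * np.trace((ci - cj) @ (cj_inv - ci_inv))
                + 0.5 * np.trace((ci_inv + cj_inv) @ d @ d.T))

    def transformed_divergence(mu_i, mu_j, ci, cj):
        # TDij = 2000 (1 - exp(-Dij / 8)); ranges from 0 to 2000
        return 2000.0 * (1.0 - np.exp(-divergence(mu_i, mu_j, ci, cj) / 8.0))

    def jeffries_matusita(mu_i, mu_j, ci, cj):
        # JMij = sqrt(2 (1 - exp(-alpha))), reported here times 1000 (0 to 1414)
        d = (mu_i - mu_j).reshape(-1, 1)
        c_avg = 0.5 * (ci + cj)
        alpha = (0.125 * float(d.T @ np.linalg.inv(c_avg) @ d)
                 + 0.5 * np.log(np.linalg.det(c_avg)
                                / np.sqrt(np.linalg.det(ci) * np.linalg.det(cj))))
        return 1000.0 * np.sqrt(2.0 * (1.0 - np.exp(-alpha)))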

A separability listing is a report of the computed divergence for every class pair, for one band combination. The listing contains every divergence value for the bands studied for every possible pair of signatures. The separability listing also contains the average divergence and the minimum divergence for the band set. These numbers can be compared to other separability listings (for other band combinations) to determine which set of bands is the most useful for classification.

Weight Factors As with the Bayesian classifier (explained below with maximum likelihood), weight factors may be specified for each signature. These weight factors are based on a priori probabilities that any given pixel is assigned to each class. For example, if you know that twice as many pixels should be assigned to Class A as to Class B, then Class A should receive a weight factor that is twice that of Class B.

NOTE: The weight factors do not influence the divergence equations (for TD or JM), but they do influence the report of the best average and best minimum separability.

The weight factors for each signature are used to compute a weighted divergence with the following calculation:

    Wij = ( Σ(i=1..c-1) [ fi Σ(j=i+1..c) fj Uij ] ) / ( 0.5 [ (Σ(i=1..c) fi)^2 - Σ(i=1..c) fi^2 ] )

Where:
i and j = the two signatures (classes) being compared
Uij = the unweighted divergence between i and j
Wij = the weighted divergence between i and j
c = the number of signatures (classes)
fi = the weight factor for signature i
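A small sketch of this calculation, assuming the pairwise unweighted divergences Uij have already been computed; the names are illustrative only.

    import numpy as np
    from itertools import combinations

    def weighted_divergence(u, f):
        # u[i][j]: unweighted divergence Uij between signatures i and j
        # f: weight factor fi for each of the c signatures
        f = np.asarray(f, dtype=float)
        num = sum(f[i] * f[j] * u[i][j] for i, j in combinations(range(len(f)), 2))
        # denominator equals the sum of fi * fj over all pairs
        den = 0.5 * (f.sum() ** 2 - (f ** 2).sum())
        return num / den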

Probability of Error
The Jeffries-Matusita distance is related to the pairwise probability of error, which is the probability that a pixel assigned to class i is actually in class j. Within a range, this probability can be estimated according to the expression below:

    (1/16)(2 - JMij^2)^2 ≤ Pe ≤ 1 - (1/2)(1 + (1/2)JMij^2)

Where:
i and j = the signatures (classes) being compared
JMij = the Jeffries-Matusita distance between i and j
Pe = the probability that a pixel is misclassified from i to j

Source: Swain and Davis, 1978

Signature Manipulation In many cases, training must be repeated several times before the desired signatures are produced. Signatures can be gathered from different sources—different training samples, feature space images, and different clustering programs—all using different techniques. After each signature file is evaluated, you may merge, delete, or create new signatures. The desired signatures can finally be moved to one signature file to be used in the classification.


The following operations upon signatures and signature files are possible with ERDAS IMAGINE:

• View the contents of the signature statistics

• View histograms of the samples or clusters that were used to derive the signatures

• Delete unwanted signatures

• Merge signatures together, so that they form one larger class when classified

• Append signatures from other files. You can combine signatures that are derived from different training methods for use in one classification.

Use the Signature Editor to view statistics and histogram listings and to delete, merge, append, and rename signatures within a signature file.

Classification Decision Rules

Once a set of reliable signatures has been created and evaluated, the next step is to perform a classification of the data. Each pixel is analyzed independently. The measurement vector for each pixel is compared to each signature, according to a decision rule, or algorithm. Pixels that pass the criteria that are established by the decision rule are then assigned to the class for that signature. ERDAS IMAGINE enables you to classify the data both parametrically with statistical representation, and nonparametrically as objects in feature space. Figure 177 on page 575 shows the flow of an image pixel through the classification decision making process in ERDAS IMAGINE (Kloer, 1994).

If a nonparametric rule is not set, then the pixel is classified using only the parametric rule, and all of the parametric signatures are tested. If a nonparametric rule is set, the pixel is tested against all of the signatures with nonparametric definitions. This testing results in one of the following conditions:

• If the nonparametric test results in one unique class, the pixel is assigned to that class.

• If the nonparametric test results in zero classes (i.e., the pixel lies outside all the nonparametric decision boundaries), then the unclassified rule is applied. With this rule, the pixel is either classified by the parametric rule or left unclassified.


• If the pixel falls into more than one class as a result of the nonparametric test, the overlap rule is applied. With this rule, the pixel is either classified by the parametric rule, processing order, or left unclassified.

Nonparametric Rules ERDAS IMAGINE provides these decision rules for nonparametric signatures:

• parallelepiped

• feature space

Unclassified OptionsERDAS IMAGINE provides these options if the pixel is not classified by the nonparametric rule:

• parametric rule

• unclassified

Overlap OptionsERDAS IMAGINE provides these options if the pixel falls into more than one feature space object:

• parametric rule

• by order

• unclassified

Parametric Rules ERDAS IMAGINE provides these commonly-used decision rules for parametric signatures:

• minimum distance

• Mahalanobis distance

• maximum likelihood (with Bayesian variation)


Figure 177: Classification Flow Diagram

[The diagram traces a candidate pixel through the nonparametric rule: if the resulting number of classes is 1, the pixel receives that class assignment; if 0, the unclassified options (parametric rule or unclassified) apply; if greater than 1, the overlap options (parametric rule, by order, or unclassified) apply.]

Parallelepiped In the parallelepiped decision rule, the data file values of the candidate pixel are compared to upper and lower limits. These limits can be either:

• the minimum and maximum data file values of each band in the signature,

• the mean of each band, plus and minus a number of standard deviations, or

• any limits that you specify, based on your knowledge of the data and signatures. This knowledge may come from the signature evaluation techniques discussed above.


These limits can be set using the Parallelepiped Limits utility in the Signature Editor.

There are high and low limits for every signature in every band. When a pixel’s data file values are between the limits for every band in a signature, then the pixel is assigned to that signature’s class. Figure 178 is a two-dimensional example of a parallelepiped classification.

Figure 178: Parallelepiped Classification With Two Standard Deviations as Limits

[The plot shows Band A versus Band B data file values, with pixels labeled 1, 2, or 3 for classes 1 through 3 and ? for unclassified pixels; the class 2 parallelepiped spans μA2 ± 2s and μB2 ± 2s, where μA2 and μB2 are the class 2 means.]

The large rectangles in Figure 178 are called parallelepipeds. They are the regions within the limits for each signature.
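The test itself is simple. Below is a minimal sketch in Python, assuming the per-band limits have already been derived from the signature (for example, mean plus and minus two standard deviations); the names are illustrative, not ERDAS IMAGINE functions.

    import numpy as np

    def in_parallelepiped(pixel, low, high):
        # pixel, low, high: one value per band; the pixel is a candidate for the
        # class only when its data file value lies within the limits in every band
        return bool(np.all((pixel >= low) & (pixel <= high)))

    # e.g., hypothetical limits for one signature in two bands:
    print(in_parallelepiped(np.array([54, 87]), np.array([50, 80]), np.array([60, 95])))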

Overlap RegionIn cases where a pixel may fall into the overlap region of two or more parallelepipeds, you must define how the pixel can be classified.

• The pixel can be classified by the order of the signatures. If one of the signatures is first and the other signature is fourth, the pixel is assigned to the first signature’s class. This order can be set in the Signature Editor.


• The pixel can be classified by the defined parametric decision rule. The pixel is tested against the overlapping signatures only. If neither of these signatures is parametric, then the pixel is left unclassified. If only one of the signatures is parametric, then the pixel is automatically assigned to that signature’s class.

• The pixel can be left unclassified.

Regions Outside of the BoundariesIf the pixel does not fall into one of the parallelepipeds, then you must define how the pixel can be classified.

• The pixel can be classified by the defined parametric decision rule. The pixel is tested against all of the parametric signatures. If none of the signatures is parametric, then the pixel is left unclassified.

• The pixel can be left unclassified.

Use the Supervised Classification utility in the Signature Editor to perform a parallelepiped classification.

Advantages:

• Fast and simple, since the data file values are compared to limits that remain constant for each band in each signature.

• Often useful for a first-pass, broad classification; this decision rule quickly narrows down the number of possible classes to which each pixel can be assigned before the more time-consuming calculations (e.g., minimum distance, Mahalanobis distance, or maximum likelihood) are made, thus cutting processing time.

• Not dependent on normal distributions.

Disadvantages:

• Since parallelepipeds have corners, pixels that are actually quite far, spectrally, from the mean of the signature may be classified. An example of this is shown in Figure 179.


Figure 179: Parallelepiped Corners Compared to the Signature Ellipse

Feature Space The feature space decision rule determines whether or not a candidate pixel lies within the nonparametric signature in the feature space image. When a pixel’s data file values are in the feature space signature, then the pixel is assigned to that signature’s class. Figure 180 is a two-dimensional example of a feature space classification. The polygons in this figure are AOIs used to define the feature space signatures.

Figure 180: Feature Space Classification
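A minimal sketch of the membership test, assuming the AOI has been drawn as a polygon in a two-band feature space; the polygon vertices are hypothetical, and matplotlib's point-in-polygon test stands in for the IMAGINE implementation.

    from matplotlib.path import Path

    # Hypothetical AOI polygon drawn in (Band A, Band B) feature space
    aoi = Path([(40, 60), (90, 55), (100, 110), (50, 120)])

    def in_feature_space_signature(band_a_value, band_b_value):
        # The pixel belongs to the class when its data file values fall
        # inside the AOI polygon in the feature space image
        return aoi.contains_point((band_a_value, band_b_value))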

Overlap RegionIn cases where a pixel may fall into the overlap region of two or more AOIs, you must define how the pixel can be classified.


• The pixel can be classified by the order of the feature space signatures. If one of the signatures is first and the other signature is fourth, the pixel is assigned to the first signature’s class. This order can be set in the Signature Editor.

• The pixel can be classified by the defined parametric decision rule. The pixel is tested against the overlapping signatures only. If neither of these feature space signatures is parametric, then the pixel is left unclassified. If only one of the signatures is parametric, then the pixel is assigned automatically to that signature’s class.

• The pixel can be left unclassified.

Regions Outside of the AOIsIf the pixel does not fall into one of the AOIs for the feature space signatures, then you must define how the pixel can be classified.

• The pixel can be classified by the defined parametric decision rule. The pixel is tested against all of the parametric signatures. If none of the signatures is parametric, then the pixel is left unclassified.

• The pixel can be left unclassified.

Use the Decision Rules utility in the Signature Editor to perform a feature space classification.

Advantages:

• Often useful for a first-pass, broad classification.

• Provides an accurate way to classify a class with a nonnormal distribution (e.g., residential and urban).

• Certain features may be more visually identifiable, which can help discriminate between classes that are spectrally similar and hard to differentiate with parametric information.

• The feature space method is fast.

Disadvantages:

• The feature space decision rule allows overlap and unclassified pixels.

• The feature space image may be difficult to interpret.


Minimum Distance The minimum distance decision rule (also called spectral distance) calculates the spectral distance between the measurement vector for the candidate pixel and the mean vector for each signature.

Figure 181: Minimum Spectral Distance

In Figure 181 on page 580, spectral distance is illustrated by the lines from the candidate pixel to the means of the three signatures. The candidate pixel is assigned to the class with the closest mean. The equation for classifying by spectral distance is based on the equation for Euclidean distance:

    SDxyc = sqrt( Σ(i=1..n) (μci - Xxyi)^2 )

Where:
n = number of bands (dimensions)
i = a particular band
c = a particular class
Xxyi = data file value of pixel x,y in band i
μci = mean of data file values in band i for the sample for class c
SDxyc = spectral distance from pixel x,y to the mean of class c

Source: Swain and Davis, 1978


When spectral distance is computed for all possible values of c (all possible classes), the class of the candidate pixel is assigned to the class for which SD is the lowest.

Advantages:

• Since every pixel is spectrally closer to either one sample mean or another, there are no unclassified pixels.

• The fastest decision rule to compute, except for parallelepiped.

Disadvantages:

• Pixels that should be unclassified (i.e., they are not spectrally close to the mean of any sample, within limits that are reasonable to you) become classified. However, this problem is alleviated by thresholding out the pixels that are farthest from the means of their classes. (See the discussion on Thresholding on page 589.)

• Does not consider class variability. For example, a class like an urban land cover class is made up of pixels with a high variance, which may tend to be farther from the mean of the signature. Using this decision rule, outlying urban pixels may be improperly classified. Inversely, a class with less variance, like water, may tend to overclassify (that is, classify more pixels than are appropriate to the class), because the pixels that belong to the class are usually spectrally closer to their mean than those of other classes to their means.
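A minimal sketch of the minimum distance rule, assuming one mean vector per class; the names are illustrative, not ERDAS IMAGINE APIs.

    import numpy as np

    def minimum_distance_class(pixel, class_means):
        # pixel: measurement vector, one data file value per band
        # class_means: array of shape (classes, bands)
        sd = np.sqrt(((class_means - pixel) ** 2).sum(axis=1))  # Euclidean SD per class
        return int(np.argmin(sd))  # index of the class with the lowest SD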

Mahalanobis Distance

The Mahalanobis distance algorithm assumes that the histograms of the bands have normal distributions. If this is not the case, you may have better results with the parallelepiped or minimum distance decision rule, or by performing a first-pass parallelepiped classification.

Mahalanobis distance is similar to minimum distance, except that the covariance matrix is used in the equation. Variance and covariance are figured in so that clusters that are highly varied lead to similarly varied classes, and vice versa. For example, when classifying urban areas—typically a class whose pixels vary widely—correctly classified pixels may be farther from the mean than those of a class for water, which is usually not a highly varied class (Swain and Davis, 1978). The equation for the Mahalanobis distance classifier is as follows:

    D = (X - Mc)^T (Covc^-1) (X - Mc)


Where:
D = Mahalanobis distance
c = a particular class
X = the measurement vector of the candidate pixel
Mc = the mean vector of the signature of class c
Covc = the covariance matrix of the pixels in the signature of class c
Covc^-1 = inverse of Covc
T = transposition function

The pixel is assigned to the class, c, for which D is the lowest.

Advantages:

• Takes the variability of classes into account, unlike minimum distance or parallelepiped.

• May be more useful than minimum distance in cases where statistical criteria (as expressed in the covariance matrix) must be taken into account, but the weighting factors that are available with the maximum likelihood/Bayesian option are not needed.

Disadvantages:

• Tends to overclassify signatures with relatively large values in the covariance matrix. If there is a large dispersion of the pixels in a cluster or training sample, then the covariance matrix of that signature contains large values.

• Slower to compute than parallelepiped or minimum distance.

• Mahalanobis distance is parametric, meaning that it relies heavily on a normal distribution of the data in each input band.

Maximum Likelihood/Bayesian

The maximum likelihood algorithm assumes that the histograms of the bands of data have normal distributions. If this is not the case, you may have better results with the parallelepiped or minimum distance decision rule, or by performing a first-pass parallelepiped classification.


The maximum likelihood decision rule is based on the probability that a pixel belongs to a particular class. The basic equation assumes that these probabilities are equal for all classes, and that the input bands have normal distributions.

Bayesian Classifier If you have a priori knowledge that the probabilities are not equal for all classes, you can specify weight factors for particular classes. This variation of the maximum likelihood decision rule is known as the Bayesian decision rule (Hord, 1982). Unless you have a priori knowledge of the probabilities, it is recommended that they not be specified. In this case, these weights default to 1.0 in the equation. The equation for the maximum likelihood/Bayesian classifier is as follows:

    D = ln(ac) - [0.5 ln(|Covc|)] - [0.5 (X - Mc)^T (Covc^-1) (X - Mc)]

Where:
D = weighted distance (likelihood)
c = a particular class
X = the measurement vector of the candidate pixel
Mc = the mean vector of the sample of class c
ac = percent probability that any candidate pixel is a member of class c (defaults to 1.0, or is entered from a priori knowledge)
Covc = the covariance matrix of the pixels in the sample of class c
|Covc| = determinant of Covc (matrix algebra)
Covc^-1 = inverse of Covc (matrix algebra)
ln = natural logarithm function
T = transposition function (matrix algebra)

The inverse and determinant of a matrix, along with the difference and transposition of vectors, are explained in textbooks on matrix algebra. The pixel is assigned to the class, c, for which D, as defined here, is the largest; that is, the class with the greatest likelihood.

Advantages:

• The most accurate of the classifiers in the ERDAS IMAGINE system (if the input samples/clusters have a normal distribution), because it takes the most variables into consideration.

• Takes the variability of classes into account by using the covariance matrix, as does Mahalanobis distance.

Disadvantages:

• An extensive equation that takes a long time to compute. The computation time increases with the number of input bands.

• Maximum likelihood is parametric, meaning that it relies heavily on a normal distribution of the data in each input band.

• Tends to overclassify signatures with relatively large values in the covariance matrix. If there is a large dispersion of the pixels in a cluster or training sample, then the covariance matrix of that signature contains large values.
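Both parametric discriminants follow directly from the equations above. The sketch below assumes per-class means and covariance matrices taken from the signature statistics; the names are illustrative, not ERDAS IMAGINE APIs. Note that the Mahalanobis rule selects the smallest D, while the maximum likelihood/Bayesian rule, with D as defined above, selects the largest.

    import numpy as np

    def mahalanobis_distance(x, mean_c, cov_c):
        # D = (X - Mc)^T (Covc^-1) (X - Mc); assign to the class with the lowest D
        d = x - mean_c
        return float(d @ np.linalg.inv(cov_c) @ d)

    def bayesian_discriminant(x, mean_c, cov_c, a_c=1.0):
        # D = ln(ac) - 0.5 ln(|Covc|) - 0.5 (X - Mc)^T (Covc^-1) (X - Mc);
        # with equal priors (ac = 1.0) this reduces to maximum likelihood
        return (np.log(a_c)
                - 0.5 * np.log(np.linalg.det(cov_c))
                - 0.5 * mahalanobis_distance(x, mean_c, cov_c))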


Fuzzy Methodology

Fuzzy Classification The Fuzzy Classification method takes into account that there are pixels of mixed make-up, that is, a pixel cannot be definitively assigned to one category. Jensen notes that, “Clearly, there needs to be a way to make the classification algorithms more sensitive to the imprecise (fuzzy) nature of the real world” (Jensen, 1996). Fuzzy classification is designed to help you work with data that may not fall into exactly one category or another. Fuzzy classification works using a membership function, wherein a pixel’s value is determined by whether it is closer to one class than another. A fuzzy classification does not have definite boundaries, and each pixel can belong to several different classes (Jensen, 1996).

Like traditional classification, fuzzy classification still uses training, but the biggest difference is that “it is also possible to obtain information on the various constituent classes found in a mixed pixel. . .” (Jensen, 1996). Jensen goes on to explain that the process of collecting training sites in a fuzzy classification is not as strict as in a traditional classification. In the fuzzy method, the training sites do not have to have pixels that are exactly the same.

Once you have a fuzzy classification, the fuzzy convolution utility allows you to perform a moving window convolution on a fuzzy classification with multiple output class assignments. Using the multilayer classification and distance file, the convolution creates a new single-class output file by computing a total weighted distance for all classes in the window.

Fuzzy Convolution The Fuzzy Convolution operation creates a single classification layer by calculating the total weighted inverse distance of all the classes in a window of pixels. Then, it assigns the center pixel in the class with the largest total inverse distance summed over the entire set of fuzzy classification layers.


This has the effect of creating a context-based classification to reduce the speckle or salt and pepper in the classification. Classes with a very small distance value remain unchanged while classes with higher distance values may change to a neighboring value if there is a sufficient number of neighboring pixels with class values and small corresponding distance values. The following equation is used in the calculation:

    T[k] = Σ(i=0..s) Σ(j=0..s) Σ(l=0..n) ( wij / Dijl[k] )

Where:
i = row index of window
j = column index of window
s = size of window (3, 5, or 7)
l = layer index of fuzzy set
n = number of fuzzy layers used
W = weight table for window
k = class value
D[k] = distance file value for class k
T[k] = total weighted distance of window for class k

The center pixel is assigned the class with the maximum T[k].
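A simplified sketch of the computation for a single window, assuming matching stacks of fuzzy class layers and distance layers; edge handling, ties, and zero distances are ignored, and the names are illustrative only.

    import numpy as np

    def fuzzy_convolve_window(classes, distances, weights):
        # classes[l, i, j]: class value at window cell (i, j) of fuzzy layer l
        # distances[l, i, j]: corresponding distance file value Dijl[k]
        # weights[i, j]: weight table wij for the window
        totals = {}
        n_layers, s, _ = classes.shape
        for l in range(n_layers):
            for i in range(s):
                for j in range(s):
                    k = int(classes[l, i, j])
                    # accumulate the total weighted inverse distance T[k]
                    totals[k] = totals.get(k, 0.0) + weights[i, j] / distances[l, i, j]
        return max(totals, key=totals.get)  # class assigned to the center pixel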

Expert Classification

Expert classification can be performed using the IMAGINE Expert Classifier™. The expert classification software provides a rules-based approach to multispectral image classification, post-classification refinement, and GIS modeling. In essence, an expert classification system is a hierarchy of rules, or a decision tree, that describes the conditions under which a set of low level constituent information gets abstracted into a set of high level informational classes. The constituent information consists of user-defined variables and includes raster imagery, vector coverages, spatial models, external programs, and simple scalars.


A rule is a conditional statement, or list of conditional statements, about the variable’s data values and/or attributes that determine an informational component or hypothesis. Multiple rules and hypotheses can be linked together into a hierarchy that ultimately describes a final set of target informational classes or terminal hypotheses. Confidence values associated with each condition are also combined to provide a confidence image corresponding to the final output classified image.

The IMAGINE Expert Classifier is composed of two parts: the Knowledge Engineer and the Knowledge Classifier. The Knowledge Engineer provides the interface for an expert with first-hand knowledge of the data and the application to identify the variables, rules, and output classes of interest and create the hierarchical decision tree. The Knowledge Classifier provides an interface for a nonexpert to apply the knowledge base and create the output classification.

Knowledge Engineer With the Knowledge Engineer, you can open knowledge bases, which are presented as decision trees in editing windows.

Figure 182: Knowledge Engineer Editing Window

In Figure 182, the upper left corner of the editing window is an overview of the entire decision tree with a green box indicating the position within the knowledge base of the currently displayed portion of the decision tree. This box can be dragged to change the view of the decision tree graphic in the display window on the right. The branch containing the currently selected hypotheses, rule, or condition is highlighted in the overview.


The decision tree grows in depth when the hypothesis of one rule is referred to by a condition of another rule. The terminal hypotheses of the decision tree represent the final classes of interest. Intermediate hypotheses may also be flagged as being a class of interest. This may occur when there is an association between classes. Figure 183 represents a single branch of a decision tree depicting a hypothesis, its rule, and conditions.

Figure 183: Example of a Decision Tree Branch

[The branch shows the hypothesis Good Location determined by the rule Gentle Southern Slope, which has four conditions: Aspect > 135, Aspect <= 225, Slope < 12, and Slope > 0.]

In this example, the rule, Gentle Southern Slope, determines the hypothesis, Good Location. The rule has four conditions depicted on the right side, all of which must be satisfied for the rule to be true. However, the rule may be split so that either Southern Slope or Gentle Slope alone defines the Good Location hypothesis. While both of a rule's conditions must still be true to fire that rule, only one rule must be true to satisfy the hypothesis.

Figure 184: Split Rule Decision Tree Branch

[Here the Good Location hypothesis is satisfied by either of two rules: Southern Slope (Aspect > 135 and Aspect <= 225) or Gentle Slope (Slope < 12 and Slope > 0).]
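As a sketch of how such rules behave, the conditions from Figures 183 and 184 can be written directly as predicates; the thresholds are those shown in the figures, and the function names are illustrative.

    def southern_slope(aspect):
        return 135 < aspect <= 225

    def gentle_slope(slope):
        return 0 < slope < 12

    def good_location_single_rule(aspect, slope):
        # Figure 183: one rule, all four conditions must hold
        return southern_slope(aspect) and gentle_slope(slope)

    def good_location_split(aspect, slope):
        # Figure 184: either rule alone satisfies the hypothesis
        return southern_slope(aspect) or gentle_slope(slope)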

Variable Editor
The Knowledge Engineer also makes use of a Variable Editor when classifying images. The Variable Editor provides for the definition of the variable objects to be used in the rules’ conditions.


The two types of variables are raster and scalar. Raster variables may be defined by imagery, feature layers (including vector layers), graphic spatial models, or by running other programs. Scalar variables may be defined with an explicit value, or defined as the output from a model or external program.

Evaluating the Output of the Knowledge Engineer
The task of creating a useful, well-constructed knowledge base requires numerous iterations of trial, evaluation, and refinement. To facilitate this process, two options are provided. First, you can use the Test Classification to produce a test classification using the current knowledge base. Second, you can use the Classification Pathway Cursor to evaluate the results. This tool allows you to move a crosshair over the image in a Viewer to establish a confidence level for areas in the image.

Knowledge Classifier The Knowledge Classifier is composed of two parts: an application with a user interface, and a command line executable. The user interface application allows you to input a limited set of parameters to control the use of the knowledge base. The user interface is designed as a wizard to lead you through pages of input parameters. After selecting a knowledge base, you are prompted to select classes. The following is an example classes dialog:

Figure 185: Knowledge Classifier Classes of Interest

After you select the input data for classification, the classification output options, output files, output area, output cell size, and output map projection, the Knowledge Classifier process can begin. An inference engine then evaluates all hypotheses at each location (calculating variable values, if required), and assigns the hypothesis with the highest confidence. The output of the Knowledge Classifier is a thematic image, and optionally, a confidence image.


Evaluating Classification

After a classification is performed, these methods are available for testing the accuracy of the classification:

• Thresholding—Use a probability image file to screen out misclassified pixels.

• Accuracy Assessment—Compare the classification to ground truth or other data.

Thresholding Thresholding is the process of identifying the pixels in a classified image that are the most likely to be classified incorrectly. These pixels are put into another class (usually class 0). These pixels are identified statistically, based upon the distance measures that were used in the classification decision rule.

Distance File When a minimum distance, Mahalanobis distance, or maximum likelihood classification is performed, a distance image file can be produced in addition to the output thematic raster layer. A distance image file is a one-band, 32-bit offset continuous raster layer in which each data file value represents the result of a spectral distance equation, depending upon the decision rule used.

• In a minimum distance classification, each distance value is the Euclidean spectral distance between the measurement vector of the pixel and the mean vector of the pixel’s class.

• In a Mahalanobis distance or maximum likelihood classification, the distance value is the Mahalanobis distance between the measurement vector of the pixel and the mean vector of the pixel’s class.

The brighter pixels (with the higher distance file values) are spectrally farther from the signature means for the classes to which they are assigned, and are therefore more likely to be misclassified. The darker pixels are spectrally nearer, and more likely to be classified correctly. If supervised training was used, the darkest pixels are usually the training samples.


Figure 186: Histogram of a Distance Image

Figure 186 shows how the histogram of the distance image usually appears. This distribution is called a chi-square distribution, as opposed to a normal distribution, which is a symmetrical bell curve.

Threshold The pixels that are the most likely to be misclassified have the higher distance file values at the tail of this histogram. At some point that you define—either mathematically or visually—the tail of this histogram is cut off. The cutoff point is the threshold. To determine the threshold:

• interactively change the threshold with the mouse, when a distance histogram is displayed while using the threshold function. This option enables you to select a chi-square value by selecting the cut-off value in the distance histogram, or

• input a chi-square parameter or distance measurement, so that the threshold can be calculated statistically.

In both cases, thresholding has the effect of cutting the tail off of the histogram of the distance image file, representing the pixels with the highest distance values.


Figure 187: Interactive Thresholding Tips

Figure 187 shows some example distance histograms. With each example is an explanation of what the curve might mean, and how to threshold it:

• Smooth chi-square shape—try to find the breakpoint where the curve becomes more horizontal, and cut off the tail.

• Minor mode(s) (peaks) in the curve probably indicate that the class picked up other features that were not represented in the signature. You probably want to threshold these features out.

• Not a good class—the signature for this class probably represented a polymodal (multipeaked) distribution.

• Peak of the curve is shifted from 0—this indicates that the signature mean is off-center from the pixels it represents. You may need to take a new signature and reclassify.

Chi-square Statistics If the minimum distance classifier is used, then the threshold is simply a certain spectral distance. However, if Mahalanobis or maximum likelihood are used, then chi-square statistics are used to compare probabilities (Swain and Davis, 1978). When statistics are used to calculate the threshold, the threshold is more clearly defined as follows: T is the distance value at which C% of the pixels in a class have a distance value greater than or equal to T.

Where:
T = the threshold for a class
C% = the percentage of pixels that are believed to be misclassified, known as the confidence level


T is related to the distance values by means of chi-square statistics. The value X2 (chi-squared) is used in the equation. X2 is a function of:

• the number of bands of data used—known in chi-square statistics as the number of degrees of freedom

• the confidence level

When classifying an image in ERDAS IMAGINE, the classified image automatically has the degrees of freedom (i.e., number of bands) used for the classification. The chi-square table is built into the threshold application.
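As a sketch of the statistical option, the threshold can be looked up from the chi-square distribution; here SciPy's chi-square quantile function stands in for the table built into the threshold application, and the function name is illustrative.

    from scipy.stats import chi2

    def chi_square_threshold(num_bands, confidence_percent):
        # T such that about confidence_percent of a normally distributed class
        # has a distance value greater than or equal to T; degrees of freedom
        # equal the number of bands used in the classification
        return chi2.ppf(1.0 - confidence_percent / 100.0, df=num_bands)

    # e.g., threshold out the farthest 5% of each class in a 4-band classification
    T = chi_square_threshold(num_bands=4, confidence_percent=5)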

NOTE: In this application of chi-square statistics, the value of X2 is an approximation. Chi-square statistics are generally applied to independent variables (having no covariance), which is not usually true of image data.

A further discussion of chi-square statistics can be found in a statistics text.

Use the Classification Threshold utility to perform the thresholding.

Accuracy Assessment Accuracy assessment is a general term for comparing the classification to geographical data that are assumed to be true, in order to determine the accuracy of the classification process. Usually, the assumed-true data are derived from ground truth data. It is usually not practical to ground truth or otherwise test every pixel of a classified image. Therefore, a set of reference pixels is usually used. Reference pixels are points on the classified image for which actual data are (or will be) known. The reference pixels are randomly selected (Congalton, 1991).

NOTE: You can use the ERDAS IMAGINE Accuracy Assessment utility to perform an accuracy assessment for any thematic layer. This layer does not have to be classified by ERDAS IMAGINE (e.g., you can run an accuracy assessment on a thematic layer that was classified in ERDAS Version 7.5 and imported into ERDAS IMAGINE).

Page 623: ERDAS Field Guide

Classification 593

Random Reference Pixels
When reference pixels are selected by the analyst, it is often tempting to select the same pixels for testing the classification that were used in the training samples. This biases the test, since the training samples are the basis of the classification. By allowing the reference pixels to be selected at random, the possibility of bias is lessened or eliminated (Congalton, 1991).

The number of reference pixels is an important factor in determining the accuracy of the classification. It has been shown that more than 250 reference pixels are needed to estimate the mean accuracy of a class to within plus or minus five percent (Congalton, 1991). ERDAS IMAGINE uses a square window to select the reference pixels. The size of the window can be defined by you. Three different types of distribution are offered for selecting the random pixels:

• random—no rules are used

• stratified random—the number of points is stratified to the distribution of thematic layer classes

• equalized random—each class has an equal number of random points

Use the Accuracy Assessment utility to generate random reference points.

Accuracy Assessment CellArray
An Accuracy Assessment CellArray is created to compare the classified image with reference data. This CellArray is simply a list of class values for the pixels in the classified image file and the class values for the corresponding reference pixels. The class values for the reference pixels are input by you. The CellArray data reside in an image file.

Use the Accuracy Assessment CellArray to enter reference pixels for the class values.

Error Reports From the Accuracy Assessment CellArray, two kinds of reports can be derived.

Page 624: ERDAS Field Guide

594 Classification

• The error matrix simply compares the reference points to the classified points in a c × c matrix, where c is the number of classes (including class 0).

• The accuracy report calculates statistics of the percentages of accuracy, based upon the results of the error matrix.

When interpreting the reports, it is important to observe the percentage of correctly classified pixels and to determine the nature of the errors of omission and commission (the producer’s and user’s accuracy, respectively).

Use the Accuracy Assessment utility to generate the error matrix and accuracy reports.

Kappa Coefficient
The Kappa coefficient expresses the proportionate reduction in error generated by a classification process compared with the error of a completely random classification. For example, a value of 0.82 implies that the classification process is avoiding 82 percent of the errors that a completely random classification generates (Congalton, 1991).
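For reference, a minimal sketch of the standard Kappa computation from an error matrix; the orientation (rows as reference classes, columns as classified classes) and the function name are assumptions for illustration.

    import numpy as np

    def kappa_coefficient(error_matrix):
        m = np.asarray(error_matrix, dtype=float)
        n = m.sum()
        observed = np.trace(m) / n                          # overall agreement
        expected = (m.sum(axis=0) @ m.sum(axis=1)) / n**2   # chance agreement
        return (observed - expected) / (1.0 - expected)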


Photogrammetric Concepts

Introduction

What is Photogrammetry?

Photogrammetry is the “art, science and technology of obtaining reliable information about physical objects and the environment through the process of recording, measuring and interpreting photographic images and patterns of electromagnetic radiant imagery and other phenomena” (American Society of Photogrammetry, 1980).

Photogrammetry was invented in 1851 by Laussedat, and has continued to develop over the last 140 years. Over time, the development of photogrammetry has passed through the phases of plane table photogrammetry, analog photogrammetry, and analytical photogrammetry, and has now entered the phase of digital photogrammetry (Konecny, 1994).

The traditional, and largest, application of photogrammetry is to extract topographic information (e.g., topographic maps) from aerial images. However, photogrammetric techniques have also been applied to process satellite images and close range images in order to acquire topographic or nontopographic information about photographed objects.

Prior to the invention of the airplane, photographs taken on the ground were used to extract the relationships between objects using geometric principles. This was during the phase of plane table photogrammetry.

In analog photogrammetry, starting with stereomeasurement in 1901, optical or mechanical instruments were used to reconstruct three-dimensional geometry from two overlapping photographs. The main product during this phase was topographic maps.

In analytical photogrammetry, the computer replaced some expensive optical and mechanical components. The resulting devices were analog/digital hybrids. Analytical aerotriangulation, analytical plotters, and orthophoto projectors were the main developments during this phase. Outputs of analytical photogrammetry can be topographic maps, but can also be digital products, such as digital maps and DEMs.

Digital photogrammetry is photogrammetry as applied to digital images that are stored and processed on a computer. Digital images can be scanned from photographs or can be captured directly by digital cameras. Many photogrammetric tasks can be highly automated in digital photogrammetry (e.g., automatic DEM extraction and digital orthophoto generation). Digital photogrammetry is sometimes called softcopy photogrammetry. The output products are in digital form, such as digital maps, DEMs, and digital orthophotos saved on computer storage media, so they can be easily stored, managed, and applied. With the development of digital photogrammetry, photogrammetric techniques are more closely integrated into remote sensing and GIS.


Digital photogrammetric systems employ sophisticated software to automate the tasks associated with conventional photogrammetry, thereby minimizing the extent of manual interaction required to perform photogrammetric operations. LPS Project Manager is such a photogrammetric system.

Photogrammetry can be used to measure and interpret information from hardcopy photographs or images. The process of measuring information from photography and satellite imagery, such as creating DEMs, is considered metric photogrammetry. Interpreting information from photography and imagery, such as identifying and discriminating between various tree types as represented on a photograph or image, is considered interpretative photogrammetry (Wolf, 1983).

Types of Photographs and Images

The types of photographs and images that can be processed within IMAGINE LPS Project Manager include aerial, terrestrial, close range, and oblique.

Aerial or vertical (near vertical) photographs and images are taken from a high vantage point above the Earth’s surface. The camera axis of aerial or vertical photography is commonly directed vertically (or near vertically) down. Aerial photographs and images are commonly used for topographic and planimetric mapping projects, and are commonly captured from an aircraft or satellite.

Terrestrial or ground-based photographs and images are taken with the camera stationed on or close to the Earth’s surface. Terrestrial and close range photographs and images are commonly used for applications involving archeology, geomorphology, civil engineering, architecture, and industry.

Oblique photographs and images are similar to aerial photographs and images, except the camera axis is intentionally inclined at an angle with the vertical. Oblique photographs and images are commonly used for reconnaissance and corridor mapping applications.

Digital photogrammetric systems use digitized photographs or digital images as the primary source of input. Digital imagery can be obtained from various sources. These include:

• Digitizing existing hardcopy photographs

• Using digital cameras to record imagery

• Using sensors on board satellites such as Landsat and SPOT to record imagery


This document uses the term imagery in reference to photography and imagery obtained from various sources. This includes aerial and terrestrial photography, digital and video camera imagery, 35 mm photography, medium to large format photography, scanned photography, and satellite imagery.

Why use Photogrammetry?

As stated in the previous section, raw aerial photography and satellite imagery contain large geometric distortions caused by various systematic and nonsystematic factors. Photogrammetric modeling based on collinearity equations eliminates these errors most efficiently, and creates the most reliable orthoimages from the raw imagery. It is unique in terms of considering the image-forming geometry, utilizing information between overlapping images, and explicitly dealing with the third dimension: elevation.

In addition to orthoimages, photogrammetry can also provide other geographic information, such as a DEM, topographic features, and line maps, reliably and efficiently. In essence, photogrammetry produces accurate and precise geographic information from a wide range of photographs and images. Any measurement taken on a photogrammetrically processed photograph or image reflects a measurement taken on the ground. Rather than constantly going to the field to measure distances, areas, angles, and point positions on the Earth’s surface, photogrammetric tools allow for the accurate collection of information from imagery. Photogrammetric approaches for collecting geographic information save time and money, and maintain the highest accuracies.

Photogrammetry/ Conventional Geometric Correction

Conventional techniques of geometric correction such as polynomial transformation are based on general functions not directly related to the specific distortion or error sources. They have been successful in the field of remote sensing and GIS applications, especially when dealing with low resolution and narrow field of view satellite imagery such as Landsat and SPOT data (Yang, 1997). General functions have the advantage of simplicity. They can provide a reasonable geometric modeling alternative when little is known about the geometric nature of the image data.


However, conventional techniques generally process the images one at a time. They cannot provide an integrated solution for multiple images or photographs simultaneously and efficiently. It is very difficult, if not impossible, for conventional techniques to achieve a reasonable accuracy without a great number of GCPs when dealing with large-scale imagery, images having severe systematic and/or nonsystematic errors, and images covering rough terrain. Misalignment is more likely to occur when mosaicking separately rectified images. This misalignment can result in inaccurate geographic information being collected from the rectified images. Furthermore, it is impossible for a conventional technique to create a three-dimensional stereo model or to extract elevation information from two overlapping images. There is no way for conventional techniques to accurately derive geometric information about the sensor that captured the imagery.

Photogrammetric techniques overcome all of these problems by using least squares bundle block adjustment. This solution is integrated and accurate. IMAGINE LPS Project Manager can process hundreds of images or photographs with very few GCPs, while at the same time eliminating the misalignment problem associated with creating image mosaics. In short, less time, less money, and less manual effort are required, and greater geographic fidelity can be realized, using the photogrammetric solution.

Single Frame Orthorectification/Block Triangulation

Single frame orthorectification techniques orthorectify one image at a time using a technique known as space resection. In this respect, a minimum of three GCPs is required for each image. For example, in order to orthorectify 50 aerial photographs, a minimum of 150 GCPs is required. This includes manually identifying and measuring each GCP for each image individually. Once the GCPs are measured, space resection techniques compute the camera/sensor position and orientation as it existed at the time of data capture. This information, along with a DEM, is used to account for the negative impacts associated with geometric errors. Additional variables associated with systematic error are not considered.

Single frame orthorectification techniques do not utilize the internal relationship between adjacent images in a block to minimize and distribute the errors commonly associated with GCPs, image measurements, DEMs, and camera/sensor information. Therefore, during the mosaic procedure, misalignment between adjacent images is common since error has not been minimized and distributed throughout the block.


Aerial or block triangulation is the process of establishing a mathematical relationship between the images contained in a project, the camera or sensor model, and the ground. The information resulting from aerial triangulation is required as input for the orthorectification, DEM, and stereopair creation processes. The term aerial triangulation is commonly used when processing aerial photography and imagery. The term block triangulation, or simply triangulation, is used when processing satellite imagery. The techniques differ slightly as a function of the type of imagery being processed.

Classic aerial triangulation using optical-mechanical analog and analytical stereo plotters is primarily used for the collection of GCPs using a technique known as control point extension. Since the cost of collecting GCPs is very large, photogrammetric techniques are accepted as the ideal approach for collecting GCPs over large areas using photography rather than conventional ground surveying techniques. Control point extension involves the manual photo measurement of ground points appearing on overlapping images. These ground points are commonly referred to as tie points. Once the points are measured, the ground coordinates associated with the tie points can be determined using photogrammetric techniques employed by analog or analytical stereo plotters. These points are then referred to as ground control points (GCPs).

With the advent of digital photogrammetry, classic aerial triangulation has been extended to provide greater functionality. IMAGINE LPS Project Manager uses a mathematical technique known as bundle block adjustment for aerial triangulation. Bundle block adjustment provides three primary functions:

• To determine the position and orientation for each image in a project as they existed at the time of photographic or image exposure. The resulting parameters are referred to as exterior orientation parameters. In order to estimate the exterior orientation parameters, a minimum of three GCPs is required for the entire block, regardless of how many images are contained within the project.

• To determine the ground coordinates of any tie points manually or automatically measured on the overlap areas of multiple images. The highly precise ground point determination of tie points is useful for generating control points from imagery in lieu of ground surveying techniques. Additionally, if a large number of ground points is generated, then a DEM can be interpolated using the Create Surface tool in ERDAS IMAGINE.


• To minimize and distribute the errors associated with the imagery, image measurements, GCPs, and so forth. The bundle block adjustment processes information from an entire block of imagery in one simultaneous solution (i.e., a bundle) using statistical techniques (i.e., adjustment component) to automatically identify, distribute, and remove error.

Because the images are processed in one step, the misalignment issues associated with creating mosaics are resolved.

Image and Data Acquisition

During photographic or image collection, overlapping images are exposed along a direction of flight. Most photogrammetric applications involve the use of overlapping images. In using more than one image, the geometry associated with the camera/sensor, image, and ground can be defined to greater accuracies and precision. During the collection of imagery, each point in the flight path at which the camera exposes the film, or the sensor captures the imagery, is called an exposure station (see Figure 188).

Figure 188: Exposure Stations Along a Flight Path

Each photograph or image that is exposed has a corresponding image scale associated with it. The image scale expresses the average ratio between a distance in the image and the same distance on the ground. It is computed as focal length divided by the flying height above the mean ground elevation. For example, with a flying height of 1000 m and a focal length of 15 cm, the image scale (SI) would be 1:6667.

NOTE: The flying height above ground is used, versus the altitude above sea level.

A strip of photographs consists of images captured along a flight line, normally with an overlap of 60%. All photos in the strip are assumed to be taken at approximately the same flying height and with a constant distance between exposure stations. Camera tilt relative to the vertical is assumed to be minimal.


The photographs from several flight paths can be combined to form a block of photographs. A block of photographs consists of a number of parallel strips, normally with a sidelap of 20-30%. Block triangulation techniques are used to transform all of the images in a block and ground points into a homologous coordinate system. A regular block of photos is a rectangular block in which the number of photos in each strip is the same. Figure 189 shows a block of 5 × 2 photographs.

Figure 189: A Regular Rectangular Block of Aerial Photos

Photogrammetric Scanners

Photogrammetric quality scanners are special devices capable of high image quality and excellent positional accuracy. Use of this type of scanner results in geometric accuracies similar to traditional analog and analytical photogrammetric instruments. These scanners are necessary for digital photogrammetric applications that have high accuracy requirements. These units usually scan only film because film is superior to paper, both in terms of image detail and geometry. These units usually have a Root Mean Square Error (RMSE) positional accuracy of 4 microns or less, and are capable of scanning at a maximum resolution of 5 to 10 microns (5 microns is equivalent to approximately 5,000 pixels per inch). The required pixel resolution varies depending on the application. Aerial triangulation and feature collection applications often scan in the 10- to 15-micron range. Orthophoto applications often use 15- to 30-micron pixels. Color film is less sharp than panchromatic, therefore color ortho applications often use 20- to 40-micron pixels.

Desktop Scanners

Desktop scanners are general purpose devices. They lack the image detail and geometric accuracy of photogrammetric quality units, but they are much less expensive. When using a desktop scanner, you should make sure that the active area is at least 9 × 9 inches (i.e., an A3 type scanner), enabling you to capture the entire photo frame.



Desktop scanners are appropriate for less rigorous uses, such as digital photogrammetry in support of GIS or remote sensing applications. Calibrating these units improves geometric accuracy, but the results are still inferior to photogrammetric units. The image correlation techniques that are necessary for automatic tie point collection and elevation extraction are often sensitive to scan quality. Therefore, errors can be introduced into the photogrammetric solution that are attributable to scanning errors. IMAGINE LPS Project Manager accounts for systematic errors attributed to scanning errors.

Scanning Resolutions

One of the primary factors contributing to the overall accuracy of block triangulation and orthorectification is the resolution of the imagery being used. Image resolution is commonly determined by the scanning resolution (if film photography is being used) or by the pixel resolution of the sensor. In order to optimize the attainable accuracy of a solution, the scanning resolution must be considered. The appropriate scanning resolution is determined by balancing the accuracy requirements against the size of the mapping project and the time required to process the project. Table 52 lists the scanning resolutions associated with various scales of photography, along with the resulting image file sizes.

Table 52: Scanning Resolutions

All ground coverage values are in meters per pixel.

Photo Scale   12 microns    16 microns    25 microns    50 microns    85 microns
(1 to)        (2117 dpi1)   (1588 dpi)    (1016 dpi)    (508 dpi)     (300 dpi)
1800          0.0216        0.0288        0.045         0.09          0.153
2400          0.0288        0.0384        0.06          0.12          0.204
3000          0.036         0.048         0.075         0.15          0.255
3600          0.0432        0.0576        0.09          0.18          0.306
4200          0.0504        0.0672        0.105         0.21          0.357
4800          0.0576        0.0768        0.12          0.24          0.408
5400          0.0648        0.0864        0.135         0.27          0.459
6000          0.072         0.096         0.15          0.3           0.51
6600          0.0792        0.1056        0.165         0.33          0.561
7200          0.0864        0.1152        0.18          0.36          0.612
7800          0.0936        0.1248        0.195         0.39          0.663
8400          0.1008        0.1344        0.21          0.42          0.714
9000          0.108         0.144         0.225         0.45          0.765
9600          0.1152        0.1536        0.24          0.48          0.816
10800         0.1296        0.1728        0.27          0.54          0.918
12000         0.144         0.192         0.3           0.6           1.02
15000         0.18          0.24          0.375         0.75          1.275
18000         0.216         0.288         0.45          0.9           1.53
24000         0.288         0.384         0.6           1.2           2.04
30000         0.36          0.48          0.75          1.5           2.55
40000         0.48          0.64          1             2             3.4
50000         0.6           0.8           1.25          2.5           4.25
60000         0.72          0.96          1.5           3             5.1

B/W File Size (MB)     363     204     84      21      7
Color File Size (MB)   1089    612     252     63      21

1 dots per inch

The ground coverage column refers to the ground coverage per pixel. Thus, a 1:40000 scale photograph scanned at 25 microns [1016 dots per inch (dpi)] has a ground coverage per pixel of 1 m × 1 m. The resulting file size is approximately 85 MB, assuming a square 9 × 9 inch photograph.
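The arithmetic behind these figures can be sketched as follows (a minimal Python sketch; the function names are illustrative, not part of ERDAS IMAGINE):

    def ground_coverage_m(scale_denominator, scan_res_microns):
        """Ground distance covered by one scanned pixel, in meters."""
        return scale_denominator * scan_res_microns * 1e-6

    def bw_file_size_mb(photo_inches, scan_res_microns, bytes_per_pixel=1):
        """Approximate uncompressed file size for a square photo frame."""
        pixels_per_side = photo_inches * 25400.0 / scan_res_microns
        return pixels_per_side ** 2 * bytes_per_pixel / 1e6

    # 1:40000 photography scanned at 25 microns (1016 dpi):
    print(ground_coverage_m(40000, 25))   # 1.0 m per pixel
    print(bw_file_size_mb(9, 25))         # ~84 MB for a 9 x 9 inch frame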

Coordinate Systems

Conceptually, photogrammetry involves establishing the relationship between the camera or sensor used to capture the imagery, the imagery itself, and the ground. In order to understand and define this relationship, each of the three variables associated with the relationship must be defined with respect to a coordinate space and coordinate system.

Pixel Coordinate System

The file coordinates of a digital image are defined in a pixel coordinate system. A pixel coordinate system is usually a coordinate system with its origin in the upper-left corner of the image, the x-axis pointing to the right, the y-axis pointing downward, and the unit in pixels, as shown by axes c and r in Figure 190. These file coordinates (c, r) can also be thought of as the pixel column and row number. This coordinate system is referenced as pixel coordinates (c, r) in this chapter.



Figure 190: Pixel Coordinates and Image Coordinates

Image Coordinate System

An image coordinate system, or image plane coordinate system, is usually defined as a two-dimensional coordinate system occurring on the image plane with its origin at the image center, normally at the principal point or at the intersection of the fiducial marks, as illustrated by axes x and y in Figure 190. Image coordinates are used to describe positions on the film plane. Image coordinate units are usually millimeters or microns. This coordinate system is referenced as image coordinates (x, y) in this chapter.

Image Space Coordinate System

An image space coordinate system (Figure 191) is identical to the image coordinate system, except that it adds a third axis (z). The origin of the image space coordinate system is defined at the perspective center S, as shown in Figure 191. Its x-axis and y-axis are parallel to the x-axis and y-axis of the image plane coordinate system. The z-axis is the optical axis; therefore, the z value of an image point in the image space coordinate system is usually equal to -f (the focal length). Image space coordinates are used to describe positions inside the camera, usually in millimeters or microns. This coordinate system is referenced as image space coordinates (x, y, z) in this chapter.



Figure 191: Image Space and Ground Space Coordinate System

Ground Coordinate System

A ground coordinate system is usually defined as a three-dimensional coordinate system that utilizes a known map projection. Ground coordinates (X, Y, Z) are usually expressed in feet or meters. The Z value is elevation above mean sea level for a given vertical datum. This coordinate system is referenced as ground coordinates (X, Y, Z) in this chapter.

Geocentric and Topocentric Coordinate Systems

Most photogrammetric applications account for the Earth's curvature in their calculations. This is done by adding a correction value or by computing geometry in a coordinate system that includes curvature. Two such systems are geocentric and topocentric coordinates.

A geocentric coordinate system has its origin at the center of the Earth ellipsoid. The Z-axis equals the rotational axis of the Earth, and the X-axis passes through the Greenwich meridian. The Y-axis is perpendicular to both the Z-axis and X-axis, so as to create a three-dimensional coordinate system that follows the right hand rule.



A topocentric coordinate system has its origin at the center of the image projected onto the Earth ellipsoid. The three perpendicular coordinate axes are defined on a tangential plane at this center point. The plane is called the reference plane or the local datum. The x-axis is oriented eastward, the y-axis northward, and the z-axis is vertical to the reference plane (up).

For simplicity of presentation, the remainder of this chapter does not explicitly reference geocentric or topocentric coordinates. Basic photogrammetric principles can be presented without adding this additional level of complexity.

Terrestrial Photography

Photogrammetric applications associated with terrestrial or ground-based images utilize slightly different image and ground space coordinate systems. Figure 192 illustrates the two coordinate systems associated with image space and ground space.

Figure 192: Terrestrial Photography

The image and ground space coordinate systems are right-handed coordinate systems. Most terrestrial applications use a ground space coordinate system that was defined using a localized Cartesian coordinate system.



The image space coordinate system directs the z-axis toward the imaged object and the y-axis upward. The image x-axis is similar to that used in aerial applications. The XL, YL, and ZL coordinates define the position of the perspective center as it existed at the time of image capture. The ground coordinates of ground point A (XA, YA, and ZA) are defined within the ground space coordinate system (XG, YG, and ZG). With this definition, the rotation angles ω, ϕ, and κ are still defined as in the aerial photography conventions.

In IMAGINE LPS Project Manager, you can also use the ground (X, Y, Z) coordinate system to directly define GCPs, so the GCPs do not need to be transformed. In that case, the rotation angles ω′, ϕ′, and κ′ are defined differently, as shown in Figure 192.

Interior Orientation

Interior orientation defines the internal geometry of a camera or sensor as it existed at the time of data capture. The variables associated with image space are defined during the process of interior orientation. Interior orientation is primarily used to transform the image pixel coordinate system, or other image coordinate measurement system, to the image space coordinate system.

Figure 193 illustrates the variables associated with the internal geometry of an image captured from an aerial camera, where o represents the principal point and a represents an image point.

Figure 193: Internal Geometry

The internal geometry of a camera is defined by specifying the following variables:

• Principal point

• Focal length


• Fiducial marks

• Lens distortion

Principal Point and Focal Length

The principal point is mathematically defined as the intersection of the perpendicular line through the perspective center with the image plane. The length from the principal point to the perspective center is called the focal length (Wang, Z., 1990).

The image plane is commonly referred to as the focal plane. For wide-angle aerial cameras, the focal length is approximately 152 mm, or 6 inches. For some digital cameras, the focal length is 28 mm. Prior to conducting photogrammetric projects, the focal length of a metric camera is accurately determined, or calibrated, in a laboratory environment.

This mathematical definition is the basis of triangulation, but it is difficult to determine optically. The optical definition of the principal point is the image position where the optical axis intersects the image plane. In the laboratory, this is calibrated in two forms: the principal point of autocollimation and the principal point of symmetry, both of which can be found in the camera calibration report. Most applications prefer to use the principal point of symmetry, since it best compensates for lens distortion.

Fiducial Marks

As stated previously, one of the steps associated with interior orientation involves determining the image position of the principal point for each image in the project. Therefore, the image positions of the fiducial marks are measured on the image and subsequently compared to the calibrated coordinates of each fiducial mark.

Since the image space coordinate system has not yet been defined for each image, the measured image coordinates of the fiducial marks are referenced to a pixel or file coordinate system. The pixel coordinate system has an x coordinate (column) and a y coordinate (row). The origin of the pixel coordinate system is the upper left corner of the image, having a row and column value of 0 and 0, respectively. Figure 194 illustrates the difference between the pixel coordinate system and the image space coordinate system.


Figure 194: Pixel Coordinate System vs. Image Space Coordinate System

Using a two-dimensional affine transformation, the relationship between the pixel coordinate system and the image space coordinate system is defined. The following two-dimensional affine transformation equations can be used to determine the coefficients required to transform pixel coordinate measurements to image coordinates:

x = a1 + a2·X + a3·Y
y = b1 + b2·X + b3·Y

The x and y image coordinates associated with the calibrated fiducial marks and the X and Y pixel coordinates of the measured fiducial marks are used to determine the six affine transformation coefficients. The resulting six coefficients can then be used to transform each set of row (y) and column (x) pixel coordinates to image coordinates.

The quality of the two-dimensional affine transformation is represented using a root mean square (RMS) error. The RMS error represents the degree of correspondence between the calibrated fiducial mark coordinates and their respective measured image coordinate values. Large RMS errors indicate poor correspondence. This can be attributed to film deformation, poor scanning quality, out-of-date calibration information, or image mismeasurement.

The affine transformation also defines the translation between the origin of the pixel coordinate system and the image coordinate system (xo-file and yo-file). Additionally, the affine transformation takes into consideration rotation of the image coordinate system by considering angle Θ (see Figure 194). A scanned image of an aerial photograph is normally rotated due to the scanning procedure.


The degree of variation between the x- and y-axis is referred to as nonorthogonality. The two-dimensional affine transformation also considers the extent of nonorthogonality. The scale difference between the x-axis and the y-axis is also considered using the affine transformation.
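A minimal numpy sketch of fitting these six coefficients from measured fiducial marks (function and variable names are illustrative, not the ERDAS implementation):

    import numpy as np

    def fit_affine(pixel_xy, image_xy):
        """Solve x = a1 + a2*X + a3*Y and y = b1 + b2*X + b3*Y by least squares.

        pixel_xy: (n, 2) measured fiducial positions in the pixel system (X, Y)
        image_xy: (n, 2) calibrated fiducial positions in the image system (x, y)
        Returns the coefficient triples (a, b) and the RMS error of the fit.
        """
        X, Y = pixel_xy[:, 0], pixel_xy[:, 1]
        design = np.column_stack([np.ones_like(X), X, Y])
        a, *_ = np.linalg.lstsq(design, image_xy[:, 0], rcond=None)
        b, *_ = np.linalg.lstsq(design, image_xy[:, 1], rcond=None)
        residuals = design @ np.column_stack([a, b]) - image_xy
        rms = np.sqrt((residuals ** 2).mean())
        return a, b, rms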

Lens Distortion

Lens distortion deteriorates the positional accuracy of image points located on the image plane. Two types of lens distortion exist: radial and tangential lens distortion. Lens distortion occurs when light rays passing through the lens are bent, thereby changing directions and intersecting the image plane at positions deviant from the norm. Figure 195 illustrates the difference between radial and tangential lens distortion.

Figure 195: Radial vs. Tangential Lens Distortion

Radial lens distortion causes imaged points to be distorted along radial lines from the principal point o. The effect of radial lens distortion is represented as Δr. Radial lens distortion is also commonly referred to as symmetric lens distortion. Tangential lens distortion occurs at right angles to the radial lines from the principal point. The effect of tangential lens distortion is represented as Δt. Since tangential lens distortion is much smaller in magnitude than radial lens distortion, it is considered negligible.

The effects of lens distortion are commonly determined in a laboratory during the camera calibration procedure. The effects of radial lens distortion throughout an image can be approximated using a polynomial. The following polynomial is used to determine the coefficients associated with radial lens distortion:

Δr = k0·r + k1·r³ + k2·r⁵


Δr represents the radial distortion along a radial distance r from the principal point (Wolf, 1983). In most camera calibration reports, the lens distortion value is provided as a function of radial distance from the principal point or field angle. IMAGINE LPS Project Manager accommodates radial lens distortion parameters in both scenarios.

Three coefficients (k0, k1, and k2) are computed using statistical techniques. Once the coefficients are computed, each measurement taken on an image is corrected for radial lens distortion.
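A hedged sketch of applying such a correction, assuming the distortion is removed along the radial direction (a hypothetical helper, not an ERDAS routine):

    import numpy as np

    def radial_correction(x, y, k0, k1, k2):
        """Correct image coordinates (relative to the principal point) for
        radial lens distortion, where dr = k0*r + k1*r**3 + k2*r**5."""
        r = np.hypot(x, y)
        dr = k0 * r + k1 * r**3 + k2 * r**5
        # The distortion acts along the radial direction,
        # so x and y are scaled by the same factor.
        scale = np.where(r > 0, 1.0 - dr / r, 1.0)
        return x * scale, y * scale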

Exterior Orientation

Exterior orientation defines the position and angular orientation associated with an image. The variables defining the position and orientation of an image are referred to as the elements of exterior orientation, and they define the characteristics associated with an image at the time of exposure or capture.

The positional elements of exterior orientation include Xo, Yo, and Zo. They define the position of the perspective center (O) with respect to the ground space coordinate system (X, Y, and Z). Zo is commonly referred to as the height of the camera above sea level, which is commonly defined by a datum.

The angular or rotational elements of exterior orientation describe the relationship between the ground space coordinate system (X, Y, and Z) and the image space coordinate system (x, y, and z). Three rotation angles are commonly used to define angular orientation: omega (ω), phi (ϕ), and kappa (κ). Figure 196 illustrates the elements of exterior orientation.


Figure 196: Elements of Exterior Orientation

Omega is a rotation about the photographic x-axis, phi is a rotation about the photographic y-axis, and kappa is a rotation about the photographic z-axis; the angles are defined as positive if they are counterclockwise when viewed from the positive end of their respective axis. Different conventions are used to define the order and direction of the three rotation angles (Wang, Z., 1990). The ISPRS recommends the use of the ω, ϕ, κ convention. The photographic z-axis is equivalent to the optical axis (focal length). The x′, y′, and z′ coordinates are parallel to the ground space coordinate system.

Using the three rotation angles, the relationship between the image space coordinate system (x, y, and z) and ground space coordinate system (X, Y, and Z or x′, y′, and z′) can be determined. A 3 × 3 matrix defining the relationship between the two systems is used. This is referred to as the orientation or rotation matrix, M. The rotation matrix can be defined as follows:

        | m11  m12  m13 |
M   =   | m21  m22  m23 |
        | m31  m32  m33 |


The rotation matrix is derived by applying a sequential rotation of omega about the x-axis, phi about the y-axis, and kappa about the z-axis.
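One common reading of this convention can be sketched as follows; the multiplication order shown here is an assumption and should be verified against the convention actually in use:

    import numpy as np

    def rotation_matrix(omega, phi, kappa):
        """Build M by sequential rotations about x (omega), y (phi),
        and z (kappa). Angles are in radians."""
        co, so = np.cos(omega), np.sin(omega)
        cp, sp = np.cos(phi), np.sin(phi)
        ck, sk = np.cos(kappa), np.sin(kappa)
        m_omega = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
        m_phi   = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        m_kappa = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
        # Apply omega first, then phi, then kappa.
        return m_kappa @ m_phi @ m_omega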

The Collinearity Equation

The following section defines the relationship between the camera/sensor, the image, and the ground. Most photogrammetric tools utilize the following formulations in one form or another.

With reference to Figure 196, an image vector a can be defined as the vector from the exposure station O to the image point p. A ground space or object space vector A can be defined as the vector from the exposure station O to the ground point P. The image vector and ground vector are collinear, inferring that a line extending from the exposure station to the image point and to the ground is linear. The image vector and ground vector are only collinear if one is a scalar multiple of the other. Therefore, the following statement can be made:

a = kA

Where k is a scalar multiple. The image and ground vectors must be within the same coordinate system. Therefore, image vector a is comprised of the following components:

        | xp - xo |
a   =   | yp - yo |
        |   -f    |

Where xo and yo represent the image coordinates of the principal point.

Similarly, the ground vector can be formulated as follows:

        | Xp - Xo |
A   =   | Yp - Yo |
        | Zp - Zo |

In order for the image and ground vectors to be within the same coordinate system, the ground vector must be multiplied by the rotation matrix M. The following equation can be formulated:

a = kMA


Where k is a scalar multiple and M is the rotation matrix defined previously.

The above equation defines the relationship between the perspective center of the camera/sensor exposure station and ground point P appearing on an image with an image point location of p. This equation forms the basis of the collinearity condition that is used in most photogrammetric operations. The collinearity condition specifies that the exposure station, ground point, and its corresponding image point location must all lie along a straight line, thereby being collinear. Two equations comprise the collinearity condition:

xp - xo = -f · [m11(Xp - Xo) + m12(Yp - Yo) + m13(Zp - Zo)] / [m31(Xp - Xo) + m32(Yp - Yo) + m33(Zp - Zo)]

yp - yo = -f · [m21(Xp - Xo) + m22(Yp - Yo) + m23(Zp - Zo)] / [m31(Xp - Xo) + m32(Yp - Yo) + m33(Zp - Zo)]

One set of equations can be formulated for each ground point appearing on an image. The collinearity condition is commonly used to define the relationship between the camera/sensor, the image, and the ground.
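These equations translate directly into code. A compact sketch of the collinearity projection (a hypothetical helper, not an ERDAS routine):

    import numpy as np

    def project(ground_pt, exposure_station, m, f, principal_pt=(0.0, 0.0)):
        """Project ground point (X, Y, Z) into image coordinates (x, y)
        using the collinearity equations. m is the 3 x 3 rotation matrix,
        f the focal length (same units as the returned image coordinates)."""
        d = m @ (np.asarray(ground_pt, float) - np.asarray(exposure_station, float))
        # d[2] must be nonzero; it is the denominator of both equations.
        x = principal_pt[0] - f * d[0] / d[2]
        y = principal_pt[1] - f * d[1] / d[2]
        return x, y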

Photogrammetric Solutions

As stated previously, digital photogrammetry is used for many applications, ranging from orthorectification and automated elevation extraction to stereopair creation, feature collection, highly accurate point determination, and control point extension.

For any of the aforementioned tasks to be undertaken, a relationship between the camera/sensor, the image(s) in a project, and the ground must be defined. The following variables are used to define the relationship:

• Exterior orientation parameters for each image

• Interior orientation parameters for each image



• Accurate representation of the ground

Well-known obstacles in photogrammetry include defining the interior and exterior orientation parameters for each image in a project using a minimum number of GCPs. Due to the costs and labor intensive procedures associated with collecting ground control, most photogrammetric applications do not have an abundant number of GCPs. Additionally, the exterior orientation parameters associated with an image are normally unknown.

Depending on the input data provided, photogrammetric techniques such as space resection, space forward intersection, and bundle block adjustment are used to define the variables required to perform orthorectification, automated DEM extraction, stereopair creation, highly accurate point determination, and control point extension.

Space Resection

Space resection is a technique that is commonly used to determine the exterior orientation parameters associated with one image or many images, based on known GCPs. Space resection is based on the collinearity condition, which specifies that, for any image, the exposure station, the ground point, and its corresponding image point must lie along a straight line. If a minimum of three GCPs are known in the X, Y, and Z directions, space resection techniques can be used to determine the six exterior orientation parameters associated with an image. Space resection assumes that camera information is available.

Space resection is commonly used to perform single frame orthorectification, where one image is processed at a time. If multiple images are being used, space resection techniques require that a minimum of three GCPs be located on each image being processed. Using the collinearity condition, the positions of the exterior orientation parameters are computed: light rays originating from at least three GCPs pass through the image plane at the image positions of the GCPs and resect at the perspective center of the camera or sensor. Using least squares adjustment techniques, the most probable positions of exterior orientation can be computed. Space resection techniques can be applied to one image or multiple images.

Space Forward Intersection

Space forward intersection is a technique that is commonly used to determine the ground coordinates X, Y, and Z of points that appear in the overlapping areas of two or more images based on known interior orientation and known exterior orientation parameters. The collinearity condition is enforced, stating that the corresponding light rays from the two exposure stations pass through the corresponding image points on the two images and intersect at the same ground point. Figure 197 illustrates the concept associated with space forward intersection.


Figure 197: Space Forward Intersection

Space forward intersection techniques assume that the exterior orientation parameters associated with the images are known. Using the collinearity equations, the exterior orientation parameters along with the image coordinate measurements of point p on Image 1 and Image 2 are input to compute the Xp, Yp, and Zp coordinates of ground point p.

Space forward intersection techniques can be used for applications associated with collecting GCPs, cadastral mapping using airborne surveying techniques, and highly accurate point determination.
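As an illustration only, the geometry can be sketched with a simplified midpoint method that intersects two or more rays; this is not the rigorous collinearity-based least squares solution described above:

    import numpy as np

    def forward_intersect(centers, directions):
        """Estimate the ground point closest (in a least squares sense) to a
        set of image rays. centers: exposure stations (X, Y, Z); directions:
        corresponding ray direction vectors in ground space (for example,
        obtained by rotating image vectors with the transpose of M)."""
        a = np.zeros((3, 3))
        b = np.zeros(3)
        for c, d in zip(centers, directions):
            d = np.asarray(d, float) / np.linalg.norm(d)
            p = np.eye(3) - np.outer(d, d)   # projector onto plane normal to ray
            a += p
            b += p @ np.asarray(c, float)
        return np.linalg.solve(a, b)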

Bundle Block Adjustment

For mapping projects having more than two images, the use of space intersection and space resection techniques is limited. This can be attributed to the lack of information required to perform these tasks. For example, it is fairly uncommon for the exterior orientation parameters to be highly accurate for each photograph or image in a project, since these values are generated photogrammetrically. Airborne GPS and INS techniques normally provide initial approximations to exterior orientation, but the final values for these parameters must be adjusted to attain higher accuracies.

Similarly, there are rarely enough accurate GCPs for a project of 30 or more images to perform space resection (i.e., a minimum of 90 is required). Even if there were enough GCPs, the time required to identify and measure all of the points would be costly.



The costs associated with block triangulation and orthorectification are largely dependent on the number of GCPs used. To minimize the costs of a mapping project, fewer GCPs are collected and used. To ensure that high accuracies are attained, an approach known as bundle block adjustment is used.

A bundle block adjustment is best defined by examining the individual words in the term. A bundled solution is computed, including the exterior orientation parameters of each image in a block and the X, Y, and Z coordinates of tie points and adjusted GCPs. A block of images contained in a project is simultaneously processed in one solution. A statistical technique known as least squares adjustment is used to estimate the bundled solution for the entire block while also minimizing and distributing error.

Block triangulation is the process of defining the mathematical relationship between the images contained within a block, the camera or sensor model, and the ground. Once the relationship has been defined, accurate imagery and information concerning the Earth's surface can be created.

When processing frame camera, digital camera, videography, and nonmetric camera imagery, block triangulation is commonly referred to as aerial triangulation (AT). When processing imagery collected with a pushbroom sensor, block triangulation is commonly referred to as triangulation.

There are several models for block triangulation. The common models used in photogrammetry are the strip method, the independent model method, and the bundle method. Among these, bundle block adjustment is the most rigorous, considering the minimization and distribution of errors. Bundle block adjustment uses the collinearity condition as the basis for formulating the relationship between image space and ground space. IMAGINE LPS Project Manager uses bundle block adjustment techniques.

In order to understand the concepts associated with bundle block adjustment, an example is used comprising two images with three GCPs whose X, Y, and Z coordinates are known. Additionally, six tie points are available. Figure 198 illustrates the photogrammetric configuration.

Figure 198: Photogrammetric Configuration



Forming the Collinearity Equations

For each measured GCP, there are two corresponding image coordinates (x and y). Thus, two collinearity equations can be formulated to represent the relationship between the ground point and the corresponding image measurements. In the context of bundle block adjustment, these equations are known as observation equations.

If a GCP has been measured on the overlapping areas of two images, four equations can be written: two for the image measurements on the left image comprising the pair and two for the image measurements made on the right image comprising the pair. Thus, GCP A, measured on the overlap areas of the left image (Image 1) and the right image (Image 2), has four collinearity formulas:

xa1 - xo = -f · [m11(XA - Xo1) + m12(YA - Yo1) + m13(ZA - Zo1)] / [m31(XA - Xo1) + m32(YA - Yo1) + m33(ZA - Zo1)]

ya1 - yo = -f · [m21(XA - Xo1) + m22(YA - Yo1) + m23(ZA - Zo1)] / [m31(XA - Xo1) + m32(YA - Yo1) + m33(ZA - Zo1)]

xa2 - xo = -f · [m′11(XA - Xo2) + m′12(YA - Yo2) + m′13(ZA - Zo2)] / [m′31(XA - Xo2) + m′32(YA - Yo2) + m′33(ZA - Zo2)]

ya2 - yo = -f · [m′21(XA - Xo2) + m′22(YA - Yo2) + m′23(ZA - Zo2)] / [m′31(XA - Xo2) + m′32(YA - Yo2) + m′33(ZA - Zo2)]

Where:

(xa1, ya1) = the image measurement of GCP A on Image 1
(xa2, ya2) = the image measurement of GCP A on Image 2
Xo1, Yo1, Zo1 = the positional elements of exterior orientation on Image 1
Xo2, Yo2, Zo2 = the positional elements of exterior orientation on Image 2


If three GCPs have been measured on the overlap areas of two images, twelve equations can be formulated, which includes four equations for each GCP (refer to Figure 198 on page 617). Additionally, if six tie points have been measured on the overlap areas of the two images, twenty-four equations can be formulated, which includes four for each tie point. This is a total of 36 observation equations.

The previous example has the following unknowns:

• Six exterior orientation parameters for the left image (i.e., X, Y, Z, omega, phi, kappa)

• Six exterior orientation parameters for the right image (i.e., X, Y, Z, omega, phi and kappa), and

• X, Y, and Z coordinates of the tie points. Thus, for six tie points, this includes eighteen unknowns (six tie points times three X, Y, Z coordinates).

The total number of unknowns is 30. The overall quality of a bundle block adjustment is largely a function of the quality and redundancy in the input data. In this scenario, the redundancy in the project can be computed by subtracting the number of unknowns, 30, from the number of observations, 36. The resulting redundancy is six. This term is commonly referred to as the degrees of freedom in a solution.

Once each observation equation is formulated, the collinearity condition can be solved using an approach referred to as least squares adjustment.

Least Squares Adjustment

Least squares adjustment is a statistical technique that is used to estimate the unknown parameters associated with a solution while also minimizing error within the solution. With respect to block triangulation, least squares adjustment techniques are used to:

• Estimate or adjust the values associated with exterior orientation

• Estimate the X, Y, and Z coordinates associated with tie points

• Estimate or adjust the values associated with interior orientation

• Minimize and distribute data error through the network of observations



Data error is attributed to the inaccuracy associated with the input GCP coordinates, measured tie point and GCP image positions, camera information, and systematic errors.

The least squares approach requires iterative processing until a solution is attained. A solution is obtained when the residuals associated with the input data are minimized.

The least squares approach involves determining the corrections to the unknown parameters based on the criteria of minimizing input measurement residuals. The residuals are derived from the difference between the measured (i.e., user input) and computed value for any particular measurement in a project. In the block triangulation process, a functional model can be formed based upon the collinearity equations.

The functional model refers to the specification of an equation that can be used to relate measurements to parameters. In the context of photogrammetry, measurements include the image locations of GCPs and GCP coordinates, while the exterior orientations of all the images are important parameters estimated by the block triangulation process.

The residuals, which are minimized, include the image coordinates of the GCPs and tie points along with the known ground coordinates of the GCPs. A simplified version of the least squares condition can be broken down into a formulation as follows:

V = AX - L, including a weight matrix P

Where:

V = the matrix containing the image coordinate residuals
A = the matrix containing the partial derivatives with respect to the unknown parameters, including exterior orientation, interior orientation, XYZ tie point, and GCP coordinates
X = the matrix containing the corrections to the unknown parameters
L = the matrix containing the input observations (i.e., image coordinates and GCP coordinates)

The components of the least squares condition are directly related to the functional model based on collinearity equations. The A matrix is formed by differentiating the functional model, which is based on collinearity equations, with respect to the unknown parameters such as exterior orientation. The L matrix is formed by subtracting the initial results obtained from the functional model from the newly estimated results determined from a new iteration of processing. The X matrix contains the corrections to the unknown exterior orientation parameters. The X matrix is calculated in the following manner:

X = (A^t P A)^(-1) A^t P L


Where:

X = the matrix containing the corrections to the unknown parameters
A^t = the transpose of the matrix containing the partial derivatives with respect to the unknown parameters
P = the matrix containing the weights of the observations
L = the matrix containing the observations

Once a least squares iteration of processing is completed, the corrections to the unknown parameters are added to the initial estimates. For example, if initial approximations to exterior orientation are provided from airborne GPS and INS information, the estimated corrections computed from the least squares adjustment are added to the initial values to compute the updated exterior orientation values. This iterative process of least squares adjustment continues until the corrections to the unknown parameters are less than a user-specified threshold (commonly referred to as a convergence value).

The V residual matrix is computed at the end of each iteration of processing. Once an iteration is completed, the new estimates for the unknown parameters are used to recompute the input observations, such as the image coordinate values. The difference between the initial measurements and the new estimates provides the residuals. Residuals give preliminary indications of the accuracy of a solution: the residual values indicate the degree to which a particular observation (input) fits with the functional model. For example, the image residuals can reflect the quality of GCP collection in the field. After each successive iteration of processing, the residuals become smaller until they are satisfactorily minimized.

Once the least squares adjustment is completed, the block triangulation results include:

• Final exterior orientation parameters of each image in a block and their accuracy

• Final interior orientation parameters of each image in a block and their accuracy

• X, Y, and Z tie point coordinates and their accuracy

• Adjusted GCP coordinates and their residuals

• Image coordinate residuals

The results from the block triangulation are then used as the primary input for the following tasks:

• Stereo pair creation


• Feature collection

• Highly accurate point determination

• DEM extraction

• Orthorectification

Self-calibrating Bundle Adjustment

Systematic errors related to the imaging and processing system, such as lens distortion, film distortion, atmospheric refraction, and scanner errors, are normally present in the data. These errors reduce the accuracy of triangulation results, especially when dealing with large-scale imagery and high accuracy triangulation. There are several ways to reduce the influence of systematic errors, such as a posteriori compensation, test-field calibration, and the most common approach: self-calibration (Konecny and Lehmann, 1984; Wang, Z., 1990). Self-calibrating methods use additional parameters in the triangulation process to eliminate the systematic errors. How well this works depends on many factors, such as the strength of the block (overlap amount, crossing flight lines), the GCP and tie point distribution and amount, the size of systematic errors versus random errors, the significance of the additional parameters, the correlation between additional parameters, and other unknowns.

There was intensive research and development on additional parameter models in photogrammetry in the 1970s and 1980s, and many research results are available (e.g., Bauer and Müller, 1972; Brown, 1975; Ebner, 1976; Grün, 1978; Jacobsen, 1980; Jacobsen, 1982; Li, 1985; Wang, Y., 1988a; Stojic et al, 1998). Based on these scientific reports, IMAGINE LPS Project Manager provides four groups of additional parameters to choose from for different triangulation circumstances. In addition, IMAGINE LPS Project Manager allows the interior orientation parameters to be analytically calibrated within its self-calibrating bundle block adjustment capability.

Automatic Gross Error Detection

Normal random errors follow a statistical normal distribution. In contrast, gross errors are errors that are large and are not subject to the normal distribution. Gross errors among the input data for triangulation can lead to unreliable results. Research during the 1980s in the photogrammetric community resulted in significant achievements in automatic gross error detection in the triangulation process (e.g., Kubik, 1982; Li, 1983; Li, 1985; Jacobsen, 1984; El-Hakim and Ziemann, 1984; Wang, Y., 1988a).

Methods for gross error detection began with residual checking using data-snooping and were later extended to robust estimation (Wang, Z., 1990). The most common robust estimation method is iteration with selective weight functions. Based on the scientific research results from the photogrammetric community, IMAGINE LPS Project Manager offers two robust error detection methods within the triangulation process.


It is worth mentioning that the effectiveness of automatic error detection depends not only on the mathematical model, but also on the redundancy in the block. Therefore, more tie points in more overlap areas contribute to better gross error detection. In addition, inaccurate GCPs can distribute their errors to correct tie points; therefore, the ground and image coordinates of GCPs should have better accuracy than tie points when comparing them within the same scale space.

GCPs

The instrumental component of establishing an accurate relationship between the images in a project, the camera/sensor, and the ground is GCPs. GCPs are identifiable features located on the Earth's surface that have known ground coordinates in X, Y, and Z. A full GCP has X, Y, and Z (elevation) coordinates associated with it. Horizontal control only specifies the X and Y coordinates, while vertical control only specifies the Z. The following features on the Earth's surface are commonly used as GCPs:

• Intersection of roads

• Utility infrastructure (e.g., fire hydrants and manhole covers)

• Intersection of agricultural plots of land

• Survey benchmarks

Depending on the type of mapping project, GCPs can be collected from the following sources:

• Theodolite survey (millimeter to centimeter accuracy)

• Total station survey (millimeter to centimeter accuracy)

• Ground GPS (centimeter to meter accuracy)

• Planimetric and topographic maps (accuracy varies as a function of map scale; approximate accuracy ranges from several meters to 40 meters or more)

• Digital orthorectified images (X and Y coordinates can be collected to an accuracy dependent on the resolution of the orthorectified image)

• DEMs (for the collection of vertical GCPs having Z coordinates associated with them, where accuracy is dependent on the resolution of the DEM and the accuracy of the input DEM)


When imagery or photography is exposed, GCPs are recorded and subsequently displayed on the photography or imagery. During GCP measurement in IMAGINE LPS Project Manager, the image positions of GCPs appearing on an image, or on the overlap areas of the images, are collected.

It is highly recommended that a greater number of GCPs be available than are actually used in the block triangulation. Additional GCPs can be used as check points to independently verify the overall quality and accuracy of the block triangulation solution. A check point analysis compares the photogrammetrically computed ground coordinates of the check points to the original values. The result of the analysis is an RMSE that defines the degree of correspondence between the computed values and the original values. Lower RMSE values indicate better results.
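A check point comparison can be sketched as follows (an illustrative helper, not an ERDAS routine):

    import numpy as np

    def check_point_rmse(computed_xyz, surveyed_xyz):
        """RMSE between photogrammetrically computed check point coordinates
        and their surveyed values, reported per axis (X, Y, Z)."""
        diff = np.asarray(computed_xyz, float) - np.asarray(surveyed_xyz, float)
        return np.sqrt((diff ** 2).mean(axis=0))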

GCP Requirements

The minimum GCP requirements for an accurate mapping project vary with respect to the size of the project. With respect to establishing a relationship between image space and ground space, the theoretical minimum number of GCPs is two GCPs having X, Y, and Z coordinates and one GCP having a Z coordinate associated with it. This is a total of seven observations.

In establishing the mathematical relationship between image space and object space, seven parameters defining the relationship must be determined. The seven parameters include a scale factor (describing the scale difference between image space and ground space); X, Y, and Z (defining the positional differences between image space and object space); and three rotation angles (omega, phi, and kappa) that define the rotational relationship between image space and ground space.

In order to compute a unique solution, at least seven known parameters must be available. In using the two X, Y, Z GCPs and one vertical (Z) GCP, the relationship can be defined. However, to increase the accuracy of a mapping project, using more GCPs is highly recommended. The following descriptions are provided for various projects:

Processing One Image

If processing one image for the purpose of orthorectification (i.e., single frame orthorectification), the minimum number of GCPs required is three. Each GCP must have an X, Y, and Z coordinate associated with it. The GCPs should be evenly distributed to ensure that the camera/sensor is accurately modeled.


Processing a Strip of Images

If processing a strip of adjacent images, two GCPs for every third image are recommended. To increase the quality of orthorectification, measuring three GCPs at the corner edges of a strip is advantageous. Thus, during block triangulation a stronger geometry can be enforced in areas where there is less redundancy, such as the corner edges of a strip or a block. Figure 199 illustrates the GCP configuration for a strip of images having 60% overlap. The triangles represent the GCPs; the image positions of the GCPs are measured on the overlap areas of the imagery.

Figure 199: GCP Configuration

Processing Multiple Strips of Imagery

Figure 200 depicts the standard GCP configuration for a block of images, comprising four strips of images, each containing eight overlapping images.

Figure 200: GCPs in a Block of Images


In this case, the GCPs form a strong geometric network of observations. As a general rule, it is advantageous to have at least one GCP on every third image of a block. Additionally, whenever possible, locate GCPs that lie on multiple images, around the outside edges of a block, and at certain distances from one another within the block.

Tie Points

A tie point is a point whose ground coordinates are not known, but which is visually recognizable in the overlap area between two or more images. The corresponding image positions of tie points appearing on the overlap areas of multiple images are identified and measured. Ground coordinates for tie points are computed during block triangulation. Tie points can be measured both manually and automatically.

Tie points should be visually well-defined in all images. Ideally, they should show good contrast in two directions, like the corner of a building or a road intersection. Tie points should also be well distributed over the area of the block. Typically, nine tie points in each image are adequate for block triangulation. Figure 201 depicts the placement of tie points.

Figure 201: Point Distribution for Triangulation

In a block of images with 60% overlap and 25-30% sidelap, nine points are sufficient to tie together the block as well as individual strips (see Figure 202).



Figure 202: Tie Points in a Block

Automatic Tie Point Collection

Selecting and measuring tie points is very time-consuming and costly. Therefore, in recent years, one of the major focal points of research and development in photogrammetry has been automated triangulation, in which automatic tie point collection is the main issue. The other part of automated triangulation is automatic control point identification, which is still unsolved due to the complexity of the task. There are several valuable research results available for automated triangulation (e.g., Agouris and Schenk, 1996; Heipke, 1996; Krzystek, 1998; Mayr, 1995; Schenk, 1997; Tang et al, 1997; Tsingas, 1995; Wang, Y., 1998b).

After investigating the advantages and weaknesses of the existing methods, IMAGINE LPS Project Manager was designed to incorporate an advanced method for automatic tie point collection. It is designed to work with a variety of digital images, such as aerial images, satellite images, digital camera images, and close range images. It also supports the processing of multiple strips, including adjacent, diagonal, and cross-strips. Automatic tie point collection within IMAGINE LPS Project Manager successfully performs the following tasks:

• Automatic block configuration. Based on the initial input requirements, IMAGINE LPS Project Manager automatically detects the relationship of the block with respect to image adjacency.

• Automatic tie point extraction. The feature point extraction algorithms are used here to extract the candidates of tie points.

• Point transfer. Feature points appearing on multiple images are automatically matched and identified.

• Gross error detection. Erroneous points are automatically identified and removed from the solution.

• Tie point selection. The intended number of tie points defined by you is automatically selected as the final number of tie points.



The image matching strategies incorporated in IMAGINE LPS Project Manager for automatic tie point collection include coarse-to-fine matching; feature-based matching with geometrical and topological constraints, which is simplified from the structural matching algorithm (Wang, Y., 1998b); and least squares matching for high tie point accuracy.

Image Matching Techniques

Image matching refers to the automatic identification and measurement of corresponding image points that are located on the overlapping area of multiple images. The various image matching methods can be divided into three categories including:

• Area based matching

• Feature based matching

• Relation based matching

Area Based Matching

Area based matching is also called signal based matching. This method determines the correspondence between two image areas according to the similarity of their gray level values. The cross correlation and least squares correlation techniques are well-known methods for area based matching.

Correlation Windows

Area based matching uses correlation windows. These windows consist of a local neighborhood of pixels. One example of correlation windows is square neighborhoods (for example, 3 × 3, 5 × 5, or 7 × 7 pixels). In practice, the windows vary in shape and dimension based on the matching technique. Area correlation uses the characteristics of these windows to match ground feature locations in one image to ground features on the other.

A reference window is the source window on the first image, which remains at a constant location. Its dimensions are usually square in size (for example, 3 × 3, 5 × 5, and so on). Search windows are candidate windows on the second image that are evaluated relative to the reference window. During correlation, many different search windows are examined until a location is found that best matches the reference window.


Correlation Calculations

Two correlation calculations are described below: cross correlation and least squares correlation. Most area based matching calculations, including these methods, normalize the correlation windows. Therefore, it is not necessary to balance the contrast or brightness prior to running correlation. Cross correlation is more robust in that it requires a less accurate a priori position than least squares. However, its precision is limited to one pixel. Least squares correlation can achieve precision levels of one-tenth of a pixel, but requires an a priori position that is accurate to about two pixels. In practice, cross correlation is often followed by least squares for high accuracy.

Cross Correlation

Cross correlation computes the correlation coefficient of the gray values between the template window and the search window according to the following equation:

ρ = Σ [g1(c1,r1) - ḡ1][g2(c2,r2) - ḡ2] / sqrt( Σ [g1(c1,r1) - ḡ1]² · Σ [g2(c2,r2) - ḡ2]² )

with

ḡ1 = (1/n) Σ g1(c1,r1)        ḡ2 = (1/n) Σ g2(c2,r2)

Where (all sums run over the pixel indices i, j of the correlation window):

ρ = the correlation coefficient
g(c,r) = the gray value of the pixel (c,r)
c1, r1 = the pixel coordinates on the left image
c2, r2 = the pixel coordinates on the right image
n = the total number of pixels in the window
i, j = pixel index into the correlation window

When using the area based cross correlation, it is necessary to have a good initial position for the two correlation windows. If the exterior orientation parameters of the images being matched are known, a good initial position can be determined. Also, if the contrast in the windows is very poor, the correlation can fail.
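The equation above translates directly into a few lines of numpy (an illustrative sketch):

    import numpy as np

    def cross_correlation(reference, search):
        """Normalized cross correlation coefficient (the rho above) of two
        equally sized gray value windows."""
        g1 = reference - reference.mean()
        g2 = search - search.mean()
        denom = np.sqrt((g1 ** 2).sum() * (g2 ** 2).sum())
        return (g1 * g2).sum() / denom if denom > 0 else 0.0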



Least Squares Correlation

Least squares correlation uses the least squares estimation to derive parameters that best fit a search window to a reference window. This technique has been investigated thoroughly in photogrammetry (Ackermann, 1983; Grün and Baltsavias, 1988; Helava, 1988). It accounts for both gray scale and geometric differences, making it especially useful when ground features on one image look somewhat different on the other image (differences which occur when the surface terrain is quite steep or when the viewing angles are quite different).

Least squares correlation is iterative. The parameters calculated during the initial pass are used in the calculation of the second pass, and so on, until an optimum solution is determined. Least squares matching can result in high positional accuracy (about 0.1 pixels). However, it is sensitive to initial approximations: the initial coordinates for the search window prior to correlation must be accurate to about two pixels or better.

When least squares correlation fits a search window to the reference window, both radiometric (pixel gray values) and geometric (location, size, and shape of the search window) transformations are calculated. For example, suppose the change in gray values between two correlation windows is represented as a linear relationship, and the change in the window's geometry is represented by an affine transformation:

g2(c2, r2) = h0 + h1·g1(c1, r1)
c2 = a0 + a1·c1 + a2·r1
r2 = b0 + b1·c1 + b2·r1

Where:

c1, r1 = the pixel coordinate in the reference window
c2, r2 = the pixel coordinate in the search window
g1(c1, r1) = the gray value of pixel (c1, r1)
g2(c2, r2) = the gray value of pixel (c2, r2)
h0, h1 = linear gray value transformation parameters
a0, a1, a2 = affine geometric transformation parameters
b0, b1, b2 = affine geometric transformation parameters

Based on this assumption, the error equation for each pixel is derived:

v = (a1 + a2·c1 + a3·r1)·gc + (b1 + b2·c1 + b3·r1)·gr - h1 - h2·g1(c1, r1) + Δg

with Δg = g2(c2, r2) - g1(c1, r1), where gc and gr are the gradients of g2(c2, r2). (In the error equation, the coefficients a1...a3, b1...b3, h1, and h2 can be read as the linearized corrections to the geometric and radiometric parameters that are estimated in each iteration.)
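A sketch of evaluating this functional model for given parameters, assuming numpy and scipy are available (the iterative estimation loop that updates the parameters is omitted; the names are illustrative):

    import numpy as np
    from scipy.ndimage import map_coordinates

    def lsm_residuals(g1, g2, a, b, h):
        """Per-pixel residual for the least squares matching model:
        g2(c2, r2) ~ h0 + h1 * g1(c1, r1), with (c2, r2) given by an affine
        transform of the reference window coordinates."""
        rows, cols = np.mgrid[0:g1.shape[0], 0:g1.shape[1]].astype(float)
        c2 = a[0] + a[1] * cols + a[2] * rows
        r2 = b[0] + b[1] * cols + b[2] * rows
        # Bilinear resampling of the search image at the warped positions.
        g2_warped = map_coordinates(g2, [r2, c2], order=1)
        return g2_warped - (h[0] + h[1] * g1)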



Feature Based Matching

Feature based matching determines the correspondence between two image features. Most feature based techniques match extracted point features (this is called feature point matching), as opposed to other features, such as lines or complex objects. The feature points are also commonly referred to as interest points. Poor contrast areas can be avoided with feature based matching.

In order to implement feature based matching, the image features must initially be extracted. There are several well-known operators for feature point extraction. Examples include the Moravec Operator, the Dreschler Operator, and the Förstner Operator (Förstner and Gülch, 1987; Lü, 1988).

After the features are extracted, the attributes of the features are compared between two images. The feature pair having the attributes with the best fit is recognized as a match. IMAGINE LPS Project Manager utilizes the Förstner interest operator to extract feature points.

Relation Based Matching

Relation based matching is also called structural matching (Vosselman and Haala, 1992; Wang, Y., 1994; Wang, Y., 1995). This kind of matching technique uses both the image features and the relationships between the features. With relation based matching, corresponding image structures can be recognized automatically, without any a priori information. However, the process is time-consuming since it deals with varying types of information. Relation based matching can also be applied to the automatic recognition of control points.

Image Pyramid

Because of the large amount of image data, an image pyramid is usually adopted during image matching to reduce computation time and increase matching reliability. The pyramid is a data structure consisting of the same image represented several times, at a decreasing spatial resolution each time. Each level of the pyramid contains the image at a particular resolution. The matching process is performed at each level of resolution; the search is first performed at the lowest resolution level and subsequently at each higher level of resolution. Figure 203 shows a four-level image pyramid.



Figure 203: Image Pyramid for Matching at Coarse to Full Resolution

[Figure 203 shows a four-level pyramid: Level 1, 512 × 512 pixels, full resolution (1:1); Level 2, 256 × 256 pixels (resolution 1:2); Level 3, 128 × 128 pixels (resolution 1:4); Level 4, 64 × 64 pixels (resolution 1:8). Matching begins on level 4 and finishes on level 1.]

There are different resampling methods available for generating an image pyramid. Theoretical and practical investigations show that resampling methods based on the Gaussian filter, approximated by a binomial filter, have superior properties with respect to preserving image content and reducing computation time (Wang, Y., 1994). The Compute Pyramid Layers option in IMAGINE offers three filtering methods for continuous image data (raster images): 2x2, 3x3, or 4x4 kernel sizes. The 3x3 option, known as Binomial Interpolation, is strongly recommended for the LPS and Stereo Analyst modules. Binomial Interpolation computes over the 9 pixels in a 3x3 window of the higher resolution level and applies the result to one pixel of the current pyramid level. As noted, resampling methods based on the Gaussian filter preserve image content and reduce computation time; however, the Gaussian filter itself is sophisticated and very time-consuming, so in practice binomial filters are used to approximate it. The 3x3 binomial filter can be represented as:

         | 1 2 1 |
1/16  ×  | 2 4 2 |
         | 1 2 1 |

An advantage of the binomial filter is fast and simple computation, because a two-dimensional binomial filter can be decomposed into two one-dimensional filters, and finally into simple addition and shift operations.
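For illustration (a sketch, not the ERDAS code), one pyramid level can be generated by applying the separable 1-D kernel [1 2 1]/4 along each axis, which is equivalent to the 3x3 kernel above, and then keeping every second pixel:

import numpy as np
from scipy.ndimage import convolve1d

def reduce_level(image):
    """One level of binomial pyramid reduction (2:1 subsampling)."""
    k = np.array([1.0, 2.0, 1.0]) / 4.0            # 1-D binomial filter
    smoothed = convolve1d(image.astype(float), k, axis=0)
    smoothed = convolve1d(smoothed, k, axis=1)     # separable 3x3 = 1/16 kernel
    return smoothed[::2, ::2]

def build_pyramid(image, levels=4):
    """Level 1 is the full-resolution image; each level halves the resolution."""
    pyramid = [image]
    for _ in range(levels - 1):
        pyramid.append(reduce_level(pyramid[-1]))
    return pyramid

Because the kernel weights are powers of two, an integer implementation needs only additions and bit shifts, which is the decomposition referred to above.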



For some photogrammetric processes, such as automatic tie point collection and automatic DEM extraction, the pyramid layers are used to reduce computation time and increase reliability. These processes require that the pyramid layers fade out detailed features but retain the main features. The binomial filter meets this requirement, producing a moderate degree of image smoothing.

In contrast, while the 2x2 kernel filter produces good results for visual observation, it can smooth or sharpen the image considerably, causing more detail to be lost than desired. The Binomial Interpolation method preserves the original image information more effectively, since the nature of optical imaging and resampling theory are considered in the calculations. Detailed image features are gradually faded out while significant image features are retained, meeting the requirements for fast and reliable image matching. The computation speed of this method is similar to that of the 2x2 kernel (Wang, Y. and Yang, X., 1997).

See Pyramid Layers on page 162 for a description of 2x2 and 4x4 kernel sizes.

Satellite Photogrammetry

Satellite photogrammetry has slight variations compared to photogrammetric applications associated with aerial frame cameras. This document makes reference to the SPOT and IRS-1C satellites. The SPOT satellite provides 10-meter panchromatic imagery and 20-meter multispectral imagery (four multispectral bands of information).

The SPOT satellite carries two high resolution visible (HRV) sensors, each of which is a pushbroom scanner that takes a sequence of line images while the satellite circles the Earth. The focal length of the camera optic is 1084 mm, which is very large relative to the length of the camera (78 mm). The field of view is 4.1 degrees. The satellite orbit is circular, North-South and South-North, about 830 km above the Earth, and sun-synchronous. In a sun-synchronous orbit, the orbital plane precesses at the same rate as the Earth's revolution around the sun, so the satellite passes over a given latitude at the same local solar time.

The Indian Remote Sensing (IRS-1C) satellite utilizes a pushbroom sensor consisting of three individual CCDs. The ground resolution of the imagery ranges from 5 to 6 meters. The focal length of the optic is approximately 982 mm. The pixel size of the CCD is 7 microns. The images captured from the three CCDs are processed independently or merged into one image, and system corrected to account for the systematic error associated with the sensor.


Both the SPOT and IRS-1C satellites collect imagery by scanning along a line. This line is referred to as the scan line. For each line scanned within the SPOT and IRS-1C sensors, there is a unique perspective center and a unique set of rotation angles. The location of the perspective center relative to the line scanner is constant for each line (interior orientation and focal length). Since the motion of the satellite is smooth and practically linear over the length of a scene, the perspective centers of all scan lines of a scene are assumed to lie along a smooth line. Figure 204 illustrates the scanning technique.

Figure 204: Perspective Centers of SPOT Scan Lines

The satellite exposure station is defined as the perspective center in ground coordinates for the center scan line. The image captured by the satellite is called a scene. For example, a SPOT Pan 1A scene is composed of 6000 lines. For SPOT Pan 1A imagery, each of these lines consists of 6000 pixels. Each line is exposed for 1.5 milliseconds, so it takes 9 seconds to scan the entire scene. (A scene from SPOT XS 1A is composed of only 3000 lines and 3000 columns and has 20-meter pixels, while Pan has 10-meter pixels.)

NOTE: The following section addresses only the 10 meter SPOT Pan scenario.

A pixel in the SPOT image records the light detected by one of the 6000 light-sensitive elements in the camera. Each pixel is defined by file coordinates (column and row numbers). The physical dimension of a single light-sensitive element is 13 × 13 microns. This is the pixel size in image coordinates. The center of the scene is the center pixel of the center scan line. It is the origin of the image coordinate system. Figure 205 depicts image coordinates in a satellite scene:


Figure 205: Image Coordinates in a Satellite Scene

Where:

A = origin of file coordinates
A-XF, A-YF = file coordinate axes
C = origin of image coordinates (center of scene)
C-x, C-y = image coordinate axes

SPOT Interior Orientation

Figure 206 shows the interior orientation of a satellite scene. The transformation between file coordinates and image coordinates is constant.


Figure 206: Interior Orientation of a SPOT Scene

For each scan line, a separate bundle of light rays is defined, where:

Pk = image point
xk = x value of image coordinates for scan line k
f = focal length of the camera
Ok = perspective center for scan line k, aligned along the orbit
PPk = principal point for scan line k
lk = light rays for scan line k, bundled at perspective center Ok

SPOT Exterior Orientation

SPOT satellite geometry is stable and the sensor parameters, such as focal length, are well-known. However, the triangulation of SPOT scenes is somewhat unstable because of the narrow, almost parallel bundles of light rays.


Ephemeris data for the orbit are available in the header file of SPOT scenes; they provide information about the recording of the data and the satellite orbit. They give the satellite's position in three-dimensional, geocentric coordinates at 60-second increments. The velocity vector and some rotational velocities relating to the attitude of the camera are given, as well as the exact time of the center scan line of the scene. Ephemeris data that can be used in satellite triangulation include:

• Position of the satellite in geocentric coordinates (with the origin at the center of the Earth) to the nearest second

• Velocity vector, which is the direction of the satellite’s travel

• Attitude changes of the camera

• Time of exposure (exact) of the center scan line of the scene

The geocentric coordinates included with the ephemeris data are converted to a local ground system for use in triangulation. The center of a satellite scene is interpolated from the header data.

Light rays in a bundle defined by the SPOT sensor are almost parallel, lessening the importance of the satellite's position. Instead, the inclination angles (incidence angles) of the cameras on board the satellite become the critical data.

The scanner can produce a nadir view. Nadir is the point directly below the camera. SPOT has off-nadir viewing capability. Off-nadir refers to any point that is not directly beneath the satellite, but is off to an angle (i.e., East or West of the nadir).

A stereo scene is achieved when two images of the same area are acquired on different days from different orbits, one taken East of the other. For this to occur, there must be significant differences in the inclination angles. Inclination is the angle between a vertical on the ground at the center of the scene and a light ray from the exposure station. This angle defines the degree of off-nadir viewing when the scene was recorded. The cameras can be tilted in increments from a minimum of 0.6 degrees to a maximum of 27 degrees to the East (negative inclination) or West (positive inclination). Figure 207 illustrates the inclination.


Figure 207: Inclination of a Satellite Stereo-Scene (View from North to South)

Where:

C = center of the scene
I- = eastward inclination
I+ = westward inclination
O1, O2 = exposure stations (perspective centers of imagery)

The orientation angle of a satellite scene is the angle between a perpendicular to the center scan line and the North direction. The spatial motion of the satellite is described by the velocity vector. The real motion of the satellite above the ground is further distorted by the Earth's rotation.

The velocity vector of a satellite is the satellite's velocity if measured as a vector through a point on the spheroid. It provides a technique to represent the satellite's speed as if the imaged area were flat instead of being a curved surface (see Figure 208).


Figure 208: Velocity Vector and Orientation Angle of a Single Scene

Where:

O = orientation angle
C = center of the scene
V = velocity vector

Satellite block triangulation provides a model for calculating the spatial relationship between a satellite sensor and the ground coordinate system for each line of data. This relationship is expressed as the exterior orientation, which consists of

• the perspective center of the center scan line (i.e., X, Y, and Z),

• the change of perspective centers along the orbit,

• the three rotations of the center scan line (i.e., omega, phi, and kappa), and

• the changes of angles along the orbit.

In addition to fitting the bundle of light rays to the known points, satellite block triangulation also accounts for the motion of the satellite by determining the relationship of the perspective centers and rotation angles of the scan lines. It is assumed that the satellite travels in a smooth motion as a scene is being scanned. Therefore, once the exterior orientation of the center scan line is determined, the exterior orientation of any other scan line is calculated based on the distance of that scan line from the center, and the changes of the perspective center location and rotation angles.


Bundle adjustment for triangulating a satellite scene is similar to the bundle adjustment used for aerial images. A least squares adjustment is used to derive a set of parameters that comes the closest to fitting the control points to their known ground coordinates, and to intersecting tie points. The resulting parameters of satellite bundle adjustment are:

• Ground coordinates of the perspective center of the center scan line

• Rotation angles for the center scan line

• Coefficients, from which the perspective center and rotation angles of all other scan lines are calculated

• Ground coordinates of all tie points

Collinearity Equations & Satellite Block Triangulation

Modified collinearity equations are used to compute the exterior orientation parameters associated with the respective scan lines in the satellite scenes. Each scan line has a unique perspective center and individual rotation angles. When the satellite moves from one scan line to the next, these parameters change. Due to the smooth motion of the satellite in orbit, the changes are small and can be modeled by low order polynomial functions.
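A minimal sketch of this idea follows (the quadratic form and the coefficient layout are illustrative assumptions, not the ERDAS model): the exterior orientation of scan line j is taken as the center scan line values plus low order polynomial corrections in the line offset.

import numpy as np

def scan_line_orientation(j, center_line, center_eo, c1, c2):
    """Exterior orientation (X, Y, Z, omega, phi, kappa) of scan line j,
    modeled as the center-line values plus a quadratic polynomial in the
    offset from the center scan line."""
    d = j - center_line                          # scan line offset
    return (np.asarray(center_eo, dtype=float) +
            np.asarray(c1, dtype=float) * d +
            np.asarray(c2, dtype=float) * d**2)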

Control for Satellite Block Triangulation

Both GCPs and tie points can be used for satellite block triangulation of a stereo scene. For triangulating a single scene, only GCPs are used. In this case, space resection techniques are used to compute the exterior orientation parameters associated with the satellite as they existed at the time of image capture. A minimum of six GCPs is necessary; ten or more GCPs are recommended to obtain a good triangulation result. The best locations for GCPs in the scene are shown in Figure 209.


Figure 209: Ideal Point Distribution Over a Satellite Scene for Triangulation

Orthorectification

As stated previously, orthorectification is the process of removing geometric errors inherent within photography and imagery. The variables contributing to geometric errors include, but are not limited to:

• Camera and sensor orientation

• Systematic error associated with the camera or sensor

• Topographic relief displacement

• Earth curvature

By performing block triangulation or single frame resection, the parameters associated with camera and sensor orientation are defined. Utilizing least squares adjustment techniques during block triangulation minimizes the errors associated with camera or sensor instability. Additionally, the use of self-calibrating bundle adjustment (SCBA) techniques along with Additional Parameter (AP) modeling accounts for the systematic errors associated with camera interior geometry. The effects of the Earth's curvature are significant if a large photo block or satellite imagery is involved; they are accounted for during the block triangulation procedure by setting the relevant option. The effects of topographic relief displacement are accounted for by utilizing a DEM during the orthorectification procedure.

The orthorectification process takes the raw digital imagery and applies a DEM and triangulation results to create an orthorectified image. Once an orthorectified image is created, each pixel within the image possesses geometric fidelity. Thus, measurements taken off an orthorectified image represent the corresponding measurements as if they were taken on the Earth's surface (see Figure 210).


Figure 210: Orthorectification

An image or photograph with an orthographic projection is one for which every point looks as if an observer were looking straight down at it, along a line of sight that is orthogonal (perpendicular) to the Earth. The resulting orthorectified image is known as a digital orthoimage (see Figure 211).

Relief displacement is corrected by taking each pixel of a DEM and finding the equivalent position in the satellite or aerial image. A brightness value is determined for this location based on resampling of the surrounding pixels. The brightness value, elevation, and exterior orientation information are used to calculate the equivalent location in the orthoimage file.

Figure 211: Digital Orthophoto—Finding Gray Values


Where:

P = ground point
P1 = image point
O = perspective center (origin)
X, Z = ground coordinates (in DTM file)
f = focal length

In contrast to conventional rectification techniques, orthorectification relies on digital elevation data, unless the terrain is flat. Various sources of elevation data exist, such as the USGS DEM and DEMs automatically created from stereo image pairs. They are subject to data uncertainty, due in part to generalization or imperfections in the creation process, and the quality of the digital orthoimage is significantly affected by this uncertainty. Different image data require different accuracy levels of DEMs to keep the uncertainty-related errors within a controlled limit. While a near-vertical viewing SPOT scene can use a very coarse DEM, images with large incidence angles need better elevation data, such as USGS level-1 DEMs. For aerial photographs with a scale larger than 1:60000, elevation data accurate to 1 meter is recommended. The 1 meter accuracy reflects the accuracy of the Z coordinates in the DEM, not the DEM resolution or posting.

A detailed discussion of DEM requirements for orthorectification can be found in Yang and Williams (1997). See Bibliography.

Resampling methods used are nearest neighbor, bilinear interpolation, and cubic convolution. Generally, when the cell sizes of orthoimage pixels are selected, they should be similar to or larger than the cell sizes of the original image. For example, if the image was scanned at 25 microns (1016 dpi), producing an image of 9K × 9K pixels, one pixel would represent 0.025 mm on the image. Assuming that the image scale of this photo is 1:40000, the cell size on the ground is about 1 m. For the orthoimage, it is appropriate to choose a pixel spacing of 1 m or larger. Choosing a smaller pixel size oversamples the original image.

For information, see the scanning resolutions table, Table 52 on page 602.

For SPOT Pan images, a cell size of 10 meters is appropriate. Any further enlargement from the original scene to the orthophoto does not improve the image detail. For IRS-1C images, a cell size of 6 meters is appropriate.


Terrain Analysis

Introduction

Terrain analysis involves the processing and graphic simulation of elevation data. Terrain analysis software functions usually work with topographic data (also called terrain data or elevation data), in which an elevation (or Z value) is recorded at each X,Y location. However, terrain analysis functions are not restricted to topographic data. Any series of values, such as population densities, ground water pressure values, magnetic and gravity measurements, and chemical concentrations, may be used.

Topographic data are essential for studies of trafficability, route design, nonpoint source pollution, intervisibility, siting of recreation areas, etc. (Welch, 1990). Especially useful are products derived from topographic data. These include:

• slope images—illustrate changes in elevation over distance. Slope images are usually color-coded according to the steepness of the terrain at each pixel.

• aspect images—illustrate the prevailing direction that the slope faces at each pixel.

• shaded relief images—illustrate variations in terrain by differentiating areas that would be illuminated or shadowed by a light source simulating the sun.

Topographic data and its derivative products have many applications, including:

• calculating the shortest and most navigable path over a mountain range for constructing a road or routing a transmission line

• determining rates of snow melt based on variations in sun shadow, which is influenced by slope, aspect, and elevation

Terrain data are often used as a component in complex GIS modeling or classification routines. They can, for example, be a key to identifying wildlife habitats that are associated with specific elevations. Slope and aspect images are often an important factor in assessing the suitability of a site for a proposed use. Terrain data can also be used for vegetation classification based on species that are terrain-sensitive (e.g., Alpine vegetation).


Although this chapter mainly discusses the use of topographic data, the ERDAS IMAGINE terrain analysis functions can be used on data types other than topographic data.

See "Geographic Information Systems" on page 173 for more information about GIS modeling.

Terrain Data

Terrain data are usually expressed as a series of points with X, Y, and Z values. When terrain data are collected in the field, they are surveyed at a series of points, including the extreme high and low points of the terrain, along features of interest that define the topography (such as streams and ridge lines), and at various points in between. DEM and DTED data are expressed as regularly spaced points. To create DEM and DTED files, a regular grid is overlaid on the topographic contours. Elevations are read at each grid intersection point, as shown in Figure 212.

Figure 212: Regularly Spaced Terrain Data Points

Elevation data are derived from ground surveys and through manual photogrammetric methods. Elevation points can also be generated through digital orthographic methods.

See "Raster and Vector Data Sources" on page 55 for more details on DEM and DTED data. See "Photogrammetric Concepts" on page 595 for more information on the digital orthographic process.

[Figure 212 shows a topographic image with a grid overlay (left) and the resulting DEM of regularly spaced terrain data points, or Z values (right).]


To make topographic data usable in ERDAS IMAGINE, they must be represented as a surface, or DEM. A DEM is a one-band image file where the value of each pixel is a specific elevation value. A gray scale is used to differentiate variations in terrain.

DEMs can be edited with the Raster Editing capabilities of ERDAS IMAGINE. See "Raster Data" on page 1 for more information.

Slope Images

Slope is expressed as the change in elevation over a certain distance; in this case, the certain distance is the size of the pixel. Slope is most often expressed as a percentage, but can also be calculated in degrees.

Use the Slope function in Image Interpreter to generate a slope image.

In ERDAS IMAGINE, the relationship between percentage and degree expressions of slope is as follows:

• a 45° angle is considered a 100% slope

• a 90° angle is considered a 200% slope

• slopes less than 45° fall within the 1 - 100% range

• slopes between 45° and 90° are expressed as 100 - 200% slopes

A 3 × 3 pixel window is used to calculate the slope at each pixel. For a pixel at location X,Y, the elevations around it are used to calculate the slope as shown in Figure 213. In this figure, each pixel has a ground resolution of 30 × 30 meters.


Figure 213: 3 × 3 Window Calculates the Slope at Each Pixel

[Figure 213 shows a 3 × 3 window in which the center pixel X,Y has elevation e, and a, b, c, d, f, g, h, and i are the elevations of the pixels around it, together with an example window of elevations (10, 20, 25 / 22, 30, 25 / 20, 24, 18 meters):

a b c
d e f
g h i ]

First, the average elevation changes per unit of distance in the x and y direction (Δx and Δy) are calculated as:

Δx1 = c - a        Δy1 = a - g
Δx2 = f - d        Δy2 = b - h
Δx3 = i - g        Δy3 = c - i

Δx = (Δx1 + Δx2 + Δx3) / (3 × xs)
Δy = (Δy1 + Δy2 + Δy3) / (3 × ys)

Where:

a...i = elevation values of pixels in a 3 × 3 window, as shown above
xs = x pixel size = 30 meters
ys = y pixel size = 30 meters

The slope at pixel x,y is calculated as:

s = sqrt((Δx)² + (Δy)²) / 2

If s ≤ 1:    percent slope = s × 100
If s > 1:    percent slope = 200 - (100 / s)

slope in degrees = tan⁻¹(s) × (180 / π)


Example

Slope images are often used in road planning. For example, if the Department of Transportation specifies a maximum of 15% slope on any road, it would be possible to recode all slope values greater than 15% as unsuitable for road building. A hypothetical example is given in Figure 214, which shows how the slope is calculated for a single pixel.

Figure 214: Slope Calculation Example

[Figure 214 shows the 3 × 3 window of elevations used in the example. The center pixel, for which slope is being calculated, is shaded; the elevations of the neighboring pixels are given in meters:

10    20    25
22   X,Y    25
20    24    18 ]

So, for the hypothetical example:

Δx1 = 25 - 10 = 15        Δy1 = 10 - 20 = -10
Δx2 = 25 - 22 = 3         Δy2 = 20 - 24 = -4
Δx3 = 18 - 20 = -2        Δy3 = 25 - 18 = 7

Δx = (15 + 3 - 2) / (30 × 3) = 0.177
Δy = (-10 - 4 + 7) / (30 × 3) = -0.078

For the example, the slope is:

s = sqrt((0.177)² + (-0.078)²) / 2 = 0.0967

percent slope = 0.0967 × 100 = 9.67%
slope in degrees = tan⁻¹(0.0967) × 57.30 = 5.54
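The calculation is easy to verify in code. This NumPy sketch (illustrative, not the Image Interpreter implementation) reproduces the worked example; small differences from the text's 9.67% come from its intermediate rounding of Δx and Δy.

import numpy as np

def slope_3x3(window, xs=30.0, ys=30.0):
    """Percent and degree slope at the center of a 3 x 3 elevation window."""
    (a, b, c), (d, e, f), (g, h, i) = window
    dx = ((c - a) + (f - d) + (i - g)) / (3 * xs)
    dy = ((a - g) + (b - h) + (c - i)) / (3 * ys)
    s = np.hypot(dx, dy) / 2.0
    percent = s * 100.0 if s <= 1 else 200.0 - 100.0 / s
    degrees = np.degrees(np.arctan(s))
    return percent, degrees

# Worked example from Figure 214; the center value does not enter the formula.
print(slope_3x3([[10, 20, 25], [22, 0, 25], [20, 24, 18]]))  # ≈ (9.7, 5.5)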


Aspect Images

An aspect image is an image file that is gray scale coded according to the prevailing direction of the slope at each pixel. Aspect is expressed in degrees from north, clockwise, from 0 to 360. Due north is 0 degrees. A value of 90 degrees is due east, 180 degrees is due south, and 270 degrees is due west. A value of 361 degrees is used to identify flat surfaces, such as water bodies.

Use the Aspect function in Image Interpreter to generate an aspect image.

As with slope calculations, aspect uses a 3 × 3 window around each pixel to calculate the prevailing direction it faces. For pixel x,y with the following elevation values around it, the average changes in elevation in both x and y directions are calculated first. Each pixel is 30 × 30 meters in the following example:

Figure 215: 3 × 3 Window Calculates the Aspect at Each Pixel

[Figure 215 shows a 3 × 3 window in which the center pixel X,Y has elevation e, and a, b, c, d, f, g, h, and i are the elevations of the pixels around it:

a b c
d e f
g h i ]

Δx1 = c - a        Δy1 = a - g
Δx2 = f - d        Δy2 = b - h
Δx3 = i - g        Δy3 = c - i

Δx = (Δx1 + Δx2 + Δx3) / 3
Δy = (Δy1 + Δy2 + Δy3) / 3

Where:

a...i = elevation values of pixels in a 3 × 3 window, as shown above

If Δx = 0 and Δy = 0, then the aspect is flat (coded to 361 degrees). Otherwise, θ is calculated as:

θ = tan⁻¹(Δx / Δy)

Note that θ is calculated in radians.

Then, aspect is 180 + θ (in degrees).

Example

Aspect files are used in many of the same applications as slope files. In transportation planning, for example, north facing slopes are often avoided. Especially in northern climates, these would be exposed to the most severe weather and would hold snow and ice the longest. It would be possible to recode all pixels with north facing aspects as undesirable for road building. A hypothetical example is given in Figure 216, which shows how the aspect is calculated for a single pixel.

Figure 216: Aspect Calculation Example

[Figure 216 shows the 3 × 3 window of elevations used in the example. The center pixel, for which aspect is being calculated, is shaded; the elevations of the neighboring pixels are given in meters:

10    20    25
22   X,Y    25
20    24    18 ]

Δx = (15 + 3 - 2) / 3 = 5.33
Δy = (-10 - 4 + 7) / 3 = -2.33

θ = tan⁻¹(5.33 / -2.33) = 1.98

1.98 radians = 113.6 degrees
aspect = 180 + 113.6 = 293.6 degrees
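A corresponding sketch for aspect (again illustrative, not the Image Interpreter code) uses arctan2 to place θ in the correct quadrant, which reproduces the 1.98-radian result above:

import numpy as np

def aspect_3x3(window):
    """Aspect in degrees from north, clockwise; 361 codes flat surfaces."""
    (a, b, c), (d, e, f), (g, h, i) = window
    dx = ((c - a) + (f - d) + (i - g)) / 3.0
    dy = ((a - g) + (b - h) + (c - i)) / 3.0
    if dx == 0 and dy == 0:
        return 361.0
    theta = np.arctan2(dx, dy)       # quadrant-correct tan-1(dx / dy)
    return 180.0 + np.degrees(theta)

print(aspect_3x3([[10, 20, 25], [22, 0, 25], [20, 24, 18]]))  # ≈ 293.6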


Shaded Relief

A shaded relief image provides an illustration of variations in elevation. Based on a user-specified position of the sun, areas that would be in sunlight are highlighted and areas that would be in shadow are shaded. Shaded relief images are generated from an elevation surface, alone or in combination with an image file draped over the terrain.

It is important to note that the relief program identifies shadowed areas—i.e., those that are not in direct sun. It does not calculate the shadow that is cast by topographic features onto the surrounding surface. For example, consider a high mountain with sunlight coming from the northwest. Only the portions of the mountain that would be in shadow from a northwest light would be shaded; the software would not simulate a shadow that the mountain casts on the southeast side (see Figure 217).

Figure 217: Shaded Relief

Shaded relief images are an effective graphic tool. They can also be used in analysis, e.g., estimating snow melt over an area spanned by an elevation surface. A series of relief images can be generated to simulate the movement of the sun over the landscape, and snow melt rates can then be estimated for each pixel based on the amount of time it spends in sun or shadow. Shaded relief images can also be used to enhance subtle detail in gray scale images such as aeromagnetic, radar, and gravity maps.

Use the Shaded Relief function in Image Interpreter to generate a relief image.



In calculating relief, the software compares the user-specified sun position and angle with the angle each pixel faces. Each pixel is assigned a value between -1 and +1 to indicate the amount of light reflectance at that pixel.

• Negative numbers and zero values represent shadowed areas.

• Positive numbers represent sunny areas, with +1 assigned to the areas of highest reflectance.

The reflectance values are then applied to the original pixel values to get the final result. All negative values are set to 0 or to the minimum light level specified by you; these indicate shadowed areas. Light reflectance in sunny areas falls within a range of values depending on whether the pixel is directly facing the sun or not. (In the example above, pixels facing northwest would be the brightest. Pixels facing north-northwest and west-northwest would not be quite as bright.)

In a relief file, which is a DEM that shows surface relief, the surface reflectance values are multiplied by the color lookup values for the image file.

Topographic Normalization

Digital imagery from mountainous regions often contains a radiometric distortion known as topographic effect. Topographic effect results from the differences in illumination due to the angle of the sun and the angle of the terrain. This causes a variation in the image brightness values. Topographic effect is a combination of:

• incident illumination—the orientation of the surface with respect to the rays of the sun

• exitance angle—the amount of reflected energy as a function of the slope angle

• surface cover characteristics—rugged terrain with high mountains or steep slopes (Hodgson and Shelley, 1994)

One way to reduce topographic effect in digital imagery is by applying transformations based on the Lambertian or Non-Lambertian reflectance models. These models normalize the imagery, which makes it appear as if it were a flat surface.

The Topographic Normalize function in Image Interpreter uses a Lambertian Reflectance model to normalize topographic effect in VIS/IR imagery.


When using the Topographic Normalization model, the following information is needed:

• solar elevation and azimuth at time of image acquisition

• DEM file

• original imagery file (after atmospheric corrections)

Lambertian Reflectance Model

The Lambertian Reflectance model assumes that the surface reflects incident solar energy uniformly in all directions, and that variations in reflectance are due to the amount of incident radiation. The following equation produces normalized brightness values (Colby, 1991; Smith et al, 1980):

BVnormal λ = BVobserved λ / cos i

Where:

BVnormal λ = normalized brightness values
BVobserved λ = observed brightness values
cos i = cosine of the incidence angle

Incidence Angle

The incidence angle is defined from:

cos i = cos(90 - θs) cos θn + sin(90 - θs) sin θn cos(φs - φn)

Where:

i = the angle between the solar rays and the normal to the surface
θs = the elevation of the sun
φs = the azimuth of the sun
θn = the slope of each surface element
φn = the aspect of each surface element

If the surface has a slope of 0 degrees, then aspect is undefined and i is simply 90 - θs.
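As a sketch of how these equations combine (an assumed NumPy helper, not the Image Interpreter function itself; the 0.05 floor on cos i is an illustrative safeguard against division by near-zero values in deeply shadowed pixels):

import numpy as np

def lambertian_normalize(bv, slope_deg, aspect_deg, sun_elev_deg, sun_az_deg):
    """Normalize brightness values with the Lambertian model: BV / cos(i)."""
    theta_s = np.radians(sun_elev_deg)    # solar elevation
    phi_s = np.radians(sun_az_deg)        # solar azimuth
    theta_n = np.radians(slope_deg)       # terrain slope
    phi_n = np.radians(aspect_deg)        # terrain aspect
    cos_i = (np.cos(np.pi / 2 - theta_s) * np.cos(theta_n) +
             np.sin(np.pi / 2 - theta_s) * np.sin(theta_n) *
             np.cos(phi_s - phi_n))
    return bv / np.clip(cos_i, 0.05, None)   # illustrative floor on cos(i)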

Non-Lambertian Model

Minnaert (Minnaert and Szeicz, 1961) proposed that the observed surface does not reflect incident solar energy uniformly in all directions. Instead, he formulated the Non-Lambertian model, which takes into account variations in the terrain. This model, although more computationally demanding than the Lambertian model, may produce more accurate results.

In a Non-Lambertian Reflectance model, the following equation is used to normalize the brightness values in the image (Colby, 1991; Smith et al, 1980):

BVnormal λ = (BVobserved λ × cos e) / (cos^k i × cos^k e)


Where:

BVnormal λ = normalized brightness values
BVobserved λ = observed brightness values
cos i = cosine of the incidence angle
cos e = cosine of the exitance angle, or slope angle
k = the empirically derived Minnaert constant

Minnaert Constant

The Minnaert constant (k) may be found by regressing a set of observed brightness values from the remotely sensed imagery against known slope and aspect values, provided that all the observations in this set are the same type of land cover. The k value is the slope of the regression line (Hodgson and Shelley, 1994):

log (BVobserved λ × cos e) = log BVnormal λ + k log (cos i × cos e)
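A sketch of this regression (illustrative; it assumes samples from one land cover type with valid, positive cos i and cos e):

import numpy as np

def minnaert_k(bv, cos_i, cos_e):
    """Estimate k as the slope of the regression
    log(BV * cos e) = log(BVn) + k * log(cos i * cos e)."""
    x = np.log(cos_i * cos_e)
    y = np.log(bv * cos_e)
    k, log_bvn = np.polyfit(x, y, 1)   # slope = k, intercept = log(BVn)
    return k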

Use the Spatial Modeler to create a model based on the Non-Lambertian model.

NOTE: The Non-Lambertian model does not detect surfaces that are shadowed by intervening topographic features between each pixel and the sun. For these areas, a line-of-sight algorithm can identify such shadowed pixels.


Radar Concepts

Introduction

Radar images are quite different from other remotely sensed imagery you might use with ERDAS IMAGINE software. For example, radar images may have speckle noise. Radar images do, however, contain a great deal of information. ERDAS IMAGINE has many radar packages, including IMAGINE Radar Interpreter, IMAGINE OrthoRadar, IMAGINE StereoSAR DEM, IMAGINE InSAR, IMAGINE Coherence Change Detection (CCD), IMAGINE D-InSAR, and the SAR Metadata Editor, with which you can analyze your radar imagery.

You have already learned about the various methods of speckle suppression—those are IMAGINE Radar Interpreter functions. This chapter tells you about the advanced Synthetic Aperture Radar (SAR) processing packages that ERDAS IMAGINE has to offer. The following sections go into detail about the geometry and functionality of those modules of the IMAGINE Radar Mapping Suite.

IMAGINE OrthoRadar Theory

Parameters Required for Orthorectification

SAR image orthorectification requires certain information about the sensor and the SAR image. Different sensors (TerraSAR-X, RADARSAT-2, and so forth) express these parameters in different ways and in different units. To simplify the design of our SAR tools and easily support future sensors, all SAR images and sensors are described using our SAR node model. The sensor-specific parameters are converted to a SAR node model on import or direct-read.

The following table lists the parameters of the SAR model and their units. These parameters can be viewed in the SAR Model tab on the main SAR Model Properties (IMAGINE OrthoRadar) dialog.

Table 53: SAR Parameters Required for Georeferencing

Parameter Description Units

sensor The sensor that produced the image

orbit_direction Ascending (S->N) or descending (N->S)

data_type The data type

mode The original sensor acquisition mode

coord_sys Coordinate system for ephemeris ('I' = Inertial, 'F' = Fixed Body or Earth rotating)


year Year of image data collection

month Month of image data collection

day Day of image data collection

doy Greenwich Mean Time (GMT) day of year of image data collection

num_samples Number of samples in each line in the image (in range)

num_lines Number of lines in the image (in azimuth)

first_pt_secs_of_day Time of first ephemeris point (updated during orbit adjustment) seconds of day

first_pt_org Original time of first ephemeris point seconds of day

time_interval Time interval between ephemeris points (updated during orbit adjustment) seconds

time_interval_org Original time interval between ephemeris points seconds

image_start_time Image start time (azimuth direction) seconds of day

image_end_time Image end time (azimuth direction) seconds of day

image_duration Time duration of images (azimuth direction) seconds

semimajor Semimajor spheroid axis of Earth model used during SAR processing meters

semiminor Semiminor spheroid axis of Earth model used during SAR processing meters

target_height Assumed height of scene above Earth model used during SAR processing meters

look_side +90 = right-looking, -90 = left-looking degrees

local_incidence_angle Incidence angle relative to flat horizontal terrain at scene center (0 degrees = vertical reference, 90 degrees = horizontal reference) degrees

wavelength Wavelength of sensor meters

range_sampling_frequency Range sampling rate Hz

azimuth_sampling_frequency Azimuth sampling rate Hz

range_processing_bandwidth Range processing bandwidth Hz

azimuth_processing_bandwidth Azimuth processing bandwidth Hz

range_window_func Range window function

range_window_coefficient Range window coefficient

azimuth_window_func Azimuth window function

azimuth_window_coefficient Azimuth window coefficient


fdc_early_azimuth Doppler centroid in early azimuth

fdc_late_azimuth Doppler centroid in late azimuth

range_pix_spacing Slant or ground range pixel resolution meters

azimuth_line_spacing Azimuth line resolution meters

near_slant_range Slant range to near range pixel meters

num_pos_pts Number of ephemeris points provided

projection Ground or slant range projection (either PAR_GROUND or PAR_SLANT)

gnd2slt_coeffs[6] Coefficients used in polynomial transform from ground to slant range plane

time_dir_pixels Time direction for increasing pixel (range) direction

time_dir_lines Time direction for increasing azimuth (line) direction

rsx Spacecraft X position (Earth Fixed Body coord system) meters

rsy Spacecraft Y position (Earth Fixed Body coord system) meters

rsz Spacecraft Z position (Earth Fixed Body coord system) meters

vsx Spacecraft X velocity (Earth Fixed Body coord system) meters per second

vsy Spacecraft Y velocity (Earth Fixed Body coord system) meters per second

vsz Spacecraft Z velocity (Earth Fixed Body coord system) meters per second

Calculated Ephemeris Coefficients

orbit_state Status flag which indicates if the orbit has been adjusted

rs_coeffs[9] Coefficients used to model the sensor orbit positions as a function of time

vs_coeffs[9] Coefficients used to model the sensor orbit velocities as a function of time

Subset Information Relative to the Original SAR Image

sub_unity_subset Status flag indicating that the entire image is present

sub_range_start Starting range sample of the current raster relative to the original image

sub_range_end Ending range sample of the current raster relative to the original image

sub_range_degrade Range sample compression factor of the current raster relative to the original image

sub_range_num_samples Range number of samples of the current raster image

sub_azimuth_start Starting azimuth line of the current raster relative to the original image

sub_azimuth_end Ending azimuth line of the current raster relative to the original image

sub_azimuth_degrade Azimuth line compression factor of the current raster relative to the original image

sub_azimuth_num_lines Azimuth number of lines of the current raster image

Algorithm Description

Overview

The rectification process consists of several steps:

• ephemeris modeling and refinement (if GCPs are provided)

• sparse mapping grid generation

• output formation (including terrain corrections)

Each of these steps is described in detail in the following sections.

Ephemeris Coordinate System

The positions and velocities of the spacecraft are internally assumed to be in an Earth Fixed Body coordinate system. If the ephemeris is provided in an inertial coordinate system, IMAGINE OrthoRadar converts it from inertial to Earth Fixed Body coordinates.

The Earth Fixed Body coordinate system is an Earth-centered Cartesian coordinate system that rotates with the Earth. The x-axis radiates from the center of the Earth through the 0 longitude point on the equator. The z-axis radiates from the center of the Earth through the geographic North Pole. The y-axis completes the right-handed Cartesian coordinate system.

Ephemeris Modeling

The platform ephemeris is described by three or more platform locations and velocities. To predict the platform position and velocity at some time (t):

Rs,x = a1 + a2 t + a3 t²
Rs,y = b1 + b2 t + b3 t²
Rs,z = c1 + c2 t + c3 t²

Vs,x = d1 + d2 t + d3 t²
Vs,y = e1 + e2 t + e3 t²
Vs,z = f1 + f2 t + f3 t²

Where Rs is the sensor position and Vs is the sensor velocity:

Rs = [ Rs,x  Rs,y  Rs,z ]T

Vs = [ Vs,x  Vs,y  Vs,z ]T

To determine the model coefficients {ai, bi, ci} and {di, ei, fi}, first do some preprocessing. Select the best three consecutive data points prior to fitting (if more than three points are available). The best three data points must span the entire image in time. If more than one set of three data points spans the image, then select the set of three whose center time is closest to the center time of the image. Once a set of three consecutive data points is found, model the ephemeris with an exact solution.

Form matrix A:

    | 1.0  t1  t1² |
A = | 1.0  t2  t2² |
    | 1.0  t3  t3² |

Where t1, t2, and t3 are the times associated with each platform position. Select t such that t = 0.0 corresponds to the time of the second position point.

Form vector b:

b = [ Rs,x(1)  Rs,x(2)  Rs,x(3) ]T

Where Rs,x(i) is the x-coordinate of the i-th platform position (i = 1:3). We wish to solve Ax = b, where x is:

x = [ a1  a2  a3 ]T


To do so, use LU decomposition. The process is repeated for Rs,y, Rs,z, Vs,x, Vs,y, and Vs,z.
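In code, the exact quadratic fit is a 3 × 3 linear solve per component (NumPy's solver itself uses LU decomposition; the numbers below are placeholders, not real ephemeris):

import numpy as np

def fit_ephemeris_component(times, values):
    """Exact quadratic fit through three samples of one ephemeris component.
    times should be chosen so that t = 0.0 at the second point."""
    t = np.asarray(times, dtype=float)
    A = np.column_stack([np.ones(3), t, t**2])   # the matrix A above
    return np.linalg.solve(A, np.asarray(values, dtype=float))

# One call per component: Rs,x, Rs,y, Rs,z, Vs,x, Vs,y, Vs,z.
a1, a2, a3 = fit_ephemeris_component([-60.0, 0.0, 60.0],
                                     [7000.1e3, 7001.4e3, 7000.9e3])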

SAR Imaging Model

Before discussing the ephemeris adjustment, it is important to understand how to get from a pixel in the SAR image (as specified by a range line and range pixel) to a target position on the Earth [specified in Earth Centered System (ECS) coordinates or x, y, z]. This process is used throughout the ephemeris adjustment and the rectification itself.

For each range line and range pixel in the SAR image, the corresponding target location (Rt) is determined. The target location can be described as (lat, lon, elev) or (x, y, z) in ECS. The target can either lie on a smooth Earth ellipsoid or on a smooth Earth ellipsoid plus an elevation model. In either case, the location of Rt is determined by finding the intersection of the Doppler cone, range sphere, and Earth model. In order to do this, first find the Doppler centroid and slant range for a given SAR image pixel. Let i = range pixel and j = range line.

Time

Time T(j) is thus:

T(j) = T(0) + ((j - 1) / (Na - 1)) × tdur

Where T(0) is the image start time, Na is the number of range lines, and tdur is the image duration time.

Doppler Centroid

The computation of the Doppler centroid fd to use with the SAR imaging model depends on how the data was processed. If the data was deskewed, this value is always 0. If the data is skewed, then this value may be a nonzero constant or may vary with i.

Slant Range

The computation of the slant range to pixel i depends on the projection of the image. If the data is in a slant range projection, then the computation of slant range is straightforward:

Rsl(i) = rsl + (i - 1) × Δrsr

Where Rsl(i) is the slant range to pixel i, rsl is the near slant range, and Δrsr is the slant range pixel spacing.
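Both interpolations are one-liners; a sketch (1-based indices, as in the text):

def line_time(j, t0, n_lines, t_dur):
    """T(j): acquisition time of range line j."""
    return t0 + (j - 1) / (n_lines - 1) * t_dur

def slant_range(i, r_near, dr_slant):
    """Rsl(i): slant range to range pixel i (slant range projection)."""
    return r_near + (i - 1) * dr_slant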


If the projection is a ground range projection, then this computation is potentially more complicated and depends on how the data was originally projected into a ground range projection by the SAR processor.

Intersection of Doppler Cone, Range Sphere, and Earth Model

To find the location of the target Rt corresponding to a given range pixel and range line, the intersection of the Doppler cone, range sphere, and Earth model must be found. For an ellipsoid, these may be described as follows:

fD = (2 / (λ Rsl)) (Rs - Rt) · (Vs - Vt)

(fD > 0 for forward squint)

Rsl = | Rs - Rt |

(Rt,x² + Rt,y²) / (Re + htarg)² + Rt,z² / (Rm + htarg)² = 1

Where Rs and Vs are the platform position and velocity respectively, Vt is the target velocity (= 0 in this coordinate system), Re is the Earth semimajor axis, and Rm is the Earth semiminor axis. The platform position and velocity vectors Rs and Vs can be found as a function of time T(j) using the ephemeris equations developed previously.

Figure 218 graphically illustrates the solution for the target location given the sensor ephemeris, Doppler cone, range sphere, and Earth model.
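Numerically, the three conditions can be treated as a root-finding problem. The sketch below is an assumption about implementation strategy, not the IMAGINE OrthoRadar algorithm; it simply hands the three residuals to a general solver:

import numpy as np
from scipy.optimize import fsolve

def solve_target(rs, vs, r_slant, f_d, wavelength, re, rm, h_targ):
    """Find Rt at the intersection of the Doppler cone, range sphere, and
    Earth ellipsoid; Vt = 0 in the Earth Fixed Body frame."""
    def residuals(rt):
        d = rs - rt
        doppler = 2.0 / (wavelength * r_slant) * np.dot(d, vs) - f_d
        range_sphere = np.linalg.norm(d) - r_slant
        ellipsoid = ((rt[0]**2 + rt[1]**2) / (re + h_targ)**2 +
                     rt[2]**2 / (rm + h_targ)**2 - 1.0)
        return [doppler, range_sphere, ellipsoid]
    # A point on the ellipsoid directly below the sensor is a reasonable guess.
    guess = rs / np.linalg.norm(rs) * re
    return fsolve(residuals, guess)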


Figure 218: Doppler Cone

Ephemeris Adjustment

There are three possible adjustments that can be made: along track, cross track, and radial. In IMAGINE OrthoRadar, the along track adjustment is performed separately; the cross track and radial adjustments are made simultaneously. These adjustments are made using residuals associated with GCPs. Each GCP has a map coordinate (such as lat, lon) and an elevation. Also, an SAR image range line and range pixel must be given. The SAR image range line and range pixel are converted to Rt using the method described previously (substituting htarg = elevation of the GCP above the ellipsoid used in SAR processing).

The along track adjustment is computed first, followed by the cross track and radial adjustments. The two adjustment steps are then repeated.

For more information, consult SAR Geocoding: Data and Systems, Gunter Schreier, Ed.

Orthorectification

The ultimate goal in orthorectification is to determine, for a given target location on the ground, the associated range line and range pixel from the input SAR image, including the effects of terrain.


To do this, there are several steps. First, take the target location and locate the associated range line and range pixel from the input SAR image, assuming smooth terrain. This places you at approximately the correct range line. Next, look up the elevation at the target from the input DEM. The elevation, in combination with the known slant range to the target, is used to determine the correct range pixel. The data can then be interpolated from the input SAR image.

Sparse Mapping Grid

Select a block size M. For every Mth range line and Mth range pixel, compute Rt on a smooth ellipsoid (using the SAR Earth model) and save these values in an array. A smaller M implies less distortion between grid points. Regardless of M and the total number of samples and lines in the input SAR image, always compute Rt at the end of every line and for the very last line. The spacing between points in the sparse mapping grid is regular, except at the far edges of the grid.

Output Formation

For each point in the output grid, there is an associated Rt. This target should fall on the surface of the Earth model used for SAR processing; thus a conversion is made between the Earth model used for the output grid and the Earth model used during SAR processing.

The process of orthorectification starts with a location on the ground. The line and pixel location corresponding to this map location can be determined from the map location and the sparse mapping grid. The value at this pixel location is then assigned to the map location. Figure 219 illustrates this process.
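Between grid nodes, the stored values can be interpolated bilinearly. A sketch (illustrative; edge handling relies on the grid always including the last line and pixel, as noted above):

import numpy as np

def lookup_sparse_grid(grid, line, pixel, m):
    """Bilinearly interpolate a value stored every m-th line and pixel."""
    gl, gp = line / m, pixel / m            # position in grid units
    l0, p0 = int(gl), int(gp)               # surrounding node indices
    fl, fp = gl - l0, gp - p0               # fractional offsets
    return ((1 - fl) * (1 - fp) * grid[l0, p0] +
            fl * (1 - fp) * grid[l0 + 1, p0] +
            (1 - fl) * fp * grid[l0, p0 + 1] +
            fl * fp * grid[l0 + 1, p0 + 1])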

Figure 219: Sparse Mapping and Output Grids


Conversion to Real / Imaginary Data

The formula used for converting a block of complex 1-layer data into a 2-layer IQ (real/imaginary) output data stack, or converting a block of 2-layer input data into a 2-layer IQ (real/imaginary) output data stack, is shown here. The conversion is available in the Radar Conversions dialog. The magnitude and phase data are represented in Figure 220.

real = magnitude × cos(phase)
imaginary = magnitude × sin(phase)

The polar and Cartesian notations are mathematically equivalent, simply expressed in different forms. For radar systems, a complex number implies that the representation of a signal, or data file, needs measures of both magnitude and phase. In the context of digital SAR, a complex number can also be represented by the real in-phase component (I) and the imaginary quadrature component (Q). The role of complex numbers is an essential part of the signal, as signal phase is used to obtain high resolution.

Source: European Space Agency, 2010a and MathResources, 2010

• The set {magnitude, phase}, or {r,θ} represents coordinates in polar form.

• The set {real, imaginary}, or { I, Q } represents coordinates in Cartesian form.
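A sketch of the conversion in both directions (NumPy; phase wrapped to [0, 2π) as in the definition below):

import numpy as np

def polar_to_iq(magnitude, phase):
    """{magnitude, phase} (polar) to {I, Q} (Cartesian) layers."""
    return magnitude * np.cos(phase), magnitude * np.sin(phase)

def iq_to_polar(i, q):
    """{I, Q} back to {magnitude, phase in [0, 2*pi)}."""
    return np.hypot(i, q), np.mod(np.arctan2(q, i), 2 * np.pi)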

Figure 220: Magnitude and Phase Data as shown in the complex plane


Where:

Real = Real axis
Imaginary = Imaginary axis
r = Magnitude, a dimensionless number describing the length of the backscatter vector in the complex domain
phase = Relative interferometric phase angle in radians, in the range 0 to 2π, measured from the positive Real axis toward the backscatter vector

IMAGINE StereoSAR DEM Theory

Introduction

This section details the theory that supports IMAGINE StereoSAR DEM processing. To understand the way IMAGINE StereoSAR DEM works to create DEMs, it is first helpful to look at the process from beginning to end. Figure 221 shows a stylized process for basic operation of the IMAGINE StereoSAR DEM module.


Figure 221: IMAGINE StereoSAR DEM Process Flow

The following discussion includes algorithm descriptions as well as discussion of various processing options and selection of processing parameters.

Input

There are many elements to consider in the Input step. These include beam mode selection, importing files, orbit correction, and ephemeris data.

Beam Mode Selection

Final accuracy and precision of the DEM produced by the IMAGINE StereoSAR DEM module is predicated on two separate calculation sequences: the automatic image correlation and the sensor position/triangulation calculations. These two calculation sequences are joined in the final step: Height.

[Figure 221 process flow: Image 1 and Image 2 are imported; tie points drive an affine coregistration of Image 2; automatic image correlation produces a parallax file; GCPs and range/Doppler stereo intersection yield a sensor-based DEM, which is resampled and reprojected to the output digital elevation map.]


The two initial calculation sequences have disparate beam mode demands. Automatic correlation works best with images acquired with as little angular divergence as possible. This is because different imaging angles produce different-looking images, and the automatic correlator is looking for image similarity. The requirement of image similarity is the same reason images acquired at different times can be hard to correlate. For example, images taken of agricultural areas during different seasons can be extremely different and, therefore, difficult or impossible for the automatic correlator to process successfully. Conversely, the triangulation calculation is most accurate when there is a large intersection angle between the two images (see Figure 222). This results in images that are truly different due to geometric distortion. The ERDAS IMAGINE automatic image correlator has proven sufficiently robust to match images with significant distortion if the proper correlator parameters are used.

Figure 222: SAR Image Intersection

NOTE: IMAGINE StereoSAR DEM has built-in checks that assure the sensor associated with the Reference image is closer to the imaged area than the sensor associated with the Match image.

A third factor, cost effectiveness, must also often be evaluated. First, select either Fine or Standard Beam modes. Fine Beam images with a pixel size of six meters would seem, at first glance, to offer a much better DEM than Standard Beam with 12.5-meter pixels. However, a Fine Beam image covers only one-fourth the area of a Standard Beam image and produces a DEM only minimally better.


Various Standard Beam combinations, such as an S3/S6 or an S3/S7, cover a larger area per scene, but only for the overlap area, which might be only three-quarters of the scene area. Testing at both ERDAS and RADARSAT has indicated that a stereopair consisting of a Wide Beam mode 2 image and a Standard Beam mode 7 image produces the most cost-effective DEM at a resolution consistent with the resolution of the instrument and the technique.

Import

The imagery required for the IMAGINE StereoSAR module can be imported using the ERDAS IMAGINE radar-specific importers for either RADARSAT or ESA (ERS-1, ERS-2). These importers automatically extract data from the image header files and store it in an Hfa file attached to the image. In addition, they abstract key parameters necessary for sensor modeling and attach these to the image as a generic SAR Node Hfa file. Other radar imagery (for example, SIR-C) can be imported using the Generic Binary Importer. The SAR Metadata Editor can then be used to attach the generic SAR Node Hfa file.

Orbit Correction

Extensive testing of both the IMAGINE OrthoRadar and IMAGINE StereoSAR modules has indicated that the ephemeris data from the RADARSAT and ESA radar satellites is very accurate (see appended accuracy reports). However, the accuracy does vary with each image, and there is no a priori way to determine the accuracy of a particular data set. The modules of the IMAGINE Radar Mapping Suite (Coherence Change Detection, IMAGINE InSAR, IMAGINE StereoSAR, and IMAGINE OrthoRadar) allow for correction of the sensor model using GCPs.

Since the supplied orbit ephemeris is very accurate, orbit correction should only be attempted if you have very good GCPs. In practice, it has been found that GCPs from 1:24 000 scale maps or a handheld GPS are the minimum acceptable accuracy. In some instances, a single accurate GCP has been found to result in a significant increase in accuracy. As with image warping, a uniform distribution of GCPs results in a better overall result and a lower RMS error.

Again, accurate GCPs are an essential requirement; if your GCPs are questionable, you are probably better off not using them. Similarly, a GCP must be recognizable in the radar imagery to within plus or minus one to two pixels. Road intersections, reservoir dams, airports, or similar man-made features are usually best. Lacking one very accurate and locatable GCP, it is best to utilize several good GCPs dispersed throughout the image, as would be done for a rectification.

Page 701: ERDAS Field Guide

Radar Concepts 671

Ellipsoid vs. Geoid Heights

The IMAGINE Radar Mapping Suite is based on the World Geodetic System WGS 84 Earth ellipsoid. The sensor model uses this ellipsoid for the sensor geometry. For maximum accuracy, all GCPs used to refine the sensor model for all IMAGINE Radar Mapping Suite modules should be converted to this ellipsoid in all three dimensions: latitude, longitude, and elevation. Note that, while ERDAS IMAGINE reprojection converts latitude and longitude to UTM WGS 84 for many input projections, it does not modify the elevation values. To do this, it is necessary to determine the elevation offset between WGS 84 and the datum of the input GCPs. For some input datums, this can be accomplished using the Web site www.ngs.noaa.gov/GEOID/geoid.html. This offset must then be added to, or subtracted from, the input GCP elevation. Many handheld GPS units can be set to output in WGS 84 coordinates.

One elegant feature of the IMAGINE StereoSAR DEM module is that orbit refinement using GCPs can be applied at any time in the process flow without losing the processing work to that stage. The stereopair can even be processed all the way through to a final DEM, and then you can go back and refine the orbit. The refined orbit is transferred through all the intermediate files (Subset, Despeckle, and so forth); only the final step, Height, needs to be rerun using the new refined orbit model.

The ephemeris normally received with RADARSAT or ERS-1 and ERS-2 imagery is based on an extrapolation of the sensor orbit from previous positions. If the satellite received an orbit correction command, this effect might not be reflected in the previous position extrapolation. The receiving stations for both satellites also do ephemeris calculations that include post image acquisition sensor positions. These are generally more accurate. They are not, unfortunately, easy to acquire and attach to the imagery.

Refined Ephemeris

For information, see IMAGINE InSAR Theory on page 679.

Subset

Use of the Subset option is straightforward. It is not necessary that the two subsets define exactly the same area: an approximation is acceptable. This option is normally used in two circumstances. First, it can be used to define a small subset for testing correlation parameters prior to running a full scene. Second, it can be used to constrain the two input images to only the overlap area. Constraining the input images is useful for saving data space, but is not necessary for the functioning of IMAGINE StereoSAR DEM; it is purely optional.


Despeckle

The functions to despeckle the images prior to automatic correlation are optional. The rationale for despeckling at this time is twofold. First, image speckle noise is not correlated between the two images: it is randomly distributed in both. Thus, it serves only to confuse the automatic correlation calculation, and its presence can contribute to false positives during the correlation process. Second, as discussed under Beam Mode Selection, the two images the software is trying to match differ because of viewing geometry differences. The slight low-pass character of the despeckle algorithm may actually move both images toward a more uniform appearance, which aids automatic correlation.

Functionally, the despeckling algorithms presented here are identical to those available in the IMAGINE Radar Interpreter. In practice, a 3 × 3 or 5 × 5 kernel has been found to work acceptably. Note that all ERDAS IMAGINE speckle reduction algorithms allow the kernel to be tuned to the image being processed via the Coefficient of Variation. Calculation of this parameter is accessed through the IMAGINE Radar Interpreter Speckle Suppression interface.

See the ERDAS IMAGINE Professional Tour Guide for the IMAGINE Radar Interpreter tour guide.

Degrade

The Degrade option offered at this step in the processing is commonly used for two purposes. First, if the input imagery is Single Look Complex (SLC), the pixels are not square (this is shown in the Range and Azimuth pixel spacing sizes), and it may be desirable to adjust the Y scale factor to produce pixels that are more nearly square. This is purely an option; the software accurately processes undegraded SLC imagery. Second, if data space or processing time is limited, it may be useful to reduce the overall size of the image file while still processing the full images. Under those circumstances, a reduction of two or three in both X and Y might be appropriate.

Note that the processing flow recommended for maximum accuracy processes the full-resolution scenes and correlates every pixel. Degrade is used subsequent to Match to lower the DEM variance (LE90) and to increase the pixel size to approximately the desired output posting.

Rescale

This operation converts the input imagery bit format, commonly unsigned 16-bit, to unsigned 8-bit using a two standard deviations stretch. This is done to reduce the overall data file sizes. Testing has not shown any advantage to retaining the original 16-bit format, and use of this option is routinely recommended.
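The stretch described above is easy to visualize in code. The following is a minimal sketch (not the module's actual implementation), assuming NumPy and an unsigned 16-bit input band:

import numpy as np

def rescale_to_8bit(band, n_std=2.0):
    # Remap a band to unsigned 8-bit using an n-standard-deviations stretch.
    data = band.astype(np.float64)
    mean, std = data.mean(), data.std()
    low, high = mean - n_std * std, mean + n_std * std
    # Values beyond the stretch limits saturate at 0 or 255.
    stretched = (data - low) / (high - low) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

# Example with a synthetic unsigned 16-bit band:
band = np.random.randint(0, 65535, size=(512, 512), dtype=np.uint16)
band8 = rescale_to_8bit(band)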


Coregister

Coregister is the first of the Process Steps (other than Input) that must be done. This operation serves two important functions: proper user input at this processing level affects the speed of subsequent processing, and may affect the accuracy of the final output DEM.

The coregistration operation uses an affine transformation to rotate the Match image so that it more closely aligns with the Reference image. The purpose is to adjust the images so that the elevation-induced pixel offset (parallax) is mostly in the range (x-axis) direction (that is, the images are nearly epipolar). Doing this greatly reduces the required size of the search window in the Match step.

One output of this step is the minimum and maximum parallax offsets, in pixels, in both the x- and y-axis directions. These values must be recorded by the operator and are used in the Match step to tune the IMAGINE StereoSAR DEM correlator parameter file (.ssc). These values are critical to this tuning operation and, therefore, must be correctly extracted from the Coregister step.

Two basic guidelines define the selection process for the tie points used for the coregistration. First, as with any image-to-image coregistration, a better result is obtained if the tie points are uniformly distributed throughout the images. Second, since you want the calculation to output the minimum and maximum parallax offsets in both the x- and y-axis directions, the tie points selected must be those that have the minimum and maximum parallax.

In practice, the following procedure has been found successful. First, select a fairly uniform grid of about eight tie points that defines the lowest elevation within the image. Coastlines, river flood plains, roads, and agricultural fields commonly meet this criterion. Use of the Solve Geometric Model icon on the StereoSAR Coregistration Tool should yield values in the -5 to +5 range at this time. Next, identify and select three or four of the highest elevations within the image. After selecting each tie point, click the Solve Geometric Model icon and note the effect of each tie point on the minimum and maximum parallax values. When you feel you have quantified these values, write them down and apply the resultant transform to the image.

Match

An essential component, and the major time-saver, of the IMAGINE StereoSAR DEM software is automatic image correlation. In automatic image correlation, a small subset (image chip) of the Reference image, termed the template (see Figure 223), is compared to various regions of the Match image's search area (Figure 224) to find the best match point. The center pixel of the template is then said to be correlated with the center pixel of the Match region. The software then proceeds to the next pixel of interest, which becomes the center pixel of the new template.

Figure 223 shows the upper left (UL) corner of the Reference image. An 11 × 11 pixel template is shown centered on the pixel of interest: X = 8, Y = 8.


Figure 223: UL Corner of the Reference Image

Figure 224 shows the UL corner of the Match image. The 11 × 11 pixel template is shown centered on the initial estimated correlation pixel X = 8, Y = 8. The 15 × 7 pixel search area is shown in a dashed line. Since most of the parallax shift is in the range direction (x-axis), the search area should always be a rectangle to minimize search time.

Figure 224: UL Corner of the Match Image

The ERDAS IMAGINE automatic image correlator works on the hierarchical pyramid technique. This means that the image is successively reduced in resolution to provide a coregistered set of images of increasing pixel size (see Figure 225). The automatic correlation software starts at the top of the resolution pyramid with the lowest resolution image being processed first. The results of this process are filtered and interpolated before being passed to the next highest resolution layer as the initial estimated correlation point. From this estimated point, the search is performed on this higher resolution layer.
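A coregistered set of successively reduced images can be sketched with simple 2 × 2 block averaging. This is an illustrative sketch only, not the ERDAS IMAGINE correlator code:

import numpy as np

def build_pyramid(image, levels=4):
    # Return a list of images, full resolution first, each level half the size.
    pyramid = [image.astype(np.float64)]
    for _ in range(levels - 1):
        img = pyramid[-1]
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
        # A 2 x 2 block average halves the resolution at each level.
        reduced = img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(reduced)
    return pyramid

pyr = build_pyramid(np.random.rand(512, 512), levels=4)
# pyr[0] is 512 x 512 (Level 1); pyr[3] is 64 x 64 (Level 4). Matching starts
# on the smallest image and passes its results down toward the largest.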



Figure 225: Image Pyramid

Template Size

The size of the template directly affects computation time: a larger image chip takes more time. However, too small a template could contain insufficient image detail to allow accurate matching. A balance must be struck between these two competing criteria, and the balance is somewhat image-dependent. A suitable template for a suburban area with roads, fields, and other features could be much smaller than the template required for a vast region of uniform ground cover. Because of viewing geometry-induced differences in the Reference and Match images, the template from the Reference image is never identical to any area of the Match image. The template must be large enough to minimize this effect.

The IMAGINE StereoSAR DEM correlator parameters shown in Table 54 are for the library file Std_LP_HD.ssc. These parameters are appropriate for a RADARSAT Standard Beam mode (Std) stereopair with low parallax (LP) and a high density of detail (HD). The low parallax parameters are appropriate for images of low to moderate topography. The high density of detail parameters are appropriate for the suburban area discussed above.

(Figure 225 shows a four-level resolution pyramid: Level 1 is the full-resolution image, 512 × 512 pixels at 1:1; Level 2 is 256 × 256 pixels at a resolution of 1:2; Level 3 is 128 × 128 pixels at 1:4; and Level 4 is 64 × 64 pixels at 1:8. Matching starts on Level 4 and ends on Level 1.)

Table 54: STD_LP_HD Correlator

Level  Average  Size X  Size Y  Search -X  Search +X  Search -Y  Search +Y
1      1        20      20      2          2          1          1
2      2        60      60      3          4          1          1
3      3        90      90      8          20         2          3
4      4        120     120     10         30         2          5
5      5        180     180     20         60         2          8
6      6        220     220     25         70         3          10

Level  Step X  Step Y  Threshold  Value    Vector X  Vector Y  Applied
1      2       2       0.30000    0.00000  0.00000   0.00000   0
2      8       8       0.20000    0.00000  0.00000   0.00000   0
3      20      20      0.20000    0.00000  0.00000   0.00000   0
4      50      50      0.20000    0.00000  0.00000   0.00000   0
5      65      65      0.20000    0.00000  0.00000   0.00000   0
6      80      80      0.10000    0.00000  0.00000   0.00000   0


Note that the size of the template (Size X and Size Y) increases as you go up the resolution pyramid. This size is the effective size if it were on the bottom of the pyramid (that is, the full resolution image). Since they are actually on reduced resolution levels of the pyramid, they are functionally smaller. Thus, the 220 × 220 template on Level 6 is actually only 36 × 36 during the actual search. By stating the template size relative to the full resolution image, it is easy to display a box of approximate size on the input image to evaluate the amount of detail available to the correlator, and thus optimize the template sizes.

Search Area

Considerable computer time is expended in searching the Match image for the exact match point, so this search area should be minimized. (In addition, searching too large an area increases the possibility of a false match.) For this reason, the software first requires that the two images be coregistered, which gives the software a rough idea of where the match point might be. In stereo DEM generation, you are looking for the offset of a point in the Match image from its corresponding point in the Reference image (parallax). The minimum and maximum displacement is quantified in the Coregister step and is used to restrain the search area.



In Figure 224, the search area is defined by four parameters: -X, +X, -Y, and +Y. Most of the displacement in radar imagery is a function of the look angle (Figure 222) and is in the range or x-axis direction. Thus, the search area is always a rectangle emphasizing the x-axis. Because the total search area (and, therefore, the total time) is X times Y, it is important to keep these values to a minimum. Careful use at the Coregister step easily achieves this.

Step Size

Because a radar stereopair typically contains millions of pixels, it is not desirable to correlate every pixel at every level of the hierarchical pyramid, nor is this necessary to achieve an accurate result. The density at which the automatic correlator operates at each level in the resolution pyramid is determined by the step size (posting). The approach used is to keep the posting tighter (a smaller step size) as the correlator works down the resolution pyramid. For maximum accuracy, it is recommended to correlate every pixel at the full resolution level. This result is then compressed by the Degrade step to the desired DEM cell size.

Threshold

The degree of similarity between the Reference template and each possible Match region within the search area must be quantified by a mathematical metric. IMAGINE StereoSAR DEM uses the widely accepted normalized correlation coefficient. The range of possible values extends from -1 to +1, with +1 being an identical match. The algorithm uses the maximum value within the search area as the correlation point. The threshold in Table 54 is the minimum numerical value of the normalized correlation coefficient that is accepted as a correlation point. If no value within the entire search area attains this minimum, there is no match point for that level of the resolution pyramid. In this case, the initial estimated position, passed from the previous level of the resolution pyramid, is retained as the match point.
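The normalized correlation coefficient and its use with a threshold can be sketched as follows. This is a plain illustration of the metric, not the IMAGINE StereoSAR DEM correlator itself:

import numpy as np

def normalized_correlation(template, region):
    # Normalized correlation coefficient of two equally sized arrays (-1 to +1).
    t = template - template.mean()
    r = region - region.mean()
    denom = np.sqrt((t * t).sum() * (r * r).sum())
    if denom == 0:
        return 0.0  # featureless (flat) data cannot be matched
    return float((t * r).sum() / denom)

def best_match(template, search_area, threshold=0.2):
    # Slide the template over the search area; return (row, col, score) or None.
    th, tw = template.shape
    best = None
    for i in range(search_area.shape[0] - th + 1):
        for j in range(search_area.shape[1] - tw + 1):
            score = normalized_correlation(template, search_area[i:i + th, j:j + tw])
            if best is None or score > best[2]:
                best = (i, j, score)
    # Below the threshold, no match is declared for this pyramid level.
    return best if best is not None and best[2] >= threshold else None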

Correlator Library

To aid both the novice and the expert in rapidly selecting and refining an IMAGINE StereoSAR DEM correlator parameter file for a specific image pair, a library of tested parameter files has been assembled and is included with the software. These files are labeled using the following syntax: (RADARSAT Beam Mode)_(Magnitude of Parallax)_(Density of Detail).


RADARSAT Beam Mode

Correlator parameter files are available for both Standard (Std) and Fine (Fine) Beam modes. An essential difference between these two categories is that, with the Fine Beam mode, more pixels (that is, a larger template) are required to contain the same number of image features than with a Standard Beam image of the same area.

Magnitude of Parallax

The magnitude of the parallax is divided into high parallax (_HP) and low parallax (_LP) options. This determination is based upon the elevation changes and slopes within the images and is somewhat subjective. This parameter determines the size of the search area.

Density of Detail

The level of detail within each template is divided into high density (_HD) and low density (_LD) options. The density of detail for a suburban area with roads, fields, and other features would be much higher than that for a vast region of uniform ground cover. This parameter, in conjunction with beam mode, determines the required template sizes.

Quick Tests

It is often advantageous to quickly produce a low resolution DEM to verify that the automatic image correlator is optimal before correlating on every pixel to produce the final DEM. For this purpose, a Quick Test (_QT) correlator parameter file has been provided for each of the full resolution correlator parameter files in the .ssc library. These correlators process the image only through resolution pyramid Level 3. Processing time up to this level has been found to be acceptably fast, and testing has shown that if the image is successfully processed to this level, the correlator parameter file is probably appropriate. Evaluation of the parallax files produced by the Quick Test correlators, and subsequent modification of the correlator parameter file, is discussed in "IMAGINE StereoSAR DEM Application" in the IMAGINE Radar Mapping Suite Tour Guide.

Degrade

The second Degrade step compresses the final parallax image file (Level 1). While not strictly necessary, it is logical and has proven advantageous to reduce the pixel size at this time to approximately the intended posting of the final output DEM. Doing so decreases the variance (LE90) of the final DEM through averaging.


Height

This step combines the information from the above processing steps to derive surface elevations. The sensor models of the two input images are combined to derive the stereo intersection geometry, and the parallax value for each pixel is processed through this geometric relationship to derive a DEM in sensor (pixel) coordinates.

Comprehensive testing of the IMAGINE StereoSAR DEM module has indicated that, with reasonable data sets and careful work, the output DEM falls between DTED Level I and DTED Level II. This corresponds to between USGS 30-meter and USGS 90-meter DEMs. Thus, an output pixel size of 40 to 50 meters is consistent with this expected precision.

The final step is to resample and reproject this sensor DEM into the desired final output DEM. The entire ERDAS IMAGINE reprojection package is accessed within the IMAGINE StereoSAR DEM module.

IMAGINE InSAR Theory

Introduction

Terrain height extraction is one of the most important applications for SAR images. There are two basic techniques for extracting height from SAR images: stereo and interferometry. Stereo height extraction is much like the optical process and is discussed in IMAGINE StereoSAR DEM Theory on page 667. The subject of this section is SAR interferometry (InSAR).

Height extraction from InSAR takes advantage of one of the unique qualities of SAR images: distance information from the sensor to the ground is recorded for every pixel in the SAR image. Unlike optical and IR images, which contain only the intensity of the energy received by the sensor, SAR images contain distance information in the form of phase. This distance is simply the number of wavelengths of the source radiation from the sensor to a given point on the ground. SAR sensors can record this information because, unlike optical and IR sensors, their radiation source is active and coherent.

Unfortunately, this distance phase information in a single SAR image is mixed with phase noise from the ground and other effects. For this reason, it is impossible to extract just the distance phase from the total phase in a single SAR image. However, if two SAR images are available that cover the same area from slightly different vantage points, the phase of one can be subtracted from the phase of the other to produce the distance difference between the two SAR images (hence the term interferometry). This is possible because the other phase effects in the two images are approximately equal and cancel each other out when subtracted. What is left is a measure of the distance difference from one image to the other. From this difference and the orbit information, the height of every pixel can be calculated.


This chapter covers basic concepts and processing steps needed to extract terrain height from a pair of interferometric SAR images.

Electromagnetic Wave Background

In order to understand the SAR interferometric process, you must have a general understanding of electromagnetic waves and how they propagate. An electromagnetic wave is a changing electric field that produces a changing magnetic field, which produces a changing electric field, and so on. As this process repeats, energy is propagated through empty space at the speed of light. Figure 226 describes the type of electromagnetic wave that we are interested in. In this diagram, E indicates the electric field and H represents the magnetic field. The directions of E and H are mutually perpendicular everywhere. In a uniform plane wave, E and H lie in a plane and have the same value everywhere in that plane. A wave of this type, with both E and H transverse to the direction of propagation, is called a Transverse ElectroMagnetic (TEM) wave. If the electric field E has only a component in the y direction and the magnetic field H has only a component in the z direction, then the wave is said to be polarized in the y direction (vertically polarized). Polarization is generally defined as the direction of the electric field component, with the understanding that the magnetic field is perpendicular to it.

Figure 226: Electromagnetic Wave

The electromagnetic wave described above is the type that is sent and received by an SAR. The SAR, like most equipment that uses electromagnetic waves, is only sensitive to the electric field component of the wave; therefore, we restrict our discussion to it. The electric field of the wave has two main properties that we must understand in order to understand SAR and interferometry. These are the magnitude and phase of the wave. Figure 227 shows that the electric field varies with time.



Figure 227: Variation of Electric Field in Time

The figure shows how the wave phase varies at three different moments in time (t = 0, T/4, and T/2). In the figure, λ is the wavelength and T is the time required for the wave to travel one full wavelength. P is a point of constant phase and moves to the right as time progresses. The wave has a specific phase value at any given moment in time and at a specific point along its direction of travel. The wave can be expressed in the form of Equation 1.

Equation 1

E_y = \cos(\omega t + \beta x)

Where:

\omega = \frac{2\pi}{T} \qquad \beta = \frac{2\pi}{\lambda}

Equation 1 is expressed in Cartesian coordinates and assumes that the maximum magnitude of E_y is unity. It is more useful to express this equation in exponential form and include a maximum term, as in Equation 2.

Equation 2

E_y = E_0 \cdot e^{j(\omega t \pm \beta x)}


So far we have described the definition and behavior of the electromagnetic wave phase as a function of time and distance. It is also important to understand how the strength or magnitude behaves with time and distance from the transmitter. As the wave moves away from the transmitter, its total energy stays the same but is spread over a larger distance. This means that the energy at any one point (or its energy density) decreases with time and distance as shown in Figure 228.

Figure 228: Effect of Time and Distance on Energy

The magnitude of the wave decreases exponentially as the distance from the transmitter increases. Equation 2 represents the general form of the electromagnetic wave that we are interested in for SAR and InSAR applications. Later, we further simplify this expression given certain restrictions of an SAR sensor.

The Interferometric Model

Most uses of SAR imagery involve a display of the magnitude of the image reflectivity; the phase is discarded when the complex image is magnitude-detected. The phase of an image pixel representing a single scatterer is deterministic; however, the phase of an image pixel representing multiple scatterers (in the same resolution cell) is made up of both a deterministic and a nondeterministic, statistical part. For this reason, pixel phase in a single SAR image is generally not useful. However, with proper selection of an imaging geometry, two SAR images can be collected that have nearly identical nondeterministic phase components. These two SAR images can be subtracted, leaving only a useful deterministic phase difference between the two images.

Figure 229 provides the basic geometric model for an interferometric SAR system.



Figure 229: Geometric Model for an Interferometric SAR System

Where:
A1 = antenna 1
A2 = antenna 2
Bi = baseline
R1 = vector from antenna 1 to point of interest
R2 = vector from antenna 2 to point of interest
Ψ = depression angle between R1 and baseline vectors (= 90° - look angle)
Zac = antenna 1 height

A rigid baseline Bi separates two antennas, A1 and A2. This separation causes the two antennas to illuminate the scene at slightly different depression angles relative to the baseline. Here, Ψ is the nominal depression angle from A1 to the scatterer relative to the baseline. The model assumes that the platform travels at constant velocity in the X direction while the baseline remains parallel to the Y axis at a constant height Zac above the XY plane.

The electromagnetic wave of Equation 2 describes the signal data collected by each antenna. The two sets of signal data differ primarily because of the small differences in the data collection geometry. Complex images are generated from the signal data received by each antenna.

As stated earlier, the phase of an image pixel represents the phase of multiple scatterers in the same resolution cell and consists of both deterministic and unknown random components. A data collection for SAR interferometry adheres to special conditions that ensure that the random component of the phase is nearly identical in the two images. The deterministic phase in a single image is due to the two-way propagation path between the associated antenna and the target.



From our previously derived equation for an electromagnetic wave, and assuming the standard SAR configuration in which the perpendicular distance from the SAR to the target does not change, we can write the complex quantities representing a corresponding pair of image pixels, P1 and P2, from image 1 and image 2 as Equation 3 and Equation 4.

Equation 3

P_1 = a_1 \cdot e^{j(\theta_1 + \Phi_1)}

and

Equation 4

P_2 = a_2 \cdot e^{j(\theta_2 + \Phi_2)}

The quantities a_1 and a_2 represent the magnitudes of each image pixel. Generally, these magnitudes are approximately equal. The quantities θ_1 and θ_2 are the random components of pixel phase. They represent the vector summations of returns from all unresolved scatterers within the resolution cell and include contributions from receiver noise. With proper system design and collection geometry, they are nearly equal. The quantities Φ_1 and Φ_2 are the deterministic contributions to the phase of the image pixel. The desired function of the interferometer is to provide a measure of the phase difference, Φ_1 - Φ_2.

Next, we must relate the phase value to the distance vector from each antenna to the point of interest. This is done by recognizing that phase and the wavelength of the electromagnetic wave represent distance in number of wavelengths. Equation 5 relates phase to distance and wavelength.

Equation 5

\Phi_i = \frac{4\pi R_i}{\lambda}

Multiplication of one image by the complex conjugate of the second image on a pixel-by-pixel basis yields the phase difference between corresponding pixels in the two images. This complex product produces the interferogram I with:

Equation 6

I = P_1 \cdot P_2'


Where ’ denotes the complex conjugate operation. With θ1 and θ2 nearly equal and a1 and a2 nearly equal, the two images differ primarily in how the slight difference in collection depression angles affects Φ1 and Φ2. Ideally then, each pixel in the interferogram has the form:

Equation 7

I = a^2 \cdot e^{-j\frac{4\pi}{\lambda}(R_1 - R_2)} = a^2 \cdot e^{j\varphi_{12}}

using a_1 = a_2 = a. The amplitude a² of the interferogram corresponds to image intensity. The phase φ_12 of the interferogram becomes

Equation 8

\varphi_{12} = \frac{4\pi (R_2 - R_1)}{\lambda}

which is the quantity used to derive the depression angle to the point of interest relative to the baseline and, eventually, information about the scatterer height relative to the XY plane. Using the approximation in Equation 9 allows us to arrive at Equation 10, which relates the interferogram phase to the nominal depression angle.

Equation 9

R_2 - R_1 \approx B_i \cos(\psi)

Equation 10

\varphi_{12} \approx \frac{4\pi B_i \cos(\psi)}{\lambda}

In Equation 9 and Equation 10, ψ is the nominal depression angle from the center of the baseline to the scatterer relative to the baseline. No phase difference indicates that ψ = 90 degrees and the scatterer is in the plane through the center of and orthogonal to the baseline. The interferometric phase involves many radians of phase for scatterers at other depression angles, since the range difference R_2 - R_1 is many wavelengths. In practice, however, an interferometric system does not measure the total pixel phase difference. Rather, it measures only the phase difference that remains after subtracting all full 2π intervals present (modulo 2π).


To estimate the actual depression angle to a particular scatterer, the interferometer must measure the total pixel phase difference of many cycles. This information is available, for instance, by unwrapping the raw interferometric phase measurements beginning at a known scene location. Phase unwrapping is discussed in further detail in Phase Unwrapping on page 692. Because of the ambiguity imposed by the wrapped phase problem, it is necessary to seek the relative depression angle and relative height among scatterers within a scene, rather than their absolute depression angle and height. The differential of Equation 10 with respect to ψ provides this relative measure. This differential is

Equation 11

\Delta\varphi_{12} = -\left(\frac{4\pi B_i}{\lambda}\right)\sin(\psi)\,\Delta\psi

or

Equation 12

\Delta\psi = -\frac{\lambda}{4\pi B_i \sin(\psi)}\,\Delta\varphi_{12}

This result indicates that two pixels in the interferogram that differ in phase by Δφ_12 represent scatterers differing in depression angle by Δψ. Figure 230 shows the differential collection geometry.

Figure 230: Differential Collection Geometry



From this geometry, a change Δψ in depression angle is related to a change Δh in height (at the same range from mid-baseline) by Equation 13.

Equation 13

Z_{ac} - \Delta h = \frac{Z_{ac}\,\sin(\psi - \Delta\psi)}{\sin(\psi)}

Using a useful small-angle approximation to Equation 13,

\Delta h \approx Z_{ac}\cot(\psi)\,\Delta\psi

and substituting Equation 12 into it provides the result Equation 14 for Δh.

Equation 14

\Delta h = -\frac{\lambda Z_{ac}\cot(\psi)}{4\pi B_i \sin(\psi)}\,\Delta\varphi_{12}

Note that, because we are calculating differential height, we need at least one known height value in order to calculate absolute height. This translates into a need for at least one GCP in order to calculate absolute heights from the IMAGINE InSAR process.

In this section, we have derived the mathematical model needed to calculate height from interferometric phase information. To put this model into practice, several important processes must be performed: image coregistration, phase noise reduction, phase flattening, and phase unwrapping. These processes are discussed in the following sections.
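Equation 14 translates directly into code. The following minimal sketch assumes an unwrapped phase difference in radians; the numeric values at the end are purely illustrative, not from any real collection:

import numpy as np

def delta_height(delta_phase, wavelength, z_ac, baseline, psi):
    # Relative height from the unwrapped phase difference (Equation 14).
    # wavelength (m); z_ac = antenna height (m); baseline = Bi (m);
    # psi = nominal depression angle (radians); cot(psi) = 1 / tan(psi).
    return -(wavelength * z_ac / np.tan(psi)) / (4.0 * np.pi * baseline * np.sin(psi)) * delta_phase

# Hypothetical numbers for illustration only:
dh = delta_height(delta_phase=1.5, wavelength=0.056, z_ac=790e3,
                  baseline=150.0, psi=np.radians(45.0))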

Image Coregistration

In the discussion of the interferometric model in the last section, we assumed that the pixels containing the phase information for the scatterer of interest had been identified in each image. Aligning the images from the two antennas is the purpose of the image coregistration step. For interferometric systems that employ two antennas attached by a fixed boom and that collect data simultaneously, this coregistration is simple and deterministic: given the collection geometry, the coregistration can be calculated without referring to the data. For repeat pass systems, the coregistration is not quite so simple. Since the collection geometry cannot be precisely known, we must use the data itself to achieve image coregistration.



The coregistration model is based on a first-order polynomial:

match_sample = coefs_xx[0] + coefs_xx[1] × ref_sample + coefs_xx[2] × ref_line
match_line = coefs_xy[0] + coefs_xy[1] × ref_sample + coefs_xy[2] × ref_line

Where:

(ref_sample, ref_line) = Reference SAR image's coordinates in range and azimuth direction respectively.

(match_sample, match_line) = Corresponding match SAR image's coordinates in range and azimuth direction respectively after coregistration.

coefs_xx[3] = Array of length 3 representing polynomial coefficients, which describe the subpixel shift of the match image along Range (x-direction).

coefs_xy[3] = Array of length 3 representing polynomial coefficients, which describe the subpixel shift of the match image along Azimuth (y-direction).
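Applying the polynomial is straightforward. The sketch below is illustrative only, and the coefficient values are hypothetical:

import numpy as np

def to_match_coords(ref_sample, ref_line, coefs_xx, coefs_xy):
    # Map reference image coordinates to match image coordinates
    # using the first-order polynomial model above.
    match_sample = coefs_xx[0] + coefs_xx[1] * ref_sample + coefs_xx[2] * ref_line
    match_line = coefs_xy[0] + coefs_xy[1] * ref_sample + coefs_xy[2] * ref_line
    return match_sample, match_line

# Hypothetical coefficients: a shift of about (12.3, -4.1) pixels plus a
# slight scale difference between the two images.
coefs_xx = np.array([12.3, 1.0002, 0.0001])
coefs_xy = np.array([-4.1, -0.0001, 0.9998])
print(to_match_coords(1024.0, 2048.0, coefs_xx, coefs_xy))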

The coregistration process for repeat pass interferometric systems is generally broken into two steps: pixel and sub-pixel coregistration. Pixel coregistration involves using the magnitude (visible) part of each image to reduce the image misregistration to around a pixel. This means that, after pixel coregistration, the two images are coregistered to within one or two pixels of each other in both the range and azimuth directions.

Pixel coregistration is best accomplished using a standard window correlator to compare the magnitudes of the two images over a specified window. You usually specify a starting point in the two images, a window size, and a search range for the correlator to search over. The process identifies the pixel offset that produces the highest match between the two images, and therefore the best interferogram. One offset is enough to pixel coregister the two images.

Pixel coregistration, in general, produces a reasonable interferogram, but not the best possible. This is because of the nature of the phase function of each image. In order to form an image from the original signal data collected for each image, the phase functions in range and azimuth must be Nyquist sampled. Nyquist sampling simply means that the original continuous function can be reconstructed from the sampled data. This means that, while the magnitude resolution is limited to the pixel size, the phase function can be reconstructed to much higher resolutions. Because it is the phase functions that ultimately provide the height information, it is important to coregister them as closely as possible. This fine coregistration of the phase functions is the goal of the sub-pixel coregistration step.


Sub-pixel coregistration is achieved by starting at the pixel coregistration offset and searching over upsampled versions of the phase functions for the best possible interferogram. When this best interferogram is found, the sub-pixel offset has been identified. To accomplish this, we must construct higher resolution phase functions from the data. In general, this is done using the relation from signal processing theory shown in Equation 15.

Equation 15

i(r + \Delta r, a + \Delta a) = \zeta^{-1}\left[ I(u, v) \cdot e^{-j(u\Delta r + v\Delta a)} \right]

Where:
r = range independent variable
a = azimuth independent variable
i(r, a) = interferogram in the spatial domain
I(u, v) = interferogram in the frequency domain
Δr = sub-pixel range offset (for example, 0.25)
Δa = sub-pixel azimuth offset (for example, 0.75)
ζ⁻¹ = inverse Fourier transform

Applying this relation directly requires two-dimensional (2D) Fourier transforms and inverse Fourier transforms for each window tested. This is impractical given the computing requirements of Fourier transforms. Fortunately, we can achieve the upsampled phase functions we need using 2D raised cosine interpolation, which involves convolving a 2D raised cosine function of a given size over our search region. Equation 16 defines the raised cosine function for one dimension.

Equation 16

i(n) = \mathrm{sinc}(n) \cdot \frac{\cos(\alpha n \pi)}{1 - 4\alpha^2 n^2} = \frac{\sin(n\pi)}{n\pi} \cdot \frac{\cos(\alpha n \pi)}{1 - 4\alpha^2 n^2}

Where:

\alpha = 1 - \frac{B}{f_s} = 1 - \frac{1}{\chi}


in which:
B = system bandwidth in megahertz (for example, ERS: 15.5 MHz)
f_s = sampling frequency in megahertz (for example, ERS: 18.96 MHz)
χ = oversampling ratio, for example, 1.223 (possible values are in the range [1.0, 1.4] with a step size of 0.01)

Using raised cosine interpolation is a fast and efficient method of reconstructing parts of the phase functions that lie at sub-pixel locations.

In general, one sub-pixel offset is not enough to sub-pixel coregister two SAR images over the entire collection. Unlike pixel coregistration, sub-pixel coregistration depends on the pixel location, especially the range location. For this reason, it is important to generate a sub-pixel offset function that varies with range position. Two sub-pixel offsets, one at the near range and one at the far range, are enough to generate this function. This sub-pixel coregistration function provides the weights for the raised cosine interpolator needed to coregister one image to the other during the formation of the interferogram.
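Equation 16 can be turned into interpolation weights for a given sub-pixel offset. A minimal one-dimensional sketch (not the module's implementation), using the ERS parameters quoted above; the normalization at the end is a design choice, not part of Equation 16:

import numpy as np

def raised_cosine_kernel(offset, half_width, bandwidth, sampling_freq):
    # 1D raised cosine weights (Equation 16) for a sub-pixel offset in (0, 1).
    alpha = 1.0 - bandwidth / sampling_freq
    # Sample positions relative to the desired sub-pixel location.
    n = np.arange(-half_width, half_width + 1) - offset
    weights = np.sinc(n) * np.cos(alpha * n * np.pi) / (1.0 - 4.0 * alpha**2 * n**2)
    return weights / weights.sum()  # normalize so a constant signal is preserved

# B = 15.5 MHz and fs = 18.96 MHz (ERS); interpolate at a 0.25-pixel offset.
w = raised_cosine_kernel(offset=0.25, half_width=4, bandwidth=15.5, sampling_freq=18.96)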

Phase Noise Reduction

We mentioned in The Interferometric Model on page 682 that it is necessary to unwrap the phase of the interferogram before it can be used to calculate heights. From a practical and implementational point of view, the phase unwrapping step is the most difficult; we discuss it further in Phase Unwrapping on page 692. Before unwrapping, we can do a few things to the data that make the phase unwrapping easier. The first of these is to reduce the noise in the interferometric phase function. Phase noise is introduced by radar system noise, image misregistration, and the speckle effects caused by the complex nature of the imagery. This noise is reduced by applying a coherent average filter of a given window size over the entire interferogram. The filter is similar to the more familiar averaging filter, except that it operates on the complex function instead of just the magnitudes. The form of this filter is given in Equation 17.


Equation 17

i(r, a) = \frac{\displaystyle\sum_{i=0}^{N}\sum_{j=0}^{M}\left(\mathrm{Re}\left[i(r+i, a+j)\right] + j\,\mathrm{Im}\left[i(r+i, a+j)\right]\right)}{M \times N}

Figure 231 shows an interferometric phase image without filtering; Figure 232 shows the same phase image with filtering.

Figure 231: Interferometric Phase Image without Filtering

Figure 232: Interferometric Phase Image with Filtering

The sharp ridges that look like contour lines in Figure 231 and Figure 232 show where the phase function wraps. The goal of the phase unwrapping step is to make this one continuous function. This is discussed in greater detail in Phase Unwrapping on page 692. Notice how the filtered image of Figure 232 is much cleaner than that of Figure 231. This filtering makes the phase unwrapping much easier.

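Equation 17 is simply a box average applied to the complex values. A minimal sketch using SciPy (not the IMAGINE InSAR implementation):

import numpy as np
from scipy.ndimage import uniform_filter

def coherent_average(interferogram, size=3):
    # Average the real and imaginary parts over a size x size window,
    # which is the coherent filter of Equation 17.
    real = uniform_filter(interferogram.real, size=size)
    imag = uniform_filter(interferogram.imag, size=size)
    return real + 1j * imag

# Example on a synthetic complex interferogram:
igram = np.exp(1j * 2 * np.pi * np.random.rand(256, 256))
filtered = coherent_average(igram, size=5)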


Phase Flattening

The phase function of Figure 232 is fairly well behaved and is ready to be unwrapped. There are relatively few wrap lines, and they are distinct. Notice that in the areas where the elevation changes more rapidly (mountainous regions), the frequency of the wrapping increases. In general, the higher the wrapping frequency, the more difficult the area is to unwrap. Once the wrapping frequency exceeds the spatial sampling of the phase image, information is lost. An important technique for reducing this wrapping frequency is phase flattening.

Phase flattening involves removing the high frequency phase wrapping caused by the collection geometry. This high frequency wrapping is mainly in the range direction, and is due to the range separation of the antennas during the collection. Recall that it is this range separation that gives the phase difference and therefore the height information. The phase function of Figure 232 has already had phase flattening applied to it. Figure 233 shows this same phase function without phase flattening applied.

Figure 233: Interferometric Phase Image without Phase Flattening

Phase flattening is achieved by removing, from the actual phase function recorded in the interferogram, the phase function that would result if the imaging area were flat. It is possible, using the equations derived in The Interferometric Model on page 682, to calculate this flat Earth phase function and subtract it from the data phase function. It should be obvious that the phase function in Figure 232 is easier to unwrap than the phase function of Figure 233.
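The text computes the flat Earth phase from the collection geometry. When that geometry is not at hand, a common shortcut (shown here as a hedged sketch, not the module's method) is to estimate the dominant range fringe frequency with an FFT and remove it as a linear ramp:

import numpy as np

def flatten_phase(interferogram):
    # Estimate and remove the dominant range fringe (flat Earth ramp).
    rows, cols = interferogram.shape
    # Average the range spectrum over all azimuth lines; find the fringe peak.
    spectrum = np.abs(np.fft.fft(interferogram, axis=1)).mean(axis=0)
    spectrum[0] = 0.0  # ignore the DC bin
    freq = np.fft.fftfreq(cols)[np.argmax(spectrum)]
    # Multiplying by the conjugate ramp cancels the flat Earth component.
    ramp = np.exp(-2j * np.pi * freq * np.arange(cols))
    return interferogram * ramp[np.newaxis, :]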

Phase Unwrapping

We stated in The Interferometric Model on page 682 that we must unwrap the interferometric phase before we can use it to calculate height values. In Phase Noise Reduction on page 690 and Phase Flattening on page 692, we developed methods of making the phase unwrapping job easier. This section further defines the phase unwrapping problem and describes how to solve it.

As an electromagnetic wave travels through space, it cycles through its maximum and minimum phase values many times, as shown in Figure 234.


Figure 234: Electromagnetic Wave Traveling through Space

In Figure 234, the phase at P_1 is φ_1 = 3π/2 and the phase at P_2 is φ_2 = 11π/2. The phase difference between points P_1 and P_2 is given by Equation 18.

Equation 18

\varphi_2 - \varphi_1 = \frac{11\pi}{2} - \frac{3\pi}{2} = 4\pi

Recall from Equation 8 that finding the phase difference at two points is the key to extracting height from interferometric phase. Unfortunately, an interferometric system does not measure the total pixel phase difference. Rather, it measures only the phase difference that remains after subtracting all full 2π intervals present (modulo 2π). This results in the following value for the phase difference of Equation 18.

Equation 19

(\varphi_2 - \varphi_1)\,(\mathrm{mod}\,2\pi) = \frac{11\pi}{2}\,(\mathrm{mod}\,2\pi) - \frac{3\pi}{2}\,(\mathrm{mod}\,2\pi) = \frac{3\pi}{2} - \frac{3\pi}{2} = 0

Figure 235 further illustrates the difference between a one-dimensional continuous and wrapped phase function. Notice that when the phase value of the continuous function reaches 2π, the wrapped phase function returns to 0 and continues from there. The job of the phase unwrapping is to take a wrapped phase function and reconstruct the continuous function from it.



Figure 235: One-dimensional Continuous vs. Wrapped Phase Function

There has been much research on, and many different methods derived for, unwrapping the 2D phase function of an interferometric SAR phase image. A detailed discussion of these methods is beyond the scope of this chapter. The most successful approaches employ algorithms that unwrap the easy (good) areas first and then move on to more difficult areas. Good areas are regions in which the phase function is relatively flat and the correlation is high. This ordering prevents errors in the tough areas from corrupting good regions. A one-dimensional sketch of the core unwrapping idea appears below. Figure 236 shows a sequence of unwrapped phase images for the phase function of Figure 232.
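In one dimension the idea reduces to re-integrating wrapped phase differences; numpy.unwrap implements the same logic. A minimal sketch:

import numpy as np

def unwrap_1d(wrapped):
    # Wherever consecutive samples jump by more than pi, a full 2*pi cycle
    # has been lost; wrap each difference into (-pi, pi], then re-integrate.
    diffs = np.diff(wrapped)
    corrected = (diffs + np.pi) % (2 * np.pi) - np.pi
    return np.concatenate([[wrapped[0]], wrapped[0] + np.cumsum(corrected)])

# A ramp reaching 10*pi wraps several times; unwrapping restores it.
true_phase = np.linspace(0, 10 * np.pi, 500)
wrapped = np.mod(true_phase, 2 * np.pi)
recovered = unwrap_1d(wrapped)
assert np.allclose(recovered, true_phase)

The 2D problem is much harder because the order in which pixels are unwrapped matters; the quality-guided ordering described above is one way to keep errors from propagating.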



Figure 236: Sequence of Unwrapped Phase Images (10% through 100% unwrapped, in 10% increments)

Figure 237 shows the wrapped phase compared to the unwrapped phase image.



Figure 237: Wrapped vs. Unwrapped Phase Images

The unwrapped phase values can now be combined with the collection position information to calculate height values for each pixel in the interferogram.

Conclusions

SAR interferometry uses the unique properties of SAR images to extract height information from SAR interferometric image pairs. Given a good image pair and good information about the collection geometry, IMAGINE InSAR can produce very high quality results. The best IMAGINE InSAR results are acquired with dual antenna systems that collect both images at once. It is also possible to do InSAR processing with repeat pass systems. These systems have the advantage of requiring only one antenna, and therefore are cheaper to build. However, the quality of repeat pass InSAR is very sensitive to the collection conditions because the images are not collected at the same time. Weather and terrain changes that occur between the collection of the two images can greatly degrade the coherence of the image pair. This reduction in coherence makes each part of the IMAGINE InSAR process more difficult.



Math Topics

Introduction

This appendix is a cursory overview of some of the basic mathematical concepts that are applicable to image processing. Its purpose is to educate the novice reader, and to put these formulas and concepts into the context of image processing and remote sensing applications.

Summation

A commonly used notation throughout this and other discussions is the Sigma (Σ), used to denote a summation of values. For example, the notation

\sum_{i=1}^{10} i

is the sum of all values of i, ranging from 1 to 10, which equals:

1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 = 55

Similarly, the value i may be a subscript, which denotes an ordered set of values. For example,

\sum_{i=1}^{4} Q_i = 3 + 5 + 7 + 2 = 17

Where:
Q_1 = 3, Q_2 = 5, Q_3 = 7, Q_4 = 2


Statistics

Histogram

In ERDAS IMAGINE image data files, each data file value (defined by its row, column, and band) is a variable. ERDAS IMAGINE supports the following data types:

• 1, 2, and 4-bit

• 8, 16, and 32-bit signed

• 8, 16, and 32-bit unsigned

• 32 and 64-bit floating point

• 64 and 128-bit complex floating point

Distribution, as used in statistics, is the set of frequencies with which an event occurs, or that a variable has a particular value. A histogram is a graph of data frequency or distribution. For a single band of data, the horizontal axis of a histogram is the range of all possible data file values. The vertical axis is the number of pixels that have each data value.

Figure 238: Histogram

Figure 238 shows the histogram for a band of data in which Y pixels have data value X. For example, in this graph, 300 pixels (y) have the data file value of 100 (x).

Bin Functions

Bins are used to group ranges of data values together for better manageability. Histograms and other descriptor columns for 1, 2, 4, and 8-bit data are easy to handle since they contain a maximum of 256 rows. However, to have a row in a descriptor table for every possible data value in floating point, complex, and 32-bit integer data would yield an enormous amount of information. Therefore, the bin function is provided to serve as a data reduction tool.



Example of a Bin Function

Suppose you have a floating point data layer with values ranging from 0.0 to 1.0. You could set up a descriptor table of 100 rows, with each row or bin corresponding to a data range of 0.01 in the layer. The bins would look like the following:

Bin Number   Data Range
0            X < 0.01
1            0.01 ≤ X < 0.02
2            0.02 ≤ X < 0.03
.
.
.
98           0.98 ≤ X < 0.99
99           0.99 ≤ X

Then, for example, row 23 of the histogram table would contain the number of pixels in the layer whose value fell between 0.23 and 0.24.

Types of Bin Functions

The bin function establishes the relationship between data values and rows in the descriptor table. There are four types of bin functions used in ERDAS IMAGINE image layers:

• DIRECT—one bin per integer value. Used by default for 1, 2, 4, and 8-bit integer data, but may be used for other data types as well. The direct bin function may include an offset for negative data or data in which the minimum value is greater than zero.

For example, a direct bin with 900 bins and an offset of -601 would look like the following:

Bin Number   Data Range
0            X ≤ -600.5
1            -600.5 < X ≤ -599.5
.
.
.
599          -2.5 < X ≤ -1.5
600          -1.5 < X ≤ -0.5
601          -0.5 < X < 0.5
602          0.5 ≤ X < 1.5
603          1.5 ≤ X < 2.5
.
.
.
898          296.5 ≤ X < 297.5
899          297.5 ≤ X


• LINEAR—establishes a linear mapping between data values and bin numbers, as in our first example, mapping the data range 0.0 to 1.0 to bin numbers 0 to 99. The bin number is computed by (see the sketch after this list):

bin = numbins * (x - min) / (max - min)
if (bin < 0) bin = 0
if (bin >= numbins) bin = numbins - 1

Where:
bin = resulting bin number
numbins = number of bins
x = data value
min = lower limit (usually minimum data value)
max = upper limit (usually maximum data value)

• LOG—establishes a logarithmic mapping between data values and bin numbers. The bin number is computed by:

bin = numbins * (ln (1.0 + ((x - min)/(max - min))) / ln (2.0))
if (bin < 0) bin = 0
if (bin >= numbins) bin = numbins - 1

• EXPLICIT—explicitly defines mapping between each bin number and data range.
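The LINEAR and LOG bin functions above translate directly into code. A minimal sketch (illustrative, not the ERDAS IMAGINE implementation):

import numpy as np

def linear_bin(x, numbins, lo, hi):
    # LINEAR bin function: linear mapping of a data value to a bin number.
    b = int(numbins * (x - lo) / (hi - lo))
    return min(max(b, 0), numbins - 1)

def log_bin(x, numbins, lo, hi):
    # LOG bin function: logarithmic mapping of a data value to a bin number.
    b = int(numbins * (np.log(1.0 + (x - lo) / (hi - lo)) / np.log(2.0)))
    return min(max(b, 0), numbins - 1)

# The floating point example above: 100 bins over [0.0, 1.0].
assert linear_bin(0.235, 100, 0.0, 1.0) == 23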

Mean

The mean (μ) of a set of values is its statistical average, such that, if Q_i represents a set of k values:

\mu = \frac{Q_1 + Q_2 + Q_3 + \ldots + Q_k}{k}



or

\mu = \sum_{i=1}^{k} \frac{Q_i}{k}

The mean of data with a normal distribution is the value at the peak of the curve—the point where the distribution balances.

Normal Distribution

Our general ideas about an average, whether it be average age, average test score, or the average amount of spectral reflectance from oak trees in the spring, are made visible in the graph of a normal distribution, or bell curve.

Figure 239: Normal Distribution

Average usually refers to a central value on a bell curve, although all distributions have averages. In a normal distribution, most values are at or near the middle, as shown by the peak of the bell curve. Values that are more extreme are more rare, as shown by the tails at the ends of the curve. The Normal Distributions are a family of bell shaped distributions that turn up frequently under certain special circumstances. For example, a normal distribution would occur if you were to compare the bands in a desert image. The bands would be very similar, but would vary slightly. Each Normal Distribution uses just two parameters, σ and μ, to control the shape and location of the resulting probability graph through the equation:

f(x) = \frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{(x - \mu)^2}{2\sigma^2}}




Where:
x = the quantity whose distribution is being approximated
π and e = famous mathematical constants

The parameter μ controls how much the bell is shifted horizontally so that its average matches the average of the distribution of x, while σ adjusts the width of the bell to try to encompass the spread of the given distribution. In choosing to approximate a distribution by the nearest of the Normal Distributions, we describe the many values in the bin function of its distribution with just two parameters. It is a significant simplification that can greatly ease the computational burden of many operations, but like all simplifications, it reduces the accuracy of the conclusions we can draw.

The normal distribution is the most widely encountered model for probability. Many natural phenomena can be predicted or estimated according to the law of averages that is implied by the bell curve (Larsen and Marx, 1981). A normal distribution in remotely sensed data is meaningful—it is a sign that some characteristic of an object can be measured by the average amount of electromagnetic radiation that the object reflects. This relationship between the data and a physical scene or object is what makes image processing applicable to various types of land analysis. The mean and standard deviation are often used by computer programs that process and analyze image data.

Variance

The mean of a set of values locates only the average value—it does not adequately describe the set of values by itself. It is helpful to know how much the data varies from its mean. However, a simple average of the differences between each value and the mean equals zero in every case, by definition of the mean. Therefore, the squares of these differences are averaged so that a meaningful number results (Larsen and Marx, 1981).

In theory, the variance is calculated as follows:

\mathrm{Var}\,Q = E\left\langle (Q - \mu_Q)^2 \right\rangle

Where:
E = expected value (weighted average)
² = squared to make the distance a positive number



In practice, the use of this equation for variance does not usually reflect the exact nature of the values that are used in the equation. These values are usually only samples of a large data set, and therefore, the mean and variance of the entire data set are estimated, not known. The equation used in practice follows. It is called the minimum variance unbiased estimator of the variance, or the sample variance (notated σ²):

\sigma_Q^2 \approx \frac{\sum_{i=1}^{k} (Q_i - \mu_Q)^2}{k - 1}

Where:
i = a particular pixel
k = the number of pixels (the higher the number, the better the approximation)

The theory behind this equation is discussed in chapters on point estimates and sufficient statistics, and covered in most statistics texts.

NOTE: The variance is expressed in units squared (e.g., square inches, square data values, etc.), so it may result in a number that is much higher than any of the original values.

Standard Deviation

Since the variance is expressed in units squared, a more useful value is the square root of the variance, which is expressed in units and can be related back to the original values (Larsen and Marx, 1981). The square root of the variance is the standard deviation.

Based on the equation for sample variance (σ²), the sample standard deviation (σ_Q) for a set of values Q is computed as follows:

\sigma_Q = \sqrt{\frac{\sum_{i=1}^{k} (Q_i - \mu_Q)^2}{k - 1}}

In any distribution:

• approximately 68% of the values are within one standard deviation of μ, that is, between μ-σ and μ+σ

• more than 1/2 of the values are between μ-2σ and μ+2σ

• more than 3/4 of the values are between μ-3σ and μ+3σ



Source: Mendenhall and Scheaffer, 1973

An example of a simple application of these rules is seen in the ERDAS IMAGINE Viewer. When 8-bit data are displayed in the Viewer, ERDAS IMAGINE can apply, via the General Contrast tool, a 2 standard deviation stretch that remaps all data file values between μ-2σ and μ+2σ (more than 1/2 of the data) to the range of possible brightness values on the display device. Standard deviations are used because the lowest and highest data file values may be much farther from the mean than 2σ.

For more information on contrast stretch, see "Enhancement" on page 455.

Parameters

As described above, the standard deviation describes how a fixed percentage of the data varies from the mean. The mean and standard deviation are known as parameters, which are sufficient to describe a normal curve (Johnston, 1980). When the mean and standard deviation are known, they can be used to estimate other calculations about the data. In computer programs, it is much more convenient to estimate calculations with a mean and standard deviation than it is to repeatedly sample the actual data. Algorithms that use parameters are parametric. The closer that the distribution of the data resembles a normal curve, the more accurate the parametric estimates of the data are. ERDAS IMAGINE classification algorithms that use signature files (.sig) are parametric, since the mean and standard deviation of each sample or cluster are stored in the file to represent the distribution of the values.

Covariance

In many image processing procedures, the relationships between two bands of data are important. Covariance measures the tendencies of data file values in the same pixel, but in different bands, to vary with each other, in relation to the means of their respective bands. These bands must be linear.



Theoretically speaking, whereas variance is the average square of the differences between values and their mean in one band, covariance is the average product of the differences of corresponding values in two different bands from their respective means. Compare the following equation for covariance to the previous one for variance:

\mathrm{Cov}\,QR = E\left\langle (Q - \mu_Q)(R - \mu_R) \right\rangle

Where:
Q and R = data file values in two bands
E = expected value

In practice, the sample covariance is computed with this equation:

C_{QR} \approx \frac{\sum_{i=1}^{k} (Q_i - \mu_Q)(R_i - \mu_R)}{k}

Where:
i = a particular pixel
k = the number of pixels

Like variance, covariance is expressed in units squared.

Covariance Matrix

The covariance matrix is an n × n matrix that contains all of the variances and covariances within n bands of data. Below is an example of a covariance matrix for four bands of data:

         band A   band B   band C   band D
band A   VarA     CovBA    CovCA    CovDA
band B   CovAB    VarB     CovCB    CovDB
band C   CovAC    CovBC    VarC     CovDC
band D   CovAD    CovBD    CovCD    VarD

The covariance matrix is symmetrical—for example, CovAB = CovBA.

The covariance of one band of data with itself is the variance of that band:

C_{QQ} = \frac{\sum_{i=1}^{k} (Q_i - \mu_Q)(Q_i - \mu_Q)}{k - 1} = \frac{\sum_{i=1}^{k} (Q_i - \mu_Q)^2}{k - 1}



Therefore, the diagonal of the covariance matrix consists of the band variances. The covariance matrix is an organized format for storing variance and covariance information on a computer system, so that it needs to be computed only once. Also, the matrix itself can be used in matrix equations, as in principal components analysis.
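Computing a covariance matrix for an n-band image is a one-liner with NumPy. This sketch flattens each band into a row of observations:

import numpy as np

def covariance_matrix(bands):
    # bands: array of shape (n, rows, cols). Returns the n x n sample
    # covariance matrix; the diagonal holds the band variances.
    n = bands.shape[0]
    flat = bands.reshape(n, -1).astype(np.float64)
    return np.cov(flat)  # np.cov uses the k - 1 (sample) divisor

# Four synthetic bands, as in the example matrix above:
C = covariance_matrix(np.random.rand(4, 128, 128))
assert np.allclose(C, C.T)  # symmetrical: CovAB = CovBA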

See Matrix Algebra on page 712 for more information on matrices.
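As a sketch of these relationships (using NumPy arrays rather than ERDAS signature files; note that np.cov uses the k - 1 divisor), the covariance matrix of an n-band image can be computed once and reused, its diagonal holds the band variances, and it is symmetric:

import numpy as np

# Synthetic image: 4 bands, 100 x 100 pixels each, flattened to shape (4, k)
bands = np.random.rand(4, 100 * 100)

cov = np.cov(bands)                            # 4 x 4 covariance matrix
variances = bands.var(axis=1, ddof=1)          # per-band sample variances

assert np.allclose(np.diag(cov), variances)    # diagonal = band variances
assert np.allclose(cov, cov.T)                 # Cov_AB = Cov_BA (symmetry)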

Dimensionality of Data

Spectral Dimensionality

Spectral dimensionality is determined by the number of sets of values being used in a process. In image processing, each band of data is a set of values. An image with four bands of data is said to be four-dimensional (Jensen, 1996).

NOTE: The letter n is used consistently in this documentation to stand for the number of dimensions (bands) of image data.

Measurement Vector

The measurement vector of a pixel is the set of data file values for one pixel in all n bands. Although image data files are stored band-by-band, it is often necessary to extract the measurement vectors for individual pixels.

Figure 240: Measurement Vector
(The figure shows one pixel of a three-band image, n = 3; its data file values in Bands 1, 2, and 3 are V1, V2, and V3.)



According to Figure 240, if i = a particular band and Vi = the data file value of the pixel in band i, then the measurement vector for this pixel is:

V = \begin{bmatrix} V_1 \\ V_2 \\ V_3 \end{bmatrix}

See Matrix Algebra on page 712 for an explanation of vectors.

Mean Vector

When the measurement vectors of several pixels are analyzed, a mean vector is often calculated. This is the vector of the means of the data file values in each band. It has n elements.

Figure 241: Mean Vector
(The figure shows a training sample in Bands 1, 2, and 3; the mean of the sample's values in band i is μi.)

According to Figure 241, if i = a particular band and μi = the mean of the data file values of the pixels being studied in band i, then the mean vector for this training sample is:

\mu = \begin{bmatrix} \mu_1 \\ \mu_2 \\ \mu_3 \end{bmatrix}
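A minimal sketch of both ideas, assuming the image is held as a (bands, rows, columns) NumPy array (the variable names are illustrative):

import numpy as np

n_bands, rows, cols = 3, 100, 100
image = np.random.randint(0, 256, size=(n_bands, rows, cols))

# Measurement vector: the data file values of one pixel across all n bands
r, c = 10, 42
measurement_vector = image[:, r, c]        # shape (3,): [V1, V2, V3]

# Mean vector of a training sample: the per-band means over the sample's pixels
sample = image[:, 0:20, 0:20].reshape(n_bands, -1)
mean_vector = sample.mean(axis=1)          # shape (3,): [mu1, mu2, mu3]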


Feature Space

Many algorithms in image processing compare the values of two or more bands of data. The programs that perform these functions abstractly plot the data file values of the bands being studied against each other. An example of such a plot in two dimensions (two bands) is illustrated in Figure 242.

Figure 242: Two Band Plot
(The figure plots Band A data file values on the horizontal axis and Band B data file values on the vertical axis, each ranging from 0 to 255; the example pixel is plotted at (180, 85).)

NOTE: If the image is n-dimensional, the plot does not always have to be n-dimensional.

In Figure 242, the pixel that is plotted has a measurement vector of:

V = \begin{bmatrix} 180 \\ 85 \end{bmatrix}

The graph above implies physical dimensions for the sake of illustration. Actually, these dimensions are based on spectral characteristics represented by the digital image data. As opposed to physical space, the pixel above is plotted in feature space. Feature space is an abstract space that is defined by spectral units, such as an amount of electromagnetic radiation.

Feature Space Images

Several techniques for the processing of multiband data make use of a two-dimensional histogram, or feature space image. This is simply a graph of the data file values of one band of data against the values of another band.

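As an illustration (not the ERDAS feature space tool), a feature space image can be built with a two-dimensional histogram that bins two 8-bit bands into a 256 × 256 grid of pixel counts:

import numpy as np

# Synthetic 8-bit bands, flattened to one value per pixel
band_a = np.random.normal(120, 30, 100000).clip(0, 255)
band_b = np.random.normal(90, 20, 100000).clip(0, 255)

# Rows index Band A values, columns index Band B values; each cell counts
# how many pixels have that (Band A, Band B) combination
feature_space, _, _ = np.histogram2d(band_a, band_b, bins=256,
                                     range=[[0, 256], [0, 256]])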


Figure 243: Two-band Scatterplot
(The figure shows a scatterplot of Band A data file values against Band B data file values, both axes ranging from 0 to 255, with the points forming an ellipse.)

The scatterplot pictured in Figure 243 can be described as a simplification of a two-dimensional histogram, where the data file values of one band have been plotted against the data file values of another band. This figure shows that when the values in the bands being plotted have jointly normal distributions, the feature space forms an ellipse. This ellipse is used in several algorithms—specifically, for evaluating training samples for image classification. Also, two-dimensional feature space images with ellipses are helpful to illustrate principal components analysis.

See "Enhancement" on page 455 for more information on principal components analysis, "Classification" on page 545 for information on training sample evaluation, and "Rectification" on page 251 for more information on orders of transformation.

n-Dimensional Histogram

If two-dimensional data can be plotted on a two-dimensional histogram, as above, then n-dimensional data can, abstractly, be plotted on an n-dimensional histogram, defining n-dimensional spectral space. Each point on an n-dimensional scatterplot has n coordinates in that spectral space—a coordinate for each axis. The n coordinates are the elements of the measurement vector for the corresponding pixel. In some image enhancement algorithms (most notably, principal components), the points in the scatterplot are replotted, or the spectral space is redefined in such a way that the coordinates are changed, thus transforming the measurement vector of the pixel.



When all data sets (bands) have jointly normal distributions, the scatterplot forms a hyperellipsoid. The prefix “hyper” refers to an abstract geometrical shape, which is defined in more than three dimensions.

NOTE: In this documentation, 2-dimensional examples are used to illustrate concepts that apply to any number of dimensions of data. The 2-dimensional examples are best suited for creating illustrations to be printed.
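As an illustration of replotting points in spectral space, the sketch below applies a linear transform (an arbitrary rotation, not one derived from the data as principal components analysis would do) to two-band measurement vectors, giving each pixel new coordinates:

import numpy as np

# 1,000 two-band measurement vectors with jointly normal, correlated values
pixels = np.random.multivariate_normal([120, 90], [[400, 300], [300, 400]], size=1000)

theta = np.deg2rad(45)
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# Each measurement vector gets new coordinates in the redefined spectral space
transformed = pixels @ rotation.T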

Spectral Distance

Euclidean spectral distance is distance in n-dimensional spectral space. It is a number that allows two measurement vectors to be compared for similarity. The spectral distance between two pixels can be calculated as follows:

D = \sqrt{\sum_{i=1}^{n} (d_i - e_i)^2}

Where:
D = spectral distance
n = number of bands (dimensions)
i = a particular band
d_i = data file value of pixel d in band i
e_i = data file value of pixel e in band i

This is the equation for Euclidean distance—in two dimensions (when n = 2), it can be simplified to the Pythagorean Theorem (c^2 = a^2 + b^2), or in this case:

D^2 = (d_i - e_i)^2 + (d_j - e_j)^2
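A direct transcription of the Euclidean spectral distance equation, assuming two measurement vectors of equal length n:

import numpy as np

def spectral_distance(d, e):
    # Euclidean distance between two measurement vectors in n-band spectral space
    d = np.asarray(d, dtype=np.float64)
    e = np.asarray(e, dtype=np.float64)
    return np.sqrt(np.sum((d - e) ** 2))

# Two-band case reduces to the Pythagorean form: D^2 = (d1 - e1)^2 + (d2 - e2)^2
print(spectral_distance([180, 85], [60, 40]))   # sqrt(120^2 + 45^2)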

Polynomials

A polynomial is a mathematical expression consisting of variables and coefficients. A coefficient is a constant that is multiplied by a variable in the expression.

Order

The variables in polynomial expressions can be raised to exponents. The highest exponent in a polynomial determines the order of the polynomial. A polynomial with one variable, x, takes this form:

A + Bx + Cx^2 + Dx^3 + ... + Ωx^t



Where:
A, B, C, D ... Ω = coefficients
t = the order of the polynomial

NOTE: If one or all of A, B, C, D ... are 0, then the nature, but not the complexity, of the transformation is changed. Mathematically, Ω cannot be 0.

A polynomial with two variables, x and y, takes this form:

x_o = \sum_{i=0}^{t} \left( \sum_{j=0}^{i} a_k \cdot x^{i-j} \cdot y^{j} \right)

y_o = \sum_{i=0}^{t} \left( \sum_{j=0}^{i} b_k \cdot x^{i-j} \cdot y^{j} \right)

Where:
t = the order of the polynomial
a_k and b_k = coefficients
the subscript k is determined by:

k = \frac{i \cdot (i + 1)}{2} + j

A numerical example of 3rd-order transformation equations for x and y is:

xo = 5 + 4x - 6y + 10x^2 - 5xy + 1y^2 + 3x^3 + 7x^2y - 11xy^2 + 4y^3
yo = 13 + 12x + 4y + 1x^2 - 21xy + 1y^2 - 1x^3 + 2x^2y + 5xy^2 + 12y^3

Polynomial equations are used in image rectification to transform the coordinates of an input file to the coordinates of another system. The order of the polynomial used in this process is the order of transformation.
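The double sum can be transcribed directly. This sketch (the function name is illustrative) evaluates the 3rd-order x example above, with the coefficients a_k ordered by k = i(i + 1)/2 + j:

def eval_poly2(coeffs, x, y, t):
    # Evaluate the sum over i = 0..t and j = 0..i of coeffs[k] * x**(i-j) * y**j,
    # where k = i*(i+1)//2 + j indexes the coefficients consecutively
    total = 0.0
    for i in range(t + 1):
        for j in range(i + 1):
            k = i * (i + 1) // 2 + j
            total += coeffs[k] * x ** (i - j) * y ** j
    return total

# Coefficients of the 3rd-order xo example above, in k order
a = [5, 4, -6, 10, -5, 1, 3, 7, -11, 4]
print(eval_poly2(a, 2.0, 1.0, 3))   # evaluates xo at (x, y) = (2, 1)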

Transformation Matrix

In the case of first order image rectification, the variables in the polynomials (x and y) are the source coordinates of a GCP. The coefficients are computed from the GCPs and stored as a transformation matrix.
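As a hedged sketch of how such coefficients could be estimated (ERDAS IMAGINE computes its transformation matrices internally; the GCP values here are hypothetical), a first-order fit needs at least three GCPs, and a least-squares solution gives the six coefficients:

import numpy as np

# Hypothetical GCPs: source coordinates and corresponding reference coordinates
src = np.array([[10.0, 20.0], [200.0, 35.0], [50.0, 180.0], [220.0, 210.0]])
ref = np.array([[1005.2, 2010.1], [1190.7, 2018.9], [1042.3, 2175.4], [1208.8, 2203.6]])

# Design matrix rows are [1, x, y]; solve ref = A @ coeffs in the least-squares sense
A = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])
coeffs, *_ = np.linalg.lstsq(A, ref, rcond=None)

# coeffs is 3 x 2: column 0 holds a1, a2, a3 (for xo); column 1 holds b1, b2, b3 (for yo)
transformation_matrix = coeffs.T   # the 2 x 3 matrix C described below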



A detailed discussion of GCPs, orders of transformation, and transformation matrices is included in "Rectification" on page 251.

Matrix Algebra

A matrix is a set of numbers or values arranged in a rectangular array. If a matrix has i rows and j columns, it is said to be an i by j matrix. A one-dimensional matrix, having one column (i by 1), is one of many kinds of vectors. For example, the measurement vector of a pixel is an n-element vector of the data file values of the pixel, where n is equal to the number of bands.

See "Enhancement" on page 455 for information on eigenvectors.

Matrix Notation

Matrices and vectors are usually designated with a single capital letter, such as M. For example:

M = \begin{bmatrix} 2.2 & 4.6 \\ 6.1 & 8.3 \\ 10.0 & 12.4 \end{bmatrix}

One value in the matrix M would be specified by its position, which is its row and column (in that order) in the matrix. One element of the array (one value) is designated with a lowercase letter and its position:

m_{3,2} = 12.4

With column vectors, it is simpler to use only one number to designate the position:


G = \begin{bmatrix} 2.8 \\ 6.5 \\ 10.1 \end{bmatrix}

G_2 = 6.5

Matrix Multiplication

A simple example of the application of matrix multiplication is a 1st-order transformation matrix. The coefficients are stored in a 2 × 3 matrix:

C = \begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{bmatrix}

Then:

x_o = a_1 + a_2 x_i + a_3 y_i
y_o = b_1 + b_2 x_i + b_3 y_i

Where:
x_i and y_i = source coordinates
x_o and y_o = rectified coordinates

The coefficients of the transformation matrix are as above. The above could be expressed by a matrix equation, R = CS:

\begin{bmatrix} x_o \\ y_o \end{bmatrix} = \begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{bmatrix} \begin{bmatrix} 1 \\ x_i \\ y_i \end{bmatrix}

Where:
S = the matrix of the source coordinates (3 by 1)
C = the transformation matrix (2 by 3)
R = the matrix of rectified coordinates (2 by 1)
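Continuing the sketch from above (the coefficient values are hypothetical), applying the 2 × 3 matrix C to a source coordinate follows R = CS directly:

import numpy as np

C = np.array([[1000.0, 0.95, 0.02],    # a1, a2, a3
              [2000.0, -0.01, 0.98]])  # b1, b2, b3

xi, yi = 180.0, 85.0
S = np.array([1.0, xi, yi])            # source coordinate vector (3 by 1)
xo, yo = C @ S                         # rectified coordinates (2 by 1)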



The sizes of the matrices are shown above to demonstrate a rule of matrix multiplication. To multiply two matrices, the first matrix must have the same number of columns as the second matrix has rows. For example, if the first matrix is a by b, and the second matrix is m by n, then b must equal m, and the product matrix has the size a by n. The formula for multiplying two matrices is:

(fg)_{ij} = \sum_{k=1}^{m} f_{ik} g_{kj}

for every i from 1 to a
for every j from 1 to n

Where:
i = a row in the product matrix
j = a column in the product matrix
f = an (a by b) matrix
g = an (m by n) matrix (b must equal m)

fg is an a by n matrix.
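A literal transcription of this rule (pure Python, written for clarity rather than speed; the formula's 1-based indices become 0-based here):

def matmul(f, g):
    # Multiply an (a by b) matrix by an (m by n) matrix; requires b == m
    a, b = len(f), len(f[0])
    m, n = len(g), len(g[0])
    if b != m:
        raise ValueError("columns of f must equal rows of g")
    return [[sum(f[i][k] * g[k][j] for k in range(m)) for j in range(n)]
            for i in range(a)]

# A 2 x 3 matrix times a 3 x 1 matrix gives a 2 x 1 matrix, as in the
# transformation example above
print(matmul([[1, 2, 3], [4, 5, 6]], [[1], [2], [3]]))   # [[14], [32]]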

Transposition

The transposition of a matrix is derived by interchanging its rows and columns. Transposition is denoted by T, as in the following example (Cullen, 1972):

G = \begin{bmatrix} 2 & 3 \\ 6 & 4 \\ 10 & 12 \end{bmatrix}

G^T = \begin{bmatrix} 2 & 6 & 10 \\ 3 & 4 & 12 \end{bmatrix}


For more information on transposition, see Computing Principal Components on page 495 and Classification Decision Rules on page 573.



Glossary

Numerics

2D—two-dimensional.

3D—three-dimensional.

A

absorption spectra—the electromagnetic radiation wavelengths that are absorbed by specific materials of interest.

abstract symbol—an annotation symbol that has a geometric shape, such as a circle, square, or triangle. These symbols often represent amounts that vary from place to place, such as population density, yearly rainfall, etc.

a priori—already or previously known.

accuracy assessment—the comparison of a classification to geographical data that is assumed to be true. Usually, the assumed-true data are derived from ground truthing.

accuracy report—in classification accuracy assessment, a list of the percentages of accuracy, which is computed from the error matrix.

ACS—see attitude control system.

active sensors—imaging sensors, such as radar systems, that both emit and receive radiation.

ADRG—see ARC Digitized Raster Graphic.

ADRI—see ARC Digital Raster Imagery.

aerial stereopair—two photos taken at adjacent exposure stations.

Airborne Synthetic Aperture Radar—an experimental airborne radar sensor developed by Jet Propulsion Laboratories (JPL), Pasadena, California, under a contract with NASA. AIRSAR data have been available since 1983.

Airborne Visible/Infrared Imaging Spectrometer—(AVIRIS) a sensor developed by JPL (Pasadena, California) under a contract with NASA that produces multispectral data with 224 narrow bands. These bands are 10 nm wide and cover the spectral range of 0.4-2.4 μm. AVIRIS data have been available since 1987.

AIRSAR—see Airborne Synthetic Aperture Radar.


alarm—a test of a training sample, usually used before the signature statistics are calculated. An alarm highlights an area on the display that is an approximation of the area that would be classified with a signature. The original data can then be compared to the highlighted area.

Almaz—a Russian radar satellite that completed its mission in 1992.

Along-Track Scanning Radiometer—(ATSR) instrument aboard the European Space Agency’s ERS-1 and ERS-2 satellites, which detects changes in the amount of vegetation on the Earth’s surface.

American Standard Code for Information Interchange—(ASCII) a “basis of character sets. . .to convey some control codes, space, numbers, most basic punctuation, and unaccented letters a-z and A-Z” (Free On-Line Dictionary of Computing, 1999a).

analog photogrammetry—optical or mechanical instruments used to reconstruct three-dimensional geometry from two overlapping photographs.

analytical photogrammetry—photogrammetry in which the computer replaces some optical and mechanical components, substituting mathematical computation for analog measurement and calculation.

ancillary data—the data, other than remotely sensed data, that are used to aid in the classification process.

ANN—see Artificial Neural Networks.

annotation—the explanatory material accompanying an image or map. In ERDAS IMAGINE, annotation consists of text, lines, polygons, ellipses, rectangles, legends, scale bars, grid lines, tick marks, neatlines, and symbols that denote geographical features.

annotation layer—a set of annotation elements that is drawn in a Viewer or Map Composer window and stored in a file (.ovr extension).

AOI—see area of interest.

arc—see line.

ARC system (Equal Arc-Second Raster Chart/Map)—a system that provides a rectangular coordinate and projection system at any scale for the Earth’s ellipsoid, based on the World Geodetic System 1984 (WGS 84).


ARC Digital Raster Imagery—Defense Mapping Agency (DMA) data that consist of SPOT panchromatic, SPOT multispectral, or Landsat TM satellite imagery transformed into the ARC system and accompanied by ASCII encoded support files. These data are available only to Department of Defense contractors.

ARC Digitized Raster Graphic—data from the Defense Mapping Agency (DMA) that consist of digital copies of DMA hardcopy graphics transformed into the ARC system and accompanied by ASCII encoded support files. These data are primarily used for military purposes by defense contractors.

ARC GENERATE data—vector data created with the ArcInfo UNGENERATE command.

arc/second—a unit of measure that can be applied to data in the Lat/Lon coordinate system. Each pixel represents the distance covered by one second of latitude or longitude. For example, in 3 arc/second data, each pixel represents an area three seconds latitude by three seconds longitude.

area—a measurement of a surface.

area based matching—an image matching technique that determines the correspondence between two image areas according to the similarity of their gray level values.

area of interest—(AOI) a point, line, or polygon that is selected as a training sample or as the image area to be used in an operation. AOIs can be stored in separate .aoi files.

Artificial Neural Networks—(ANN) data classifiers that may process hyperspectral images with a large number of bands.

ASCII—see American Standard Code for Information Interchange.

aspect—the orientation, or the direction that a surface faces, with respect to the directions of the compass: north, south, east, west.

aspect image—a thematic raster image that shows the prevailing direction that each pixel faces.

aspect map—a map that is color coded according to the prevailing direction of the slope at each pixel.

ATSR—see Along-Track Scanning Radiometer.

attitude control system—(ACS) system used by SeaWiFS instrument to sustain orbit, conduct lunar and solar calibration procedures, and supply attitude information within one SeaWiFS pixel (National Aeronautics and Space Administration, 1999).

attribute—the tabular information associated with a raster or vector layer.


average—the statistical mean; the sum of a set of values divided by the number of values in the set.

AVHRR—Advanced Very High Resolution Radiometer data. Small-scale imagery produced by an NOAA polar orbiting satellite. It has a spatial resolution of 1.1 × 1.1 km or 4 × 4 km.

AVIRIS—see Airborne Visible/Infrared Imaging Spectrometer.

azimuth—an angle measured clockwise from a meridian, going north to east.

azimuthal projection—a map projection that is created from projecting the surface of the Earth to the surface of a plane.

B

band—a set of data file values for a specific portion of the electromagnetic spectrum of reflected light or emitted heat (red, green, blue, near-infrared, infrared, thermal, etc.), or some other user-defined information created by combining or enhancing the original bands, or creating new bands from other sources. Sometimes called channel.

banding—see striping.

base map—a map portraying background reference information onto which other information is placed. Base maps usually show the location and extent of natural surface features and permanent human-made features.

Basic Image Interchange Format—(BIIF) the basis for the NITFS format.

batch file—a file that is created in the Batch mode of ERDAS IMAGINE. All steps are recorded for a later run. This file can be edited.

batch mode—a mode of operating ERDAS IMAGINE in which steps are recorded for later use.

bathymetric map—a map portraying the shape of a water body or reservoir using isobaths (depth contours).

Bayesian—a variation of the maximum likelihood classifier, based on the Bayes Law of probability. The Bayesian classifier allows the application of a priori weighting factors, representing the probabilities that pixels are assigned to each class.

BIIF—see Basic Image Interchange Format.


BIL—band interleaved by line. A form of data storage in which each record in the file contains a scan line (row) of data for one band. All bands of data for a given line are stored consecutively within the file.

bilinear interpolation—a resampling method that uses the data file values of four pixels in a 2 × 2 window to calculate an output data file value by computing a weighted average of the input data file values with a bilinear function.
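As an illustration of this definition (a sketch, not the ERDAS resampler), the weights come from the fractional position of the output location within the 2 × 2 window:

def bilinear(window, dx, dy):
    # Weighted average of a 2 x 2 window of data file values;
    # dx, dy are fractional offsets in [0, 1] from the upper-left pixel
    (v00, v01), (v10, v11) = window
    top = v00 * (1 - dx) + v01 * dx
    bottom = v10 * (1 - dx) + v11 * dx
    return top * (1 - dy) + bottom * dy

print(bilinear(((10, 20), (30, 40)), 0.25, 0.5))   # 22.5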

bin function—a mathematical function that establishes the relationship between data file values and rows in a descriptor table.

bins—ordered sets of pixels. Pixels are sorted into a specified number of bins. The pixels are then given new values based upon the bins to which they are assigned.

BIP—band interleaved by pixel. A form of data storage in which the values for each band are ordered within a given pixel. The pixels are arranged sequentially on the tape.

bit—a binary digit, meaning a number that can have two possible values, 0 and 1, or off and on. A set of bits, however, can have many more values, depending upon the number of bits used. The number of values that can be expressed by a set of bits is 2 to the power of the number of bits used. For example, the number of values that can be expressed by 3 bits is 8 (2^3 = 8).

block of photographs—formed by the combined exposures of a flight. The block consists of a number of parallel strips with a sidelap of 20-30%.

blocked—a method of storing data on 9-track tapes so that there are more logical records in each physical record.

blocking factor—the number of logical records in each physical record. For instance, a record may contain 28,000 bytes, but only 4,000 columns due to a blocking factor of 7.

book map—a map laid out like the pages of a book. Each page fits on the paper used by the printer. There are neatlines and tick marks on all sides of every page.

Boolean—logical, based upon, or reducible to a true or false condition.

border—on a map, a line that usually encloses the entire map, not just the image area as does a neatline.

boundary—a neighborhood analysis technique that is used to detect boundaries between thematic classes.

bpi—bits per inch. A measure of data storage density for magnetic tapes.


breakline—an elevation polyline in which each vertex has its own X, Y, Z value.

brightness value—the quantity of a primary color (red, green, blue) to be output to a pixel on the display device. Also called intensity value, function memory value, pixel value, display value, and screen value.

BSQ—band sequential. A data storage format in which each band is contained in a separate file.

buffer zone—a specific area around a feature that is isolated for or from further analysis. For example, buffer zones are often generated around streams in site assessment studies so that further analyses exclude these areas that are often unsuitable for development.

build—the process of constructing the topology of a vector layer by processing points, lines, and polygons. See clean.

bundle—the unit of photogrammetric triangulation after each point measured in an image is connected with the perspective center by a straight light ray. There is one bundle of light rays for each image.

bundle attitude—defined by a spatial rotation matrix consisting of three angles (κ, ω, ϕ).

bundle location—defined by the perspective center, expressed in units of the specified map projection.

byte—8 bits of data.

C

CAC—see Compressed Aeronautical Chart.

CAD—see computer-aided design.

cadastral map—a map showing the boundaries of the subdivisions of land for purposes of describing and recording ownership or taxation.

CADRG—see Compressed ADRG.

calibration certificate/report—in aerial photography, the manufacturer of the camera specifies the interior orientation in the form of a certificate or report.

Cartesian—a coordinate system in which data are organized on a grid and points on the grid are referenced by their X,Y coordinates.

cartography—the art and science of creating maps.

categorical data—see thematic data.


CCT—see computer compatible tape.

CD-ROM—a read-only storage device read by a CD-ROM player.

cell—1. a 1° × 1° area of coverage. DTED (Digital Terrain Elevation Data) are distributed in cells. 2. a pixel; grid cell.

cell size—the area that one pixel represents, measured in map units. For example, one cell in the image may represent an area 30’ × 30’ on the ground. Sometimes called pixel size.

center of the scene—the center pixel of the center scan line; the center of a satellite image.

central processing unit—(CPU) “the part of a computer which controls all the other parts. . .the CPU consists of the control unit, the arithmetic and logic unit (ALU) and memory (registers, cache, RAM and ROM) as well as various temporary buffers and other logic” (Free On-Line Dictionary of Computing, 1999b).

character—a number, letter, or punctuation symbol. One character usually occupies one byte when stored on a computer.

check point—additional ground points used to independently verify the degree of accuracy of a triangulation.

check point analysis—the act of using check points to independently verify the degree of accuracy of a triangulation.

chi-square distribution—a nonsymmetrical data distribution: its curve is characterized by a tail that represents the highest and least frequent data values. In classification thresholding, the tail represents the pixels that are most likely to be classified incorrectly.

choropleth map—a map portraying properties of a surface using area symbols. Area symbols usually represent categorized classes of the mapped phenomenon.

CIB—see Controlled Image Base.

city-block distance—the physical or spectral distance that is measured as the sum of distances that are perpendicular to one another.

class—a set of pixels in a GIS file that represents areas that share some condition. Classes are usually formed through classification of a continuous raster layer.

class value—a data file value of a thematic file that identifies a pixel as belonging to a particular class.

classification—the process of assigning the pixels of a continuous raster image to discrete categories.


classification accuracy table—for accuracy assessment, a list of known values of reference pixels, supported by some ground truth or other a priori knowledge of the true class, and a list of the classified values of the same pixels, from a classified file to be tested.

classification scheme—(or classification system) a set of target classes. The purpose of such a scheme is to provide a framework for organizing and categorizing the information that can be extracted from the data.

clean—the process of constructing the topology of a vector layer by processing lines and polygons. See build.

client—on a computer on a network, a program that accesses a server utility that is on another machine on the network.

clump—a contiguous group of pixels in one class. Also called raster region.

clustering—unsupervised training; the process of generating signatures based on the natural groupings of pixels in image data when they are plotted in spectral space.

clusters—the natural groupings of pixels when plotted in spectral space.

CMY—cyan, magenta, yellow. Primary colors of pigment used by printers, whereas display devices use RGB.

CNES—Centre National d’Etudes Spatiales. The corporation was founded in 1961. It provides support for ESA. CNES suggests and executes programs (Centre National D’Etudes Spatiales, 1998).

coefficient—one number in a matrix, or a constant in a polynomial expression.

coefficient of variation—a scene-derived parameter that is used as input to the Sigma and Local Statistics radar enhancement filters.

collinearity—a nonlinear mathematical model that photogrammetric triangulation is based upon. Collinearity equations describe the relationship among image coordinates, ground coordinates, and orientation parameters.

colorcell—the location where the data file values are stored in the colormap. The red, green, and blue values assigned to the colorcell control the brightness of the color guns for the displayed pixel.


color guns—on a display device, the red, green, and blue phosphors that are illuminated on the picture tube in varying brightnesses to create different colors. On a color printer, color guns are the devices that apply cyan, yellow, magenta, and sometimes black ink to paper.

colormap—an ordered set of colorcells, which is used to perform a function on a set of input values.

color printer—a printer that prints color or black and white imagery, as well as text. ERDAS IMAGINE supports several color printers.

color scheme—a set of lookup tables that assigns red, green, and blue brightness values to classes when a layer is displayed.

composite map—a map on which the combined information from different thematic maps is presented.

Compressed ADRG—(CADRG) a military data product based upon the general RPF specification.

Compressed Aeronautical Chart—(CAC) precursor to CADRG.

Compressed Raster Graphics—(CRG) precursor to CADRG.

compromise projection—a map projection that compromises among two or more of the map projection properties of conformality, equivalence, equidistance, and true direction.

computer-aided design—(CAD) computer application used for design and GPS survey.

computer compatible tape—(CCT) a magnetic tape used to transfer and store digital data.

confidence level—the percentage of pixels that are believed to be misclassified.

conformal—a map or map projection that has the property of conformality, or true shape.

conformality—the property of a map projection to represent true shape, wherein a projection preserves the shape of any small geographical area. This is accomplished by exact transformation of angles around points.

conic projection—a map projection that is created from projecting the surface of the Earth to the surface of a cone.

connectivity radius—the distance (in pixels) that pixels can be from one another to be considered contiguous. The connectivity radius is used in connectivity analysis.


contiguity analysis—a study of the ways in which pixels of a class are grouped together spatially. Groups of contiguous pixels in the same class, called raster regions, or clumps, can be identified by their sizes and manipulated.

contingency matrix—a matrix that contains the number and percentages of pixels that were classified as expected.

continuous—a term used to describe raster data layers that contain quantitative and related values. See continuous data.

continuous data—a type of raster data that are quantitative (measuring a characteristic) and have related, continuous values, such as remotely sensed images (e.g., Landsat, SPOT, etc.).

contour map—a map in which a series of lines connects points of equal elevation.

contrast stretch—the process of reassigning a range of values to another range, usually according to a linear function. Contrast stretching is often used in displaying continuous raster layers, since the range of data file values is usually much narrower than the range of brightness values on the display device.

control point—a point with known coordinates in the ground coordinate system, expressed in the units of the specified map projection.

Controlled Image Base—(CIB) a military data product based upon the general RPF specification.

convolution filtering—the process of averaging small sets of pixels across an image. Used to change the spatial frequency characteristics of an image.

convolution kernel—a matrix of numbers that is used to average the value of each pixel with the values of surrounding pixels in a particular way. The numbers in the matrix serve to weight this average toward particular pixels.

coordinate system—a method of expressing location. In two-dimensional coordinate systems, locations are expressed by a column and row, also called x and y.

correlation threshold—a value used in rectification to determine whether to accept or discard GCPs. The threshold is an absolute value threshold ranging from 0.000 to 1.000.

correlation windows—windows that consist of a local neighborhood of pixels. One example is square neighborhoods (e.g., 3 × 3, 5 × 5, 7 × 7 pixels).


corresponding GCPs—the GCPs that are located in the same geographic location as the selected GCPs, but are selected in different files.

covariance—measures the tendencies of data file values for the same pixel, but in different bands, to vary with each other in relation to the means of their respective bands. Covariance captures only the linear component of this relationship, and is defined as the average product of the differences between the data file values in each band and the mean of each band.

covariance matrix—a square matrix that contains all of the variances and covariances within the bands in a data file.

CPU— see central processing unit.

credits—on maps, the text that can include the data source and acquisition date, accuracy information, and other details that are required for or helpful to readers.

CRG—see Compressed Raster Graphics.

crisp filter—a filter used to sharpen the overall scene luminance without distorting the interband variance content of the image.

cross correlation—a calculation that computes the correlation coefficient of the gray values between the template window and the search window.

cubic convolution—a method of resampling that uses the data file values of sixteen pixels in a 4 × 4 window to calculate an output data file value with a cubic function.

current directory—also called default directory, it is the directory that you are in. It is the default path.

cylindrical projection—a map projection that is created from projecting the surface of the Earth to the surface of a cylinder.

D

dangling node—a line that does not close to form a polygon, or that extends past an intersection.

data—1. in the context of remote sensing, a computer file containing numbers that represent a remotely sensed image, and can be processed to display that image. 2. a collection of numbers, strings, or facts that requires some processing before it is meaningful.

database (one word)—a relational data structure usually used to store tabular information. Examples of popular databases include SYBASE, dBase, Oracle, INFO, etc.


data base (two words)—in ERDAS IMAGINE, a set of continuous and thematic raster layers, vector layers, attribute information, and other kinds of data that represent one area of interest. A data base is usually part of a GIS.

data file—a computer file that contains numbers that represent an image.

data file value—each number in an image file. Also called file value, image file value, DN, brightness value, pixel.

datum—see reference plane.

DCT—see Discrete Cosine Transformation.

decision rule—an equation or algorithm that is used to classify image data after signatures have been created. The decision rule is used to process the data file values based upon the signature statistics.

decorrelation stretch—a technique used to stretch the principal components of an image, not the original image.

default directory—see current directory.

Defense Mapping Agency—(DMA) agency that supplies VPF, ARC digital raster, DRG, ADRG, and DTED files.

degrees of freedom—when chi-square statistics are used in thresholding, the number of bands in the classified file.

DEM—see digital elevation model.

densify—the process of adding vertices to selected lines at a user-specified tolerance.

density—1. the number of bits per inch on a magnetic tape. 9-track tapes are commonly stored at 1600 and 6250 bpi. 2. a neighborhood analysis technique that outputs the number of pixels that have the same value as the analyzed pixel in a user-specified window.

derivative map—a map created by altering, combining, or analyzing other maps.

descriptor—see attribute.

desktop scanners—general purpose devices that lack the image detail and geometric accuracy of photogrammetric quality units, but are much less expensive.

detector—the device in a sensor system that records electromagnetic radiation.

developable surface—a flat surface, or a surface that can be easily flattened by being cut and unrolled, such as the surface of a cone or a cylinder.


DFT—see Discrete Fourier Transform.

DGPS—see Differential Correction.

Differential Correction—(DGPS) can be used to remove the majority of the effects of Selective Availability.

digital elevation model—(DEM) continuous raster layers in which data file values represent elevation. DEMs are available from the USGS at 1:24,000 and 1:250,000 scale, and can be produced with terrain analysis programs, IMAGINE InSAR, IMAGINE OrthoMAX™, and IMAGINE StereoSAR DEM.

Digital Number—(DN) the value expressing the intensity of a pixel, which varies with the composition of what the pixel represents. For example, the DN of water is different from that of land. DNs typically range from 0 to 255.

digital orthophoto—an aerial photo or satellite scene that has been transformed by the orthogonal projection, yielding a map that is free of most significant geometric distortions.

digital orthophoto quadrangle—(DOQ) a computer-generated image of an aerial photo (United States Geological Survey, 1999b).

digital photogrammetry—photogrammetry as applied to digital images that are stored and processed on a computer. Digital images can be scanned from photographs or can be directly captured by digital cameras.

Digital Line Graph—(DLG) a vector data format created by the USGS.

Digital Terrain Elevation Data—(DTED) data produced by the DMA. DTED data comes in two types, both in Arc/second format: DTED 1—a 1° × 1° area of coverage, and DTED 2—a 1° × 1° or less area of coverage.

digital terrain model—(DTM) a discrete expression of topography in a data array, consisting of a group of planimetric coordinates (X,Y) and the elevations of the ground points and breaklines.

digitized raster graphic—(DRG) a digital replica of DMA hardcopy graphic products. See also ADRG.

digitizing—any process that converts nondigital data into numeric data, usually to be stored on a computer. In ERDAS IMAGINE, digitizing refers to the creation of vector data from hardcopy materials or raster images that are traced using a digitizer keypad on a digitizing tablet, or a mouse on a display device.

DIME—see Dual Independent Map Encoding.


dimensionality—a term referring to the number of bands being classified. For example, a data file with three bands is said to be three-dimensional, since three-dimensional spectral space is plotted to analyze the data.

directory—an area of a computer disk that is designated to hold a set of files. Usually, directories are arranged in a tree structure, in which directories can also contain many levels of subdirectories.

Discrete Cosine Transformation—(DCT) an element of a commonly used form of JPEG, which is a compression technique.

Discrete Fourier Transform—(DFT) method of removing striping and other noise from radar images. See also Fast Fourier Transform.

displacement—the degree of geometric distortion for a point that is not on the nadir line.

display device—the computer hardware consisting of a memory board and a monitor. It displays a visible image from a data file or from some user operation.

display driver—the ERDAS IMAGINE utility that interfaces between the computer running ERDAS IMAGINE software and the display device.

display memory—the subset of image memory that is actually viewed on the display screen.

display pixel—one grid location on a display device or printout.

display resolution—the number of pixels that can be viewed on the display device monitor, horizontally and vertically (i.e., 512 × 512 or 1024 × 1024).

distance—see Euclidean distance, spectral distance.

distance image file—a one-band, 16-bit file that can be created in the classification process, in which each data file value represents the result of the distance equation used in the program. Distance image files generally have a chi-square distribution.

distribution—the set of frequencies with which an event occurs, or the set of probabilities that a variable has a particular value.

distribution rectangles—(DR) the geographic data sets into which ADRG data are divided.

dithering—a display technique that is used in ERDAS IMAGINE to allow a smaller set of colors appear to be a larger set of colors.


divergence—a statistical measure of distance between two or more signatures. Divergence can be calculated for any combination of bands used in the classification; bands that diminish the results of the classification can be ruled out.

diversity—a neighborhood analysis technique that outputs the number of different values within a user-specified window.

DLG—see Digital Line Graph.

DMA—see Defense Mapping Agency.

DN—see Digital Number.

DOQ—see digital orthophoto quadrangle.

dot patterns—the matrices of dots used to represent brightness values on hardcopy maps and images.

dots per inch—(DPI) when referring to the resolution of an output device, such as a printer, the number of dots that are printed per unit—for example, 300 dots per inch.

double precision—a measure of accuracy in which fifteen significant digits can be stored for a coordinate.

downsampling—the skipping of pixels during display or during processing of scanned data.

DPI—see dots per inch.

DR—see distribution rectangles.

DTED—see Digital Terrain Elevation Data.

DTM—see digital terrain model.

Dual Independent Map Encoding—(DIME) a type of ETAK feature wherein a line is created along with a corresponding ACODE (arc attribute) record. The coordinates are stored in Lat/Lon decimal degrees. Each record represents a single linear feature.

DXF—Drawing Exchange Format. A format for storing vector data in ASCII files, used by AutoCAD software.

dynamic range—see radiometric resolution.

E

Earth Observation Satellite Company—(EOSAT) a private company that directs the Landsat satellites and distributes Landsat imagery.

Earth Resources Observation Systems—(EROS) a division of the USGS National Mapping Division. EROS is involved with managing data and creating systems, as well as research (USGS, 1999a).


Earth Resources Technology Satellites—(ERTS) in 1972, NASA’s first civilian program to acquire remotely sensed digital satellite data, later renamed to Landsat.

EDC—see EROS Data Center.

edge detector—a convolution kernel, which is usually a zero-sum kernel, that smooths out or zeros out areas of low spatial frequency and creates a sharp contrast where spatial frequency is high. High spatial frequency is at the edges between homogeneous groups of pixels.

edge enhancer—a high-frequency convolution kernel that brings out the edges between homogeneous groups of pixels. Unlike an edge detector, it only highlights edges; it does not necessarily eliminate other features.

eigenvalue—the length of a principal component that measures the variance of a principal component band. See also principal components.

eigenvector—the direction of a principal component represented as coefficients in an eigenvector matrix which is computed from the eigenvalues. See also principal components.

electromagnetic—(EM) type of spectrum consisting of different regions such as thermal infrared and long-wave and short-wave reflective.

electromagnetic radiation—(EMR) the energy transmitted through space in the form of electric and magnetic waves.

electromagnetic spectrum—the range of electromagnetic radiation extending from cosmic waves to radio waves, characterized by frequency or wavelength.

element—an entity of vector data, such as a point, line, or polygon.

elevation data—see terrain data, DEM.

ellipse—a two-dimensional figure that is formed in a two-dimensional scatterplot when both bands plotted have normal distributions. The ellipse is defined by the standard deviations of the input bands. Ellipse plots are often used to test signatures before classification.

EM—see electromagnetic.

EML—see ERDAS Macro Language.

EMR—see electromagnetic radiation.

end-of-file mark—(EOF) usually a half-inch strip of blank tape that signifies the end of a file that is stored on magnetic tape.


end-of-volume mark—(EOV) usually three EOFs marking the end of a tape.

Enhanced Thematic Mapper Plus—(ETM+) the observing instrument on Landsat 7.

enhancement—the process of making an image more interpretable for a particular application. Enhancement can make important features of raw, remotely sensed data more interpretable to the human eye.

entity—an AutoCAD drawing element that can be placed in an AutoCAD drawing with a single command.

Environmental Systems Research Institute—(ESRI) company based in Redlands, California, which produces software such as ArcInfo and ArcView. ESRI has created many data formats, including GRID and GRID Stack.

EOF—see end-of-file mark.

EOSAT—see Earth Observation Satellite Company.

EOV— see end-of-volume mark.

ephemeris data—contained in the header of the data file of a SPOT scene, provides information about the recording of the data and the satellite orbit.

epipolar stereopair—a stereopair without y-parallax.

equal area—see equivalence.

equatorial aspect—a map projection that is centered around the equator or a point on the equator.

equidistance—the property of a map projection to represent true distances from an identified point.

equivalence—the property of a map projection to represent all areas in true proportion to one another.

ERDAS Macro Language—(EML) computer language that can be used to create custom dialogs in ERDAS IMAGINE, or to edit existing dialogs and functions for your specific application.

EROS—see Earth Resources Observation Systems.

EROS Data Center—(EDC) a division of USGS, located in Sioux Falls, SD, which is the primary receiving center for Landsat 7 data.

error matrix—in classification accuracy assessment, a square matrix showing the number of reference pixels that have the same values as the actual classified points.


ERS-1—the European Space Agency’s (ESA) radar satellite launched in July 1991, currently provides the most comprehensive radar data available. ERS-2 was launched in 1995.

ERTS—see Earth Resources Technology Satellites.

ESA—see European Space Agency.

ESRI—see Environmental Systems Research Institute.

ETAK MapBase—an ASCII digital street centerline map product available from ETAK, Inc. (Menlo Park, California).

ETM+—see Enhanced Thematic Mapper Plus.

Euclidean distance—the distance, either in physical or abstract (e.g., spectral) space, that is computed based on the equation of a straight line.

exposure station—during image acquisition, each point in the flight path at which the camera exposes the film.

extend—the process of moving selected dangling lines up a specified distance so that they intersect existing lines.

extension—the three letters after the period in a file name that usually identify the type of file.

extent—1. the image area to be displayed in a Viewer. 2. the area of the Earth’s surface to be mapped.

exterior orientation—the position and attitude, in the ground coordinate system, of all images of a block of aerial photographs, computed during photogrammetric triangulation using a limited number of points with known coordinates. The exterior orientation of an image consists of the exposure station and the camera attitude at this moment.

exterior orientation parameters—the perspective center’s ground coordinates in a specified map projection, and three rotation angles around the coordinate axes.

European Space Agency—(ESA) agency with two satellites, ERS-1 and ERS-2, that collect radar data. For more information, visit the ESA web site at http://www.esa.int.

extract—selected bands of a complete set of NOAA AVHRR data.

F

false color—a color scheme in which features have expected colors. For instance, vegetation is green, water is blue, etc. These are not necessarily the true colors of these features.

false easting—an offset between the x-origin of a map projection and the x-origin of a map. Typically used so that no x-coordinates are negative.


false northing—an offset between the y-origin of a map projection and the y-origin of a map. Typically used so that no y-coordinates are negative.

fast format—a type of BSQ format used by EOSAT to store Landsat TM data.

Fast Fourier Transform—(FFT) an efficient algorithm for computing the DFT, designed to remove noise and periodic features from radar images. It converts a raster image from the spatial domain into a frequency domain image.

feature based matching—an image matching technique that determines the correspondence between two image features.

feature collection—the process of identifying, delineating, and labeling various types of natural and human-made phenomena from remotely-sensed images.

feature extraction—the process of studying and locating areas and objects on the ground and deriving useful information from images.

feature space—an abstract space that is defined by spectral units (such as an amount of electromagnetic radiation).

feature space area of interest—a user-selected area of interest (AOI) that is selected from a feature space image.

feature space image—a graph of the data file values of one band of data against the values of another band (often called a scatterplot).

FFT—see Fast Fourier Transform.

fiducial center—the center of an aerial photo.

fiducials—four or eight reference markers fixed on the frame of an aerial metric camera and visible in each exposure. Fiducials are used to compute the transformation from data file to image coordinates.

field—in an attribute database, a category of information about each class or feature, such as Class name and Histogram.

field of view—(FOV) in perspective views, an angle that defines how far the view is generated to each side of the line of sight.

file coordinates—the location of a pixel within the file in x,y coordinates. The upper left file coordinate is usually 0,0.

file pixel—the data file value for one data unit in an image file.


file specification or filespec—the complete file name, including the drive and path, if necessary. If a drive or path is not specified, the file is assumed to be in the current drive and directory.

filled—referring to polygons; a filled polygon is solid or has a pattern, but is not transparent. An unfilled polygon is simply a closed vector that outlines the area of the polygon.

filtering—the removal of spatial or spectral features for data enhancement. Convolution filtering is one method of spatial filtering. Some texts may use the terms filtering and spatial filtering synonymously.

flip—the process of reversing the from-to direction of selected lines or links.

focal length—the orthogonal distance from the perspective center to the image plane.

focal operations—filters that use a moving window to calculate new values for each pixel in the image based on the values of the surrounding pixels.

focal plane—the plane of the film or scanner used in obtaining an aerial photo.

Fourier analysis—an image enhancement technique that was derived from signal processing.

FOV—see field of view.

from-node—the first vertex in a line.

full set—all bands of a NOAA AVHRR data set.

function memories—areas of the display device memory that store the lookup tables, which translate image memory values into brightness values.

function symbol—an annotation symbol that represents an activity. For example, on a map of a state park, a symbol of a tent would indicate the location of a camping area.

Fuyo 1 (JERS-1)—the Japanese radar satellite launched in February 1992.

G

GAC—see global area coverage.

GBF—see Geographic Base File.

GCP—see ground control point.


GCP matching—for image-to-image rectification, a GCP selected in one image is precisely matched to its counterpart in the other image using the spectral characteristics of the data and the transformation matrix.

GCP prediction—the process of picking a GCP in either coordinate system and automatically locating that point in the other coordinate system based on the current transformation parameters.

generalize—the process of weeding vertices from selected lines using a specified tolerance.

geocentric coordinate system—a coordinate system that has its origin at the center of the Earth ellipsoid. The ZG-axis equals the rotational axis of the Earth, and the XG-axis passes through the Greenwich meridian. The YG-axis is perpendicular to both the ZG-axis and the XG-axis, so as to create a three-dimensional coordinate system that follows the right-hand rule.

geocoded data—an image(s) that has been rectified to a particular map projection and cell size and has had radiometric corrections applied.

Geographic Base File—(GBF) along with DIME, sometimes provides the cartographic base for TIGER/Line files, which cover the US, Puerto Rico, Guam, the Virgin Islands, American Samoa, and the Trust Territories of the Pacific.

geographic information system—(GIS) a unique system designed for a particular application that stores, enhances, combines, and analyzes layers of geographic data to produce interpretable information. A GIS may include computer images, hardcopy maps, statistical data, and any other data needed for a study, as well as computer software and human knowledge. GISs are used for solving complex geographic planning and management problems.

geographical coordinates—a coordinate system for explaining the surface of the Earth. Geographical coordinates are defined by latitude and by longitude (Lat/Lon), with respect to an origin located at the intersection of the equator and the prime (Greenwich) meridian.

geometric correction—the correction of errors of skew, rotation, and perspective in raw, remotely sensed data.

georeferencing—the process of assigning map coordinates to image data and resampling the pixels of the image to conform to the map projection grid.

GeoTIFF— TIFF files that are geocoded.


gigabyte—(GB) about one billion bytes.

GIS—see geographic information system.

GIS file—a single-band ERDAS Ver. 7.X data file in which pixels are divided into discrete categories.

global area coverage—(GAC) a type of NOAA AVHRR data with a spatial resolution of 4 × 4 km.

global operations—functions that calculate a single value for an entire area, rather than for each pixel like focal functions.

GLObal NAvigation Satellite System—(GLONASS) a satellite-based navigation system produced by the Russian Space Forces. It provides three-dimensional locations, velocity, and time measurements for both civilian and military applications. GLONASS started its mission in 1993 (Magellan Corporation, 1999).

Global Ozone Monitoring Experiment—(GOME) instrument aboard ESA’s ERS-2 satellite, which studies atmospheric chemistry (European Space Agency, 1995).

Global Positioning System—(GPS) system used for the collection of GCPs, which uses orbiting satellites to pinpoint precise locations on the Earth’s surface.

GLONASS—see GLObal NAvigation Satellite System.

.gmd file—the ERDAS IMAGINE graphical model file created with Model Maker (Spatial Modeler).

gnomonic—an azimuthal projection obtained from a perspective at the center of the Earth.

GOME—see Global Ozone Monitoring Experiment.

GPS—see Global Positioning System.

graphical modeling—a technique used to combine data layers in an unlimited number of ways using icons to represent input data, functions, and output data. For example, an output layer created from modeling can represent the desired combination of themes from many input layers.

graphical model—a model created with Model Maker (Spatial Modeler). Graphical models are put together like flow charts and are stored in .gmd files.

Graphical User Interface—(GUI) the dialogs and menus of ERDAS IMAGINE that enable you to execute commands to analyze your imagery.


graticule—the network of parallels of latitude and meridians of longitude applied to the global surface and projected onto maps.

gray scale—a color scheme with a gradation of gray tones ranging from black to white.

great circle—an arc of a circle for which the center is the center of the Earth. A great circle is the shortest possible surface route between two points on the Earth.

GRID—a compressed tiled raster data structure that is stored as a set of files in a directory, including files to keep the attributes of the GRID.

grid cell—a pixel.

grid lines—intersecting lines that indicate regular intervals of distance based on a coordinate system. Sometimes called a graticule.

GRID Stack—multiple GRIDs to be treated as a multilayer image.

ground control point—(GCP) specific pixel in image data for which the output map coordinates (or other output coordinates) are known. GCPs are used for computing a transformation matrix, for use in rectifying an image.

ground coordinate system—a three-dimensional coordinate system which utilizes a known map projection. Ground coordinates (X,Y,Z) are usually expressed in feet or meters.

ground truth—data that are taken from the actual area being studied.

ground truthing—the acquisition of knowledge about the study area from field work, analysis of aerial photography, personal experience, etc. Ground truth data are considered to be the most accurate (true) data available about the area of study.

GUI—see Graphical User Interface.

H

halftoning—the process of using dots of varying size or arrangements (rather than varying intensity) to form varying degrees of a color.

hardcopy output—any output of digital computer (softcopy) data to paper.

HARN—see High Accuracy Reference Network.

header file—a file usually found before the actual image data on tapes or CD-ROMs that contains information about the data, such as number of bands, upper left coordinates, map projection, etc.


header record—the first part of an image file that contains general information about the data in the file, such as the number of columns and rows, number of bands, database coordinates of the upper left corner, and the pixel depth. The contents of header records vary depending on the type of data.

HFA—see Hierarchal File Architecture System.

Hierarchal File Architecture System—(HFA) a format that allows different types of information about a file to be stored in a tree-structured fashion. The tree is made of nodes that contain information such as ephemeris data.

High Accuracy Reference Network—(HARN) a reference network based on the GRS 1980 spheroid that can be used to perform State Plane calculations.

high-frequency kernel—a convolution kernel that increases the spatial frequency of an image. Also called high-pass kernel.
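
As a minimal NumPy sketch of the idea (the 3 × 3 kernel and the brute-force loop below are illustrative, not the ERDAS IMAGINE implementation):

    import numpy as np

    def convolve3x3(image, kernel):
        """Apply a 3 x 3 convolution kernel to a 2-D band (edges left unfiltered)."""
        out = image.astype(float)
        for i in range(1, image.shape[0] - 1):
            for j in range(1, image.shape[1] - 1):
                out[i, j] = np.sum(image[i-1:i+2, j-1:j+2] * kernel)
        return out

    # A common high-frequency kernel: the center weight outweighs its neighbors,
    # so areas of rapid change (edges) are amplified.
    high_pass = np.array([[-1., -1., -1.],
                          [-1.,  9., -1.],
                          [-1., -1., -1.]])
    sharpened = convolve3x3(np.random.randint(0, 256, (8, 8)), high_pass)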

High Resolution Picture Transmission—(HRPT) the direct transmission of AVHRR data in real-time with the same resolution as LAC.

High Resolution Visible Infrared—(HR VIR) a pushbroom scanner on the SPOT 4 satellite, which captures information in the visible and near-infrared bands (SPOT Image, 1999).

High Resolution Visible sensor—(HRV) a pushbroom scanner on a SPOT satellite that takes a sequence of line images while the satellite circles the Earth.

histogram—a graph of data distribution, or a chart of the number of pixels that have each possible data file value. For a single band of data, the horizontal axis of a histogram graph is the range of all possible data file values. The vertical axis is a measure of pixels that have each data value.

histogram equalization—the process of redistributing pixel values so that there are approximately the same number of pixels with each value within a range. The result is a nearly flat histogram.
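
For example, a minimal NumPy sketch for an 8-bit band (a simplified illustration, not the ERDAS IMAGINE implementation):

    import numpy as np

    def equalize(band):
        """Redistribute 8-bit values so the output histogram is nearly flat."""
        hist = np.bincount(band.ravel(), minlength=256)
        cdf = np.cumsum(hist) / band.size            # cumulative distribution, 0..1
        lut = np.round(cdf * 255).astype(np.uint8)   # lookup table: old value -> new
        return lut[band]

    flat = equalize(np.random.randint(0, 256, (64, 64)).astype(np.uint8))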

histogram matching—the process of determining a lookup table that converts the histogram of one band of an image or one color gun to resemble another histogram.

horizontal control—the horizontal distribution of GCPs in aerial triangulation (x,y - planimetry).

host workstation—a CPU, keyboard, mouse, and a display.

HRPT—see High Resolution Picture Transmission.

HRV—see High Resolution Visible sensor.

HR VIR—see High Resolution Visible Infrared.

hue—a component of IHS (intensity, hue, saturation) that is representative of the color or dominant wavelength of the pixel. It varies from 0 to 360. Blue = 0 (and 360), magenta = 60, red = 120, yellow = 180, green = 240, and cyan = 300.

hyperspectral sensors—the imaging sensors that record multiple bands of data, such as the AVIRIS with 224 bands.

I

IARR—see Internal Average Relative Reflectance.

IFFT—see Inverse Fast Fourier Transform.

IFOV—see instantaneous field of view.

IGES—see Initial Graphics Exchange Standard files.

IHS—intensity, hue, saturation. An alternate color space from RGB (red, green, blue). This system is advantageous in that it presents colors more nearly as perceived by the human eye. See intensity, hue, and saturation.

image—a picture or representation of an object or scene on paper, or a display screen. Remotely sensed images are digital representations of the Earth.

image algebra—any type of algebraic function that is applied to the data file values in one or more bands.

image center—the center of the aerial photo or satellite scene.

image coordinate system—the coordinate system in which the location of each point in the image is expressed for purposes of photogrammetric triangulation.

image data—digital representations of the Earth that can be used in computer image processing and GIS analyses.

image file—a file containing raster image data. Image files in ERDAS IMAGINE have the extension .img. Image files from the ERDAS Ver. 7.X series software have the extension .LAN or .GIS.

image matching—the automatic acquisition of corresponding image points on the overlapping area of two images.

image memory—the portion of the display device memory that stores data file values (which may be transformed or processed by the software that accesses the display device).

image pair—see stereopair.

image processing—the manipulation of digital image data, including (but not limited to) enhancement, classification, and rectification operations.

image pyramid—a data structure consisting of the same image represented several times, at a decreasing spatial resolution each time. Each level of the pyramid contains the image at a particular resolution.
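
A minimal sketch of building such a pyramid by repeated 2 × 2 averaging (one simple reduction scheme; actual resampling methods vary):

    import numpy as np

    def build_pyramid(image, levels):
        """Return a list of images, each at half the resolution of the previous."""
        pyramid = [image.astype(float)]
        for _ in range(levels):
            img = pyramid[-1]
            h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2   # crop to even size
            img = img[:h, :w]
            pyramid.append((img[0::2, 0::2] + img[0::2, 1::2] +
                            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0)
        return pyramid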

image scale—(SI) expresses the average ratio between a distance in the image and the same distance on the ground. It is computed as focal length divided by the flying height above the mean ground elevation.

image space coordinate system—identical to image coordinates, except that it adds a third axis (z) that is used to describe positions inside the camera. The units are usually in millimeters or microns.

IMC—see International Map Committee.

.img file—(also, image file) an ERDAS IMAGINE file that stores continuous or thematic raster layers.

IMW—see International Map of the World.

inclination—the angle between a vertical on the ground at the center of the scene and a light ray from the exposure station, which defines the degree of off-nadir viewing when the scene was recorded.

indexing—a function applied to thematic layers that adds the data file values of two or more layers together, creating a new output layer. Weighting factors can be applied to one or more layers to add more importance to those layers in the final sum.
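
For example, a weighted sum of two hypothetical thematic layers in NumPy (layer names and weights are illustrative):

    import numpy as np

    slope = np.array([[1, 2], [3, 1]])   # hypothetical class values
    soils = np.array([[2, 2], [1, 3]])

    # Weight slope twice as heavily as soils in the output index.
    index_layer = 2 * slope + 1 * soils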

index map—a reference map that outlines the mapped area, identifies all of the component maps for the area if several map sheets are required, and identifies all adjacent map sheets.

Indian Remote Sensing Satellite—(IRS) satellites operated by Space Imaging, including IRS-1A, IRS-1B, IRS-1C, and IRS-1D.

indices—the process used to create output images by mathematically combining the DN values of different bands.

information—something that is independently meaningful, as opposed to data, which are not independently meaningful.

Initial Graphics Exchange Standard files—(IGES) files often used to transfer CAD data between systems. IGES Version 3.0 format, published by the U.S. Department of Commerce, is in uncompressed ASCII format only.

initialization—a process that ensures all values in a file or in computer memory are equal until additional information is added or processed to overwrite these values. Usually the initialization value is 0. If initialization is not performed on a data file, there could be random data values in the file.

inset map—a map that is an enlargement of some congested area of a smaller scale map, and that is usually placed on the same sheet with the smaller scale main map.

instantaneous field of view—(IFOV) a measure of the area viewed by a single detector on a scanning system in a given instant in time.

intensity—a component of IHS (intensity, hue, saturation), which is the overall brightness of the scene and varies from 0 (black) to 1 (white).

interferometry—method of subtracting the phase of one SAR image from another to derive height information.

interior orientation—defines the geometry of the sensor that captured a particular image.

Internal Average Relative Reflectance—(IARR) a technique designed to compensate for atmospheric contamination of the spectra.

International Map Committee—(IMC) located in London, the committee responsible for creating the International Map of the World series.

International Map of the World—(IMW) a series of maps produced by the International Map Committee. Maps are in 1:1,000,000 scale.

intersection—the area or set that is common to two or more input areas or sets.

interval data—a type of data in which thematic class values have a natural sequence, and in which the distances between values are meaningful.

Inverse Fast Fourier Transform—(IFFT) used after the Fast Fourier Transform to transform a Fourier image back into the spatial domain. See also Fast Fourier Transform.

IR—infrared portion of the electromagnetic spectrum. See also electromagnetic spectrum.

IRS—see Indian Remote Sensing Satellite.

isarithmic map—a map that uses isarithms (lines connecting points of the same value for any of the characteristics used in the representation of surfaces) to represent a statistical surface. Also called an isometric map.

ISODATA clustering—see Iterative Self-Organizing Data Analysis Technique.

island—a single line that connects with itself.

isopleth map—a map on which isopleths (lines representing quantities that cannot exist at a point, such as population density) are used to represent some selected quantity.

iterative—a term used to describe a process in which some operation is performed repeatedly.

Iterative Self-Organizing Data Analysis Technique—(ISODATA clustering) a method of clustering that uses spectral distance as in the sequential method, but iteratively classifies the pixels, redefines the criteria for each class, and classifies again, so that the spectral distance patterns in the data gradually emerge.

J

JERS-1 (Fuyo 1)—the Japanese radar satellite launched in February 1992.

Jet Propulsion Laboratories—(JPL) “the lead U.S. center for robotic exploration of the solar system.” JPL is managed for NASA by the California Institute of Technology. For more information, see the JPL web site at http://www.jpl.nasa.gov (National Aeronautics and Space Administration, 1999).

JFIF—see JPEG File Interchange Format.

join—the process of interactively entering the side lot lines when the front and rear lines have already been established.

Joint Photographic Experts Group—(JPEG) 1. the group responsible for creating a set of image compression techniques. 2. the compression techniques themselves, which are also called JPEG.

JPEG—see Joint Photographic Experts Group.

JPEG File Interchange Format—(JFIF) standard file format used to store JPEG-compressed imagery.

JPL—see Jet Propulsion Laboratories.

K

Kappa coefficient—a number that expresses the proportionate reduction in error generated by a classification process compared with the error of a completely random classification.
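
A minimal sketch of the computation from an error (confusion) matrix, assuming rows are reference classes and columns are classified classes:

    import numpy as np

    def kappa(confusion):
        """Kappa = (observed accuracy - chance accuracy) / (1 - chance accuracy)."""
        n = confusion.sum()
        observed = np.trace(confusion) / n
        chance = np.sum(confusion.sum(axis=0) * confusion.sum(axis=1)) / n**2
        return (observed - chance) / (1 - chance)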

kernel—see convolution kernel.

L

label—in annotation, the text that conveys important information to the reader about map features.

label point—a point within a polygon that defines that polygon.

LAC—see local area coverage.

.LAN files—multiband ERDAS Ver. 7.X image files (the name originally derived from the Landsat satellite). LAN files usually contain raw or enhanced remotely sensed data.

land cover map—a map of the visible ground features of a scene, such as vegetation, bare land, pasture, urban areas, etc.

Landsat—a series of Earth-orbiting satellites that gather MSS and TM imagery, operated by EOSAT.

large-scale—a description used to represent a map or data file having a large ratio between the area on the map (such as inches or pixels), and the area that is represented (such as feet). In large-scale image data, each pixel represents a small area on the ground, such as SPOT data, with a spatial resolution of 10 or 20 meters.

Lat/Lon—Latitude/Longitude, a map coordinate system.

layer—1. a band or channel of data. 2. a single band or set of three bands displayed using the red, green, and blue color guns of the ERDAS IMAGINE Viewer. A layer could be a remotely sensed image, an aerial photograph, an annotation layer, a vector layer, an area of interest layer, etc. 3. a component of a GIS data base that contains all of the data for one theme. A layer consists of a thematic image file, and may also include attributes.

least squares correlation—uses the least squares estimation to derive parameters that best fit a search window to a reference window.

least squares regression—the method used to calculate the transformation matrix from the GCPs. This method is discussed in statistics textbooks.

legend—the reference that lists the colors, symbols, line patterns, shadings, and other annotation that is used on a map, and their meanings. The legend often includes the map’s title, scale, origin, and other information.

lettering—the manner in which place names and other labels are added to a map, including letter spacing, orientation, and position.

level 1A (SPOT)—an image that corresponds to raw sensor data to which only radiometric corrections have been applied.

level 1B (SPOT)—an image that has been corrected for the Earth’s rotation and to make all pixels 10 × 10 meters on the ground. Pixels are resampled from the level 1A sensor data by cubic polynomials.

level slice—the process of applying a color scheme by equally dividing the input values (image memory values) into a certain number of bins, and applying the same color to all pixels in each bin. Usually, a ROYGBIV (red, orange, yellow, green, blue, indigo, violet) color scheme is used.

line—1. a vector data element consisting of a line (the set of pixels directly between two points), or an unclosed set of lines. 2. a row of pixels in a data file.

line dropout—a data error that occurs when a detector in a satellite either completely fails to function or becomes temporarily overloaded during a scan. The result is a line, or partial line of data with incorrect data file values creating a horizontal streak until the detector(s) recovers, if it recovers.

linear—a description of a function that can be graphed as a straight line or a series of lines. Linear equations (transformations) can generally be expressed in the form of the equation of a line or plane. Also called 1st-order.

linear contrast stretch—an enhancement technique that outputs new values at regular intervals.
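
For example, a minimal min-max stretch to 8-bit output (one common form of linear stretch; a constant-valued band would require a guard against division by zero):

    import numpy as np

    def linear_stretch(band):
        """Map [band.min(), band.max()] linearly onto [0, 255]."""
        lo, hi = float(band.min()), float(band.max())
        return np.round((band - lo) / (hi - lo) * 255).astype(np.uint8)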

linear transformation—a 1st-order rectification. A linear transformation can change location in X and/or Y, scale in X and/or Y, skew in X and/or Y, and rotation.

line of sight—in perspective views, the point(s) and direction from which the viewer is looking into the image.

local area coverage—(LAC) a type of NOAA AVHRR data with a spatial resolution of 1.1 × 1.1 km.

logical record—a series of bytes that form a unit on a 9-track tape. For example, all the data for one line of an image may form a logical record. One or more logical records make up a physical record on a tape.

long wave infrared region—(LWIR) the thermal or far-infrared region of the electromagnetic spectrum.

lookup table—(LUT) an ordered set of numbers that is used to perform a function on a set of input values. To display or print an image, lookup tables translate data file values into brightness values.

lossy—“a term describing a data compression algorithm which actually reduces the amount of information in the data, rather than just the number of bits used to represent that information” (Free On-Line Dictionary of Computing, 1999c).

low-frequency kernel—a convolution kernel that decreases spatial frequency. Also called low-pass kernel.

LUT—see lookup table.

LWIR—see long wave infrared region.

M

Machine Independent Format—(MIF) a format designed to store data in a way that it can be read by a number of different machines.

magnify—the process of displaying one file pixel over a block of display pixels. For example, if the magnification factor is 3, then each file pixel takes up a block of 3 × 3 display pixels. Magnification differs from zooming in that the magnified image is loaded directly to image memory.

magnitude—an element of an electromagnetic wave. Magnitude of a wave decreases exponentially as the distance from the transmitter increases.

Mahalanobis distance—a classification decision rule that is similar to the minimum distance decision rule, except that a covariance matrix is used in the equation.
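
A minimal sketch of the distance itself, assuming a class signature given by a mean vector and covariance matrix:

    import numpy as np

    def mahalanobis_sq(pixel, mean, cov):
        """Squared Mahalanobis distance of one measurement vector from a signature."""
        d = pixel - mean
        return d @ np.linalg.inv(cov) @ d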

majority—a neighborhood analysis technique that outputs the most common value of the data file values in a user-specified window.

MAP—see Maximum A Posteriori.

map—a graphic representation of spatial relationships on the Earth or other planets.

map coordinates—a system of expressing locations on the Earth’s surface using a particular map projection, such as UTM, State Plane, or Polyconic.

map frame—an annotation element that indicates where an image is placed in a map composition.

map projection—a method of representing the three-dimensional spherical surface of a planet on a two-dimensional map surface. All map projections involve the transfer of latitude and longitude onto an easily flattened surface.

matrix—a set of numbers arranged in a rectangular array. If a matrix has i rows and j columns, it is said to be an i × j matrix.

matrix analysis—a method of combining two thematic layers in which the output layer contains a separate class for every combination of two input classes.

matrix object—in Model Maker (Spatial Modeler), a set of numbers in a two-dimensional array.

maximum—a neighborhood analysis technique that outputs the greatest value of the data file values in a user-specified window.

Maximum A Posteriori—(MAP) a filter (Gamma-MAP) that is designed to estimate the original DN value of a pixel, which it assumes is between the local average and the degraded DN.

maximum likelihood—a classification decision rule based on the probability that a pixel belongs to a particular class. The basic equation assumes that these probabilities are equal for all classes, and that the input bands have normal distributions.

.mdl file—an ERDAS IMAGINE script model created with the Spatial Modeler Language.

mean—1. the statistical average; the sum of a set of values divided by the number of values in the set. 2. a neighborhood analysis technique that outputs the mean value of the data file values in a user-specified window.

mean vector—an ordered set of means for a set of variables (bands). For a data file, the mean vector is the set of means for all bands in the file.

measurement vector—the set of data file values for one pixel in all bands of a data file.

median—1. the central value in a set of data such that an equal number of values are greater than and less than the median. 2. a neighborhood analysis technique that outputs the median value of the data file values in a user-specified window.

megabyte—(Mb) about one million bytes.

memory resident—a term referring to the occupation of a part of a computer’s RAM (random access memory), so that a program is available for use without being loaded into memory from disk.

mensuration—the measurement of linear or areal distance.

meridian—a line of longitude, going north and south. See geographical coordinates.

MIF—see Machine Independent Format.

minimum—a neighborhood analysis technique that outputs the least value of the data file values in a user-specified window.

minimum distance—a classification decision rule that calculates the spectral distance between the measurement vector for each candidate pixel and the mean vector for each signature. Also called spectral distance.
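
A minimal sketch of the rule, assuming pixels is an (n, bands) array of measurement vectors and means is a (classes, bands) array of signature mean vectors:

    import numpy as np

    def minimum_distance(pixels, means):
        """Assign each pixel to the class whose mean vector is spectrally closest."""
        # Euclidean (spectral) distance from every pixel to every class mean.
        dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        return np.argmin(dists, axis=1)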

minority—a neighborhood analysis technique that outputs the least common value of the data file values in a user-specified window.

mode—the most commonly-occurring value in a set of data. In a histogram, the mode is the peak of the curve.

model—in a GIS, the set of expressions, or steps, that defines your criteria and creates an output layer.

modeling—the process of creating new layers from combining or operating upon existing layers. Modeling allows the creation of new classes from existing classes and the creation of a small set of images—perhaps even a single image—which, at a glance, contains many types of information about a scene.

modified projection—a map projection that is a modified version of another projection. For example, the Space Oblique Mercator projection is a modification of the Mercator projection.

monochrome image—an image produced from one band or layer, or contained in one color gun of the display device.

morphometric map—a map representing morphological features of the Earth’s surface.

mosaicking—the process of piecing together images side by side, to create a larger image.

MrSID—see Multiresolution Seamless Image Database.

MSS—see multispectral scanner.

Multiresolution Seamless Image Database—(MrSID) a wavelet transform-based compression algorithm designed by LizardTech, Inc.

multispectral classification—the process of sorting pixels into a finite number of individual classes, or categories of data, based on data file values in multiple bands. See also classification.

multispectral imagery—satellite imagery with data recorded in two or more bands.

multispectral scanner—(MSS) Landsat satellite data acquired in four bands with a spatial resolution of 57 × 79 meters.

multitemporal—data from two or more different dates.

N

NAD27—see North America Datum 1927.

NAD83—see North America Datum 1983.

nadir—the area on the ground directly beneath a scanner’s detectors.

nadir line—the average of the left and right edge lines of a Landsat image.

nadir point—the center of the nadir line in vertically viewed imagery.

NASA—see National Aeronautics and Space Administration.

National Aeronautics and Space Administration—(NASA) an organization that studies outer space. For more information, visit the NASA web site at http://www.nasa.gov.

National Geospatial-Intelligence Agency—(NGA) formerly NIMA. In 2003, NIMA became the National Geospatial-Intelligence Agency (NGA). NGA is a U.S. Department of Defense combat support agency and a member of the national Intelligence Community (IC), developing imagery and map-based geospatial intelligence (GEOINT) solutions. For more information, visit the NGA website at www.nga.mil.

National Imagery and Mapping Agency—(NIMA) formerly DMA. The agency was formed in October of 1996. NIMA supplies current imagery and geospatial data.

National Imagery Transmission Format Standard—(NITFS) a format designed to package imagery with complete annotation, text attachments, and imagery-associated metadata (Jordan and Beck, 1999).

National Ocean Service—(NOS) the organization that created a zone numbering system for the State Plane coordinate system. A division of NOAA. For more information, visit the NOS web site at http://www.nos.noaa.gov.

National Oceanic and Atmospheric Administration—(NOAA) an organization that studies weather, water bodies, and encourages conservation. For more information, visit the NOAA web site at http://www.noaa.gov.

Navigation System with Time and Ranging—(NAVSTAR) satellite launched in 1978 for collection of GPS data.

NAVSTAR—see Navigation System with Time and Ranging.

NDVI—see Normalized Difference Vegetation Index.

nearest neighbor—a resampling method in which the output data file value is equal to the input pixel that has coordinates closest to the retransformed coordinates of the output pixel.
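
A minimal sketch for the pure scaling case (full rectification retransforms each output pixel through the transformation matrix; this sketch only changes the grid size):

    import numpy as np

    def nearest_neighbor_scale(image, out_rows, out_cols):
        """Resample by picking the input pixel nearest each output location."""
        r = np.round(np.arange(out_rows) * image.shape[0] / out_rows).astype(int)
        c = np.round(np.arange(out_cols) * image.shape[1] / out_cols).astype(int)
        r = np.clip(r, 0, image.shape[0] - 1)
        c = np.clip(c, 0, image.shape[1] - 1)
        return image[np.ix_(r, c)]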

neatline—a rectangular border printed around a map. On scaled maps, neatlines usually have tick marks that indicate intervals of map coordinates or distance.

negative inclination—the sensors are tilted in increments of 0.6° to a maximum of 27° to the east.

neighborhood analysis—any image processing technique that takes surrounding pixels into consideration, such as convolution filtering and scanning.

NIMA—see National Imagery and Mapping Agency.

9-track—CCTs that hold digital data.

NITFS—see National Imagery Transmission Format Standard.

NOAA—see National Oceanic and Atmospheric Administration.

node—the end points of a line. See from-node and to-node.

nominal data—a type of data in which classes have no inherent order, and therefore are qualitative.

nonlinear—describing a function that cannot be expressed as the graph of a line or in the form of the equation of a line or plane. Nonlinear equations usually contain expressions with exponents. Second-order (2nd-order) or higher-order equations and transformations are nonlinear.

nonlinear transformation—a 2nd-order or higher rectification.

nonparametric signature—a signature for classification that is based on polygons or rectangles that are defined in the feature space image for the image file. There is no statistical basis for a nonparametric signature; it is simply an area in a feature space image.

normal—the state of having a normal distribution.

normal distribution—a symmetrical data distribution that can be expressed in terms of the mean and standard deviation of the data. The normal distribution is the most widely encountered model for probability, and is characterized by the bell curve. Also called Gaussian distribution.

normalize—a process that makes an image appear as if it were a flat surface. This technique is used to reduce topographic effect.

Normalized Difference Vegetation Index—(NDVI) the formula for NDVI is (IR - R) / (IR + R), where IR stands for the infrared portion of the electromagnetic spectrum, and R stands for the red portion of the electromagnetic spectrum. NDVI finds areas of vegetation in imagery.
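
Computed per pixel, for example (band arrays are illustrative; float math avoids integer truncation, and a guard against IR + R = 0 is omitted for brevity):

    import numpy as np

    def ndvi(ir, red):
        """Per-pixel (IR - R) / (IR + R); values near +1 indicate dense vegetation."""
        ir, red = ir.astype(float), red.astype(float)
        return (ir - red) / (ir + red)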

North America Datum 1927—(NAD27) a datum created in 1927 that is based on the Clarke 1866 spheroid. Commonly used in conjunction with the State Plane coordinate system.

North America Datum 1983—(NAD83) a datum created in 1983 that is based on the GRS 1980 spheroid. Commonly used in conjunction with the State Plane coordinate system.

NOS—see National Ocean Service.

NPO Mashinostroenia—a company based in Russia that develops satellites, such as Almaz 1-B, for GIS application.

number maps—maps that output actual data file values or brightness values, allowing the analysis of the values of every pixel in a file or on the display screen.

numeric keypad—the set of numeric and/or mathematical operator keys (+, -, etc.) that is usually on the right side of the keyboard.

Nyquist—the sampling criterion stating that a continuous function can be reconstructed from sampled data when it is sampled at more than twice its highest frequency; in image registration, this allows the phase function to be reconstructed to much higher resolution.

O

object—in models, an input to or output from a function. See matrix object, raster object, scalar object, table object.

oblique aspect—a map projection that is not oriented around a pole or the Equator.

observation—in photogrammetric triangulation, a grouping of the image coordinates for a GCP.

off-nadir—any point that is not directly beneath a scanner’s detectors, but off to an angle. The SPOT scanner allows off-nadir viewing.

1:24,000—1:24,000 scale data, also called 7.5-minute DEM, available from USGS. It is usually referenced to the UTM coordinate system and has a spatial resolution of 30 × 30 meters.

1:250,000—1:250,000 scale DEM data available from USGS. Available only in arc/second format.

opacity—a measure of how opaque, or solid, a color is displayed in a raster layer.

operating system—(OS) the most basic means of communicating with the computer. It manages the storage of information in files and directories, input from devices such as the keyboard and mouse, and output to devices such as the monitor.

orbit—a circular, north-south and south-north path that a satellite travels above the Earth.

order—the complexity of a function, polynomial expression, or curve. In a polynomial expression, the order is simply the highest exponent used in the polynomial. See also linear, nonlinear.

ordinal data—a type of data that includes discrete lists of classes with an inherent order, such as classes of streams—first order, second order, third order, etc.

orientation angle—the angle between a perpendicular to the center scan line and the North direction in a satellite scene.

orthographic—an azimuthal projection with an infinite perspective.

orthocorrection—see orthorectification.

orthoimage—see digital orthophoto.

orthomap—an image map product produced from orthoimages, or orthoimage mosaics, that is similar to a standard map in that it usually includes additional information, such as map coordinate grids, scale bars, north arrows, and other marginalia.

orthorectification—a form of rectification that corrects for terrain displacement and can be used if a DEM of the study area is available.

OS—see operating system.

outline map—a map showing the limits of a specific set of mapping entities such as counties. Outline maps usually contain a very small number of details over the desired boundaries with their descriptive codes.

overlay—1. a function that creates a composite file containing either the minimum or the maximum class values of the input files. Overlay sometimes refers generically to a combination of layers. 2. the process of displaying a classified file over the original image to inspect the classification.

overlay file—an ERDAS IMAGINE annotation file (.ovr extension).

.ovr file—an ERDAS IMAGINE annotation file.

P

pack—to store data in a way that conserves tape or disk space.

panchromatic imagery—single-band or monochrome satellite imagery.

paneled map—a map designed to be spliced together into a large paper map. Therefore, neatlines and tick marks appear on the outer edges of the large map.

pairwise mode—an operation mode in rectification that allows the registration of one image to an image in another Viewer, a map on a digitizing tablet, or coordinates entered at the keyboard.

parallax—displacement of a GCP appearing in a stereopair as a function of the position of the sensors at the time of image capture. You can adjust parallax in both the X and the Y direction so that the image point in both images appears in the same image space.

parallel—a line of latitude, going east and west.

parallelepiped—1. a classification decision rule in which the data file values of the candidate pixel are compared to upper and lower limits. 2. the limits of a parallelepiped classification, especially when graphed as rectangles.

parameter—1. any variable that determines the outcome of a function or operation. 2. the mean and standard deviation of data, which are sufficient to describe a normal curve.

parametric signature—a signature that is based on statistical parameters (e.g., mean and covariance matrix) of the pixels that are in the training sample or cluster.

passive sensors—solar imaging sensors that can only receive radiation waves and cannot transmit radiation.

path—the drive, directories, and subdirectories that specify the location of a file.

pattern recognition—the science and art of finding meaningful patterns in data, which can be extracted through classification.

PC—see principal components.

PCA—see principal components analysis.

perspective center—1. a point in the image coordinate system defined by the x and y coordinates of the principal point and the focal length of the sensor. 2. after triangulation, a point in the ground coordinate system that defines the sensor’s position relative to the ground.

perspective projection—the projection of points by straight lines from a given perspective point to an intersection with the plane of projection.

phase—an element of an electromagnetic wave.

phase flattening—in IMAGINE InSAR, the removal from the phase function recorded in the interferogram of the phase function that would result if the imaging area were flat.

phase unwrapping—in IMAGINE InSAR, the process of taking a wrapped phase function and reconstructing the continuous function from it.

photogrammetric quality scanners—special devices capable of high image quality and excellent positional accuracy. Use of this type of scanner results in geometric accuracies similar to traditional analog and analytical photogrammetric instruments.

photogrammetry—the "art, science and technology of obtaining reliable information about physical objects and the environment through the process of recording, measuring and interpreting photographic images and patterns of electromagnetic radiant imagery and other phenomena" (American Society of Photogrammetry, 1980).

physical record—a consecutive series of bytes on a 9-track tape followed by a gap, or blank space, on the tape.

piecewise linear contrast stretch—a spectral enhancement technique used to enhance a specific portion of data by dividing the lookup table into three sections: low, middle, and high.

pixel—abbreviated from picture element; the smallest part of a picture (image).

pixel coordinate system—a coordinate system with its origin in the upper left corner of the image, the x-axis pointing to the right, the y-axis pointing downward, and the units in pixels.

pixel depth—the number of bits required to store all of the data file values in a file. For example, data with a pixel depth of 8, or 8-bit data, have 256 values (2^8 = 256), ranging from 0 to 255.

pixel size—the physical dimension of a single light-sensitive element (for example, 13 × 13 microns).

planar coordinates—coordinates that are defined by a column and row position on a grid (x,y).

planar projection—see azimuthal projection.

plane table photogrammetry—prior to the invention of the airplane, photographs taken on the ground were used to extract the geometric relationships between objects using the principles of Descriptive Geometry.

planimetric map—a map that correctly represents horizontal distances between objects.

plan symbol—an annotation symbol that is formed after the basic outline of the object it represents. For example, the symbol for a house might be a square, since most houses are rectangular.

point—1. an element consisting of a single (x,y) coordinate pair. Also called grid cell. 2. a vertex of an element. Also called a node.

point ID—in rectification, a name given to GCPs in separate files that represent the same geographic location.

point mode—a digitizing mode in which one vertex is generated each time a keypad button is pressed.

polar aspect—a map projection that is centered around a pole.

polarization—the direction of the electric field component with the understanding that the magnetic field is perpendicular to it.

polygon—a set of closed line segments defining an area.

polynomial—a mathematical expression consisting of variables and coefficients. A coefficient is a constant that is multiplied by a variable in the expression.

positive inclination—the sensors are tilted in increments of 0.6° to a maximum of 27° to the west.

primary colors—colors from which all other available colors are derived. On a display monitor, the primary colors red, green, and blue are combined to produce all other colors. On a color printer, cyan, yellow, and magenta inks are combined.

principal components—(PC) the transects of a scatterplot of two or more bands of data that represent the widest variance and successively smaller amounts of variance that are not already represented. Principal components are orthogonal (perpendicular) to one another. In principal components analysis, the data are transformed so that the principal components become the axes of the scatterplot of the output data.

principal components analysis—(PCA) the process of calculating principal components and outputting principal component bands; a method of data compression that allows redundant data to be compacted into fewer bands, reducing the dimensionality of the data (Jensen, 1996; Faust, 1989).
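
A minimal sketch of the computation, treating each band as a variable (a bare eigenvector transform for illustration, not the ERDAS IMAGINE implementation):

    import numpy as np

    def principal_components(bands):
        """bands: (n_pixels, n_bands) array. Returns PC bands, highest variance first."""
        centered = bands - bands.mean(axis=0)
        cov = np.cov(centered, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
        order = np.argsort(eigvals)[::-1]         # reorder to descending variance
        return centered @ eigvecs[:, order]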

principal component band—a band of data that is output by principal components analysis. Principal component bands are uncorrelated and nonredundant, since each principal component describes different variance within the original data.

principal point (Xp, Yp)—the point in the image plane onto which the perspective center is projected, located directly beneath the perspective center.

printer—a device that prints text, full color imagery, and/or graphics. See color printer, text printer.

profile—a row of data file values from a DEM or DTED file. The profiles of DEM and DTED run south to north (i.e., the first pixel of the record is the southernmost pixel).

profile symbol—an annotation symbol that is formed like the profile of an object. Profile symbols generally represent vertical objects such as trees, windmills, oil wells, etc.

proximity analysis—a technique used to determine which pixels of a thematic layer are located at specified distances from pixels in a class or classes. A new layer is created that is classified by the distance of each pixel from specified classes of the input layer.

pseudo color—a method of displaying an image (usually a thematic layer) that allows the classes to have distinct colors. The class values of the single band file are translated through all three function memories that store a color scheme for the image.

pseudo node—a single line that connects with itself (an island), or where only two lines intersect.

pseudo projection—a map projection that has only some of the characteristics of another projection.

pushbroom—a scanner in which all scanning parts are fixed, and scanning is accomplished by the forward motion of the scanner, such as the SPOT scanner.

pyramid layers—image layers that are successively reduced by powers of 2 and resampled. Pyramid layers enable large images to display faster.

Q

quadrangle—1. any of the hardcopy maps distributed by USGS such as the 7.5-minute quadrangle or the 15-minute quadrangle. 2. one quarter of a full Landsat TM scene. Commonly called a quad.

qualitative map—a map that shows the spatial distribution or location of a kind of nominal data. For example, a map showing corn fields in the US would be a qualitative map. It would not show how much corn is produced in each location, or production relative to other areas.

quantitative map—a map that displays the spatial aspects of numerical data. A map showing corn production (volume) in each area would be a quantitative map.

R

radar data—the remotely sensed data that are produced when a radar transmitter emits a beam of micro or millimeter waves, the waves reflect from the surfaces they strike, and the backscattered radiation is detected by the radar system’s receiving antenna, which is tuned to the frequency of the transmitted waves.

RADARSAT—a Canadian radar satellite.

radiative transfer equations—the mathematical models that attempt to quantify the total atmospheric effect of solar illumination.

radiometric correction—the correction of variations in data that are not caused by the object or scene being scanned, such as scanner malfunction and atmospheric interference.

radiometric enhancement—an enhancement technique that deals with the individual values of pixels in an image.

radiometric resolution—the dynamic range, or number of possible data file values, in each band. This is referred to by the number of bits into which the recorded energy is divided. See pixel depth.

RAM—see random-access memory.

random-access memory—(RAM) memory used for applications and data storage on a CPU (Free On-Line Dictionary of Computing, 1999d).

rank—a neighborhood analysis technique that outputs the number of values in a user-specified window that are less than the analyzed value.

RAR—see Real-Aperture Radar.

raster data—data that are organized in a grid of columns and rows. Raster data usually represent a planar graph or geographical area. Raster data in ERDAS IMAGINE are stored in image files.

raster object—in Model Maker (Spatial Modeler), a single raster layer or set of layers.

Raster Product Format—(RPF) data from NIMA, used primarily for military purposes, organized in 1536 × 1536 frames with an internal tile size of 256 × 256 pixels.

raster region—a contiguous group of pixels in one GIS class. Also called clump.

ratio data—a data type in which thematic class values have the same properties as interval values, except that ratio values have a natural zero or starting point.

RDBMS—see relational database management system.

RDGPS—see Real Time Differential GPS.

Real-Aperture Radar—(RAR) a radar sensor that uses its side-looking, fixed antenna to transmit and receive the radar impulse. For a given position in space, the resolution of the resultant image is a function of the antenna size. The signal is processed independently of subsequent return signals.

Real Time Differential GPS—(RDGPS) takes the Differential Correction technique one step further by having the base station communicate the error vector via radio to the field unit in real time.

recoding—the assignment of new values to one or more classes.
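
For example, merging classes with a lookup table in NumPy (class numbers are illustrative):

    import numpy as np

    thematic = np.array([[0, 1, 2], [2, 1, 3]])
    recode = np.array([0, 10, 10, 20])   # old class value -> new class value
    new_layer = recode[thematic]         # classes 1 and 2 merge into class 10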

record—1. the set of all attribute data for one class of feature. 2. the basic storage unit on a 9-track tape.

rectification—the process of making image data conform to a map projection system. In many cases, the image must also be oriented so that the north direction corresponds to the top of the image.

rectified coordinates—the coordinates of a pixel in a file that has been rectified, which are extrapolated from the GCPs. Ideally, the rectified coordinates for the GCPs are exactly equal to the reference coordinates. Because there is often some error tolerated in the rectification, this is not always the case.

reduce—the process of skipping file pixels when displaying an image so that a larger area can be represented on the display screen. For example, a reduction factor of 3 would cause only the pixel at every third row and column to be displayed, so that each displayed pixel represents a 3 × 3 block of file pixels.

reference coordinates—the coordinates of the map or reference image to which a source (input) image is being registered. GCPs consist of both input coordinates and reference coordinates for each point.

reference pixels—in classification accuracy assessment, pixels for which the correct GIS class is known from ground truth or other data. The reference pixels can be selected by you, or randomly selected.

reference plane—In a topocentric coordinate system, the tangential plane at the center of the image on the Earth ellipsoid, on which the three perpendicular coordinate axes are defined.

reference system—the map coordinate system to which an image is registered.

reference window—the source window on the first image of an image pair, which remains at a constant location. See also correlation windows and search windows.

reflection spectra—the electromagnetic radiation wavelengths that are reflected by specific materials of interest.

registration—the process of making image data conform to another image. A map coordinate system is not necessarily involved.

regular block of photos—a rectangular block in which the number of photos in each strip is the same; this includes a single strip or a single stereopair.

relational database management system—(RDBMS) system that stores SDE database layers.

relation based matching—an image matching technique that uses the image features and the relation among the features to automatically recognize the corresponding image structures without any a priori information.

relief map—a map that appears to be or is three-dimensional.

remote sensing—the measurement or acquisition of data about an object or scene by a satellite or other instrument above or far from the object. Aerial photography, satellite imagery, and radar are all forms of remote sensing.

replicative symbol—an annotation symbol that is designed to look like its real-world counterpart. These symbols are often used to represent trees, railroads, houses, etc.

representative fraction—the ratio or fraction used to denote map scale.

resampling—the process of interpolating data file values for the pixels in a new grid when data have been rectified or registered to another image.

rescaling—the process of compressing data from one format to another. In ERDAS IMAGINE, this typically means compressing a 16-bit file to an 8-bit file.

reshape—the process of redigitizing a portion of a line.

residuals—in rectification, the distances between the source and retransformed coordinates in one direction. In ERDAS IMAGINE, they are shown for each GCP. The X residual is the distance between the source X coordinate and the retransformed X coordinate. The Y residual is the distance between the source Y coordinate and the retransformed Y coordinate.

resolution—a level of precision in data. For specific types of resolution see display resolution, radiometric resolution, spatial resolution, spectral resolution, and temporal resolution.

resolution merging—the process of sharpening a lower-resolution multiband image by merging it with a higher-resolution monochrome image.

retransformed—in the rectification process, a coordinate in the reference (output) coordinate system that has been transformed back into the input coordinate system. The amount of error in the transformation can be determined by computing the difference between the original coordinates and the retransformed coordinates. See RMS error.

RGB—red, green, blue. The primary additive colors that are used on most display hardware to display imagery.

RGB clustering—a clustering method for 24-bit data (three 8-bit bands) that plots pixels in three-dimensional spectral space, and divides that space into sections that are used to define clusters. The output color scheme of an RGB-clustered image resembles that of the input file.

rhumb line—a line of true direction that crosses meridians at a constant angle.

right-hand rule—a convention in three-dimensional coordinate systems (X,Y,Z) that determines the location of the positive Z axis. If you place your right hand fingers on the positive X axis and curl your fingers toward the positive Y axis, the direction your thumb is pointing is the positive Z axis direction.

RMS error—the distance between the input (source) location of a GCP and the retransformed location for the same GCP. RMS error is calculated with a distance equation.

RMSE—(Root Mean Square Error) used to measure how well a specific calculated solution fits the original data. For each observation of a phenomenon, a variation can be computed between the actual observation and a calculated value. (The method of obtaining a calculated value is application-specific.) Each variation is then squared. The sum of these squared values is divided by the number of observations, and then the square root is taken. This is the RMSE value.
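
In code, for example:

    import numpy as np

    def rmse(observed, calculated):
        """Root mean square error between observed and calculated values."""
        return np.sqrt(np.mean((observed - calculated) ** 2))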

roam—the process of moving across a display so that different areas of the image appear on the display screen.

root—the first part of a file name, which usually identifies the file’s specific contents.

ROYGBIV—a color scheme ranging through red, orange, yellow, green, blue, indigo, and violet at regular intervals.

RPF—see Raster Product Format.

rubber sheeting—the application of a nonlinear rectification (2nd-order or higher).

S

sample—see training sample.

SAR—see Synthetic Aperture Radar.

saturation—a component of IHS which represents the purity of color and also varies linearly from 0 to 1.

SCA—see suitability/capability analysis.

scale—1. the ratio of distance on a map as related to the true distance on the ground. 2. cell size. 3. the processing of values through a lookup table.

scale bar—a graphic annotation element that describes map scale. It shows the distance on paper that represents a geographical distance on the map.

scalar object—in Model Maker (Spatial Modeler), a single numeric value.

scaled map—a georeferenced map that is accurately arranged and referenced to represent distances and locations. A scaled map usually has a legend that includes a scale, such as 1 inch = 1000 feet. The scale is often expressed as a ratio like 1:12,000 where 1 inch on the map equals 12,000 inches on the ground.

scanner—the entire data acquisition system, such as the Landsat TM scanner or the SPOT panchromatic scanner.

scanning—1. the transfer of analog data, such as photographs, maps, or another viewable image, into a digital (raster) format. 2. a process similar to convolution filtering that uses a kernel for specialized neighborhood analyses, such as total, average, minimum, maximum, boundary, and majority.

scatterplot—a graph, usually in two dimensions, in which the data file values of one band are plotted against the data file values of another band.

scene—the image captured by a satellite.

screen coordinates—the location of a pixel on the display screen, beginning with 0,0 in the upper left corner.

screen digitizing—the process of drawing vector graphics on the display screen with a mouse. A displayed image can be used as a reference.

script modeling—the technique of combining data layers in an unlimited number of ways. Script modeling offers all of the capabilities of graphical modeling with the ability to perform more complex functions, such as conditional looping.

script model—a model that is comprised of text only and is created with the SML. Script models are stored in .mdl files.

SCS—see Soil Conservation Service.

SD—see standard deviation.

SDE—see Spatial Database Engine.

SDTS—see spatial data transfer standard.

SDTS Raster Profile and Extensions—(SRPE) an SDTS profile that covers gridded raster data.

search radius—in surfacing routines, the distance around each pixel within which the software searches for terrain data points.

search windows—candidate windows on the second image of an image pair that are evaluated relative to the reference window.

seat—a combination of an X-server and a host workstation.

Sea-viewing Wide Field-of-View Sensor—(SeaWiFS) a sensor located on many different satellites such as ORBVIEW’s OrbView-2, and NASA’s SeaStar.

SeaWiFS—see Sea-viewing Wide Field-of-View Sensor.

secant—cutting a curve or surface at two points. In the case of conic or cylindrical map projections, a secant cone or cylinder intersects the surface of a globe at two circles.

Selective Availability—introduces a positional inaccuracy of up to 100 m to commercial GPS receivers.

sensor—a device that gathers energy, converts it to a digital value, and presents it in a form suitable for obtaining information about the environment.

separability—a statistical measure of distance between two signatures.

separability listing—a report of signature divergence which lists the computed divergence for every class pair and one band combination. The listing contains every divergence value for the bands studied for every possible pair of signatures.

sequential clustering—a method of clustering that analyzes pixels of an image line by line and groups them by spectral distance. Clusters are determined based on relative spectral distance and the number of pixels per cluster.

server—on a computer in a network, a utility that makes some resource or service available to the other machines on the network (such as access to a tape drive).

shaded relief image—a thematic raster image that shows variations in elevation based on a user-specified position of the sun. Areas that would be in sunlight are highlighted and areas that would be in shadow are shaded.

shaded relief map—a map of variations in elevation based on a user-specified position of the sun. Areas that would be in sunlight are highlighted and areas that would be in shadow are shaded.

shapefile—an ESRI vector format that contains spatial data. Shapefiles have the .shp extension.

short wave infrared region—(SWIR) the near-infrared and middle-infrared regions of the electromagnetic spectrum.

SI—see image scale.

Side-looking Airborne Radar—(SLAR) a radar sensor that uses an antenna which is fixed below an aircraft and pointed to the side to transmit and receive the radar signal.

signal based matching—see area based matching.

Signal-to-Noise ratio—(S/N) in hyperspectral image processing, a ratio used to evaluate the usefulness or validity of a particular band of data.

signature—a set of statistics that defines a training sample or cluster. The signature is used in a classification process. Each signature corresponds to a GIS class that is created from the signatures with a classification decision rule.

skew—a condition in satellite data, caused by the rotation of the Earth eastward, which causes the position of the satellite relative to the Earth to move westward. Therefore, each line of data represents terrain that is slightly west of the data in the previous line.

SLAR—see Side-looking Airborne Radar.

slope—the change in elevation over a certain distance. Slope can be reported as a percentage or in degrees.

slope image—a thematic raster image that shows changes in elevation over distance. Slope images are usually color coded to show the steepness of the terrain at each pixel.

slope map—a map that is color coded to show changes in elevation over distance.

small-scale—for a map or data file, having a small ratio between the area of the imagery (such as inches or pixels) and the area that is represented (such as feet). In small-scale image data, each pixel represents a large area on the ground, such as NOAA AVHRR data, with a spatial resolution of 1.1 km.

SML—see Spatial Modeler Language.

S/N—see Signal-to-Noise ratio.

softcopy photogrammetry—see digital photogrammetry.

Soil Conservation Service—(SCS) an organization that produces soil maps (Fisher, 1991) with guidelines provided by the USDA.

SOM—see Space Oblique Mercator.

source coordinates—in the rectification process, the input coordinates.

Spaceborne Imaging Radar—(SIR-A, SIR-B, and SIR-C) radar sensors that flew on-board NASA space shuttles. SIR-A flew on-board the 1981 NASA Space Shuttle Columbia. SIR-B was flown on-board the shuttle Challenger in 1984. The SIR-C sensor flew twice in 1994, first on-board shuttle mission STS-59 and later on-board shuttle mission STS-68. SIR-C was the first polarimetric spaceborne SAR and first X-band sensor. The data from the Space Shuttle missions are valuable sources of radar data. (National Aeronautics and Space Administration, 2006)

Space Oblique Mercator—(SOM) a projection available in ERDAS IMAGINE that is nearly conformal and has little scale distortion within the sensing range of an orbiting mapping satellite such as Landsat.

spatial data transfer standard—(SDTS) “a robust way of transferring Earth-referenced spatial data between dissimilar computer systems with the potential for no information loss” (United States Geological Survey, 1999c).

Spatial Database Engine—(SDE) An ESRI vector format that manages a database theme. SDE allows you to access databases that may contain large amounts of information (Environmental Systems Research Institute, 1996).

spatial enhancement—the process of modifying the values of pixels in an image relative to the pixels that surround them.

spatial frequency—the difference between the highest and lowest values of a contiguous set of pixels.

Spatial Modeler Language—(SML) a script language used internally by Model Maker (Spatial Modeler) to execute the operations specified in the graphical models you create. SML can also be used to write application-specific models.

spatial resolution—a measure of the smallest object that can be resolved by the sensor, or the area on the ground represented by each pixel.

speckle noise—the light and dark pixel noise that appears in radar data.

spectral distance—the distance in spectral space computed as Euclidean distance in n-dimensions, where n is the number of bands.

spectral enhancement—the process of modifying the pixels of an image based on the original values of each pixel, independent of the values of surrounding pixels.

spectral resolution—the specific wavelength intervals in the electromagnetic spectrum that a sensor can record.

spectral space—an abstract space that is defined by spectral units (such as an amount of electromagnetic radiation). The notion of spectral space is used to describe enhancement and classification techniques that compute the spectral distance between n-dimensional vectors, where n is the number of bands in the data.

spectroscopy—the study of the absorption and reflection of electromagnetic radiation (EMR) waves.

spliced map—a map that is printed on separate pages, but intended to be joined together into one large map. Neatlines and tick marks appear only on the pages which make up the outer edges of the whole map.

spline—the process of smoothing or generalizing all currently selected lines using a specified grain tolerance during vector editing.

split—the process of making two lines from one by adding a node.

SPOT—a series of Earth-orbiting satellites operated by the Centre National d’Etudes Spatiales (CNES) of France.

SRPE—see SDTS Raster Profile and Extensions.

STA—see statistics file.

standard deviation—(SD) 1. the square root of the variance of a set of values which is used as a measurement of the spread of the values. 2. a neighborhood analysis technique that outputs the standard deviation of the data file values of a user-specified window.

standard meridian—see standard parallel.

standard parallel—the line of latitude where the surface of a globe conceptually intersects with the surface of the projection cylinder or cone.

statement—in script models, properly formatted lines that perform a specific task in a model. Statements fall into the following categories: declaration, assignment, show, view, set, macro definition, and quit.

statistical clustering—a clustering method that tests 3 × 3 sets of pixels for homogeneity, and builds clusters only from the statistics of the homogeneous sets of pixels.

statistics file—(STA) an ERDAS IMAGINE Ver. 7.X trailer file for LAN data that contains statistics about the data.

stereographic—1. the process of projecting onto a tangent plane from the opposite side of the Earth. 2. the process of acquiring images at angles on either side of the vertical.

stereopair—a set of two remotely-sensed images that overlap, providing two views of the terrain in the overlap area.

stereo-scene—achieved when two images of the same area are acquired on different days from different orbits, one taken east of nadir and the other taken west of nadir.

stream mode—a digitizing mode in which vertices are generated continuously while the digitizer keypad is in proximity to the surface of the digitizing tablet.

string—a line of text. A string usually has a fixed length (number of characters).

strip of photographs—consists of images captured along a flight-line, normally with an overlap of 60% for stereo coverage. All photos in the strip are assumed to be taken at approximately the same flying height and with a constant distance between exposure stations. Camera tilt relative to the vertical is assumed to be minimal.

striping—a data error that occurs if a detector on a scanning system goes out of adjustment—that is, it provides readings consistently greater than or less than the other detectors for the same band over the same ground cover. Also called banding.

structure based matching—see relation based matching.

subsetting—the process of breaking out a portion of a large image file into one or more smaller files.

suitability/capability analysis—(SCA) a system designed to analyze many data layers to produce a plan map. Discussed in McHarg’s book Design with Nature (Star and Estes, 1990).

sum—a neighborhood analysis technique that outputs the total of the data file values in a user-specified window.

Sun raster data—imagery captured from a Sun monitor display.

sun-synchronous—a term used to describe Earth-orbiting satellites whose orbit maintains a constant orientation relative to the sun, so that the satellite crosses a given latitude at approximately the same local solar time on each pass.

supervised training—any method of generating signatures for classification, in which the analyst is directly involved in the pattern recognition process. Usually, supervised training requires the analyst to select training samples from the data that represent patterns to be classified.

surface—a one-band file in which the value of each pixel is a specific elevation value.

swath width—in a satellite system, the total width of the area on the ground covered by the scanner.

SWIR—see short wave infrared region.

symbol—an annotation element that consists of other elements (sub-elements). See plan symbol, profile symbol, and function symbol.

symbolization—a method of displaying vector data in which attribute information is used to determine how features are rendered. For example, points indicating cities and towns can appear differently based on the population field stored in the attribute database for each of those areas.

Synthetic Aperture Radar—(SAR) a radar sensor that uses its side-looking, fixed antenna to create a synthetic aperture. SAR sensors are mounted on satellites, aircraft, and the NASA Space Shuttle. The sensor transmits and receives as it is moving. The signals received over a time interval are combined to create the image.

T

table object—in Model Maker (Spatial Modeler), a series of numeric values or character strings.

tablet digitizing—the process of using a digitizing tablet to transfer nondigital data such as maps or photographs to vector format.

Tagged Image File Format—see TIFF data.

tangent—an intersection at one point or line. In the case of conic or cylindrical map projections, a tangent cone or cylinder intersects the surface of a globe in a circle.

Tasseled Cap transformation—an image enhancement technique that optimizes data viewing for vegetation studies.

TEM—see transverse electromagnetic wave.

temporal resolution—the frequency with which a sensor obtains imagery of a particular area.

terrain analysis—the processing and graphic simulation of elevation data.

terrain data—elevation data expressed as a series of x, y, and z values that are either regularly or irregularly spaced.

text printer—a device used to print characters onto paper, usually used for lists, documents, and reports. If a color printer is not necessary or is unavailable, images can be printed using a text printer. Also called a line printer.

thematic data—raster data that are qualitative and categorical. Thematic layers often contain classes of related information, such as land cover, soil type, slope, etc. In ERDAS IMAGINE, thematic data are stored in image files.

thematic layer—see thematic data.

thematic map—a map illustrating the class characterizations of a particular spatial variable such as soils, land cover, hydrology, etc.

Thematic Mapper—(TM) Landsat data acquired in seven bands with a spatial resolution of 30 × 30 meters.

thematic mapper simulator—(TMS) an instrument “designed to simulate spectral, spatial, and radiometric characteristics of the Thematic Mapper sensor on the Landsat-4 and 5 spacecraft” (National Aeronautics and Space Administration, 1995b).

theme—a particular type of information, such as soil type or land use, that is represented in a layer.

3D perspective view—a simulated three-dimensional view of terrain.

threshold—a limit, or cutoff point, usually a maximum allowable amount of error in an analysis. In classification, thresholding is the process of identifying a maximum distance between a pixel and the mean of the signature to which it was classified.

tick marks—small lines along the edge of the image area or neatline that indicate regular intervals of distance.

tie point—a point whose ground coordinates are not known, but which can be recognized visually in the overlap or sidelap area between two images.

TIFF data—Tagged Image File Format data is a raster file format developed by Aldus Corp. (Seattle, Washington) in 1986 for the easy transportation of data.

TIGER—see Topologically Integrated Geographic Encoding and Referencing System.

tiled data—the storage format of ERDAS IMAGINE image files.

TIN—see triangulated irregular network.

TM—see Thematic Mapper.

TMS—see thematic mapper simulator.

TNDVI—see Transformed Normalized Difference Vegetation Index.

to-node—the last vertex in a line.

topocentric coordinate system—a coordinate system that has its origin at the center of the image on the Earth ellipsoid. The three perpendicular coordinate axes are defined on a tangential plane at this center point. The x-axis is oriented eastward, the y-axis northward, and the z-axis is vertical to the reference plane (up).

topographic—a term indicating elevation.

topographic data—a type of raster data in which pixel values represent elevation.

topographic effect—a distortion found in imagery from mountainous regions that results from the differences in illumination due to the angle of the sun and the angle of the terrain.

topographic map—a map depicting terrain relief.

Topologically Integrated Geographic Encoding and Referencing System—(TIGER) files are line network products of the US Census Bureau.

Topological Vector Profile—(TVP) a profile of SDTS that covers attributed vector data.

topology—a term that defines the spatial relationships between features in a vector layer.

total RMS error—the total root mean square (RMS) error for an entire image. Total RMS error takes into account the RMS error of each GCP.
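
As a sketch of the usual computation over n GCPs (assuming the standard root-mean-square formulation, with XR_i and YR_i the X and Y residuals of GCP i):

T = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( XR_i^2 + YR_i^2 \right)}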

trailer file—1. an ERDAS IMAGINE Ver. 7.X file with a .TRL extension that accompanies a GIS file and contains information about the GIS classes. 2. a file following the image data on a 9-track tape.

training—the process of defining the criteria by which patterns in image data are recognized for the purpose of classification.

training field—the geographical area represented by the pixels in a training sample. Usually, it is previously identified with the use of ground truth data or aerial photography. Also called training site.

training sample—a set of pixels selected to represent a potential class. Also called sample.

transformation matrix—a set of coefficients that is computed from GCPs, and used in polynomial equations to convert coordinates from one system to another. The size of the matrix depends upon the order of the transformation.
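
For example, a first-order (affine) transformation is defined by six coefficients (the general polynomial form; the coefficient names are illustrative):

x_o = a_0 + a_1 x + a_2 y
y_o = b_0 + b_1 x + b_2 y

where (x, y) are source coordinates and (x_o, y_o) are the transformed coordinates.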

Transformed Normalized Difference Vegetation Index—(TNDVI) adds 0.5 to the NDVI equation, then takes the square root. Created by Deering et al. in 1975 (Jensen, 1996).
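
Written out from the definition above, with NDVI in its standard form:

TNDVI = \sqrt{\frac{NIR - Red}{NIR + Red} + 0.5}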

transposition—the interchanging of the rows and columns of a matrix, denoted with a superscript T.

transverse aspect—the orientation of a map in which the central line of the projection, which is normally the equator, is rotated 90 degrees so that it follows a meridian.

transverse electromagnetic wave—(TEM) a wave where both E (electric field) and H (magnetic field) are transverse to the direction of propagation.

triangulated irregular network—(TIN) a specific representation of DTMs in which elevation points can occur at irregular intervals.

triangulation—the process of establishing the geometry of the camera or sensor relative to objects on the Earth’s surface.

true color—a method of displaying an image (usually from a continuous raster layer) that retains the relationships between data file values and represents multiple bands with separate color guns. The image memory values from each displayed band are translated through the function memory of the corresponding color gun.

true direction—the property of a map projection to represent the direction between two points with a straight rhumb line, which crosses meridians at a constant angle.

TVP—see Topological Vector Profile.

U

union—the area or set that is the combination of two or more input areas or sets without repetition.

United States Department of Agriculture—(USDA) an organization regulating the agriculture of the US. For more information, visit the web site www.usda.gov.

United States Geological Survey—(USGS) an organization dealing with biology, geology, mapping, and water. For more information, visit the web site www.usgs.gov.

Universal Polar Stereographic—(UPS) a mapping system used in conjunction with the Polar Stereographic projection that makes the scale factor at the pole 0.994.

Universal Transverse Mercator—(UTM) UTM is an international plane (rectangular) coordinate system developed by the US Army that extends around the world from 84°N to 80°S. The world is divided into 60 zones each covering six degrees longitude. Each zone extends three degrees eastward and three degrees westward from its central meridian. Zones are numbered consecutively west to east from the 180° meridian.
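
From the zone layout described above, the zone number for a longitude \lambda in degrees (west negative) can be computed directly; a worked example for Atlanta, Georgia (\lambda \approx -84.4):

zone = \lfloor (\lambda + 180) / 6 \rfloor + 1 = \lfloor 95.6 / 6 \rfloor + 1 = 16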

unscaled map—a hardcopy map that is not referenced to any particular scale in which one file pixel is equal to one printed pixel.

unsplit—the process of joining two lines by removing a node.

unsupervised training—a computer-automated method of pattern recognition in which some parameters are specified by the user and are used to uncover statistical patterns that are inherent in the data.

UPS—see Universal Polar Stereographic.

USDA—see United States Department of Agriculture.

USGS—see United States Geological Survey.

UTM—see Universal Transverse Mercator.

V

variable—1. a numeric value that is changeable, usually represented with a letter. 2. a thematic layer. 3. one band of a multiband image. 4. in models, objects which have been associated with a name using a declaration statement.

variable rate technology—(VRT) in precision agriculture, used with GPS data. VRT relies on a VRT controller box connected to a GPS receiver and to the pumping mechanism for a tank of fertilizer, pesticide, seed, or water.

variance—a measure of the dispersion of a set of values about their mean; the square of the standard deviation.
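
For a set of n values x_i with mean \mu (the population form; the sample form divides by n - 1):

\sigma^2 = \frac{1}{n} \sum_{i=1}^{n} (x_i - \mu)^2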

vector—1. a line element. 2. a one-dimensional matrix, having either one row (1 by j), or one column (i by 1). See also mean vector, measurement vector.

vector data—data that represent physical forms (elements) such as points, lines, and polygons. Only the vertices of vector data are stored, instead of every point that makes up the element. ERDAS IMAGINE vector data are based on the ArcInfo data model and are stored in directories, rather than individual files. See workspace.

vector layer—a set of vector features and their associated attributes.

Vector Quantization—(VQ) used to compress frames of RPF data.

velocity vector—the satellite’s velocity expressed as a vector through a point on the spheroid.

verbal statement—a statement that relates a distance on the map to a distance on the ground. A verbal statement describing a scale of 1:1,000,000 is approximately 1 inch to 16 miles. The units on the map and on the ground do not have to be the same in a verbal statement.
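
The approximation can be checked directly: at 1:1,000,000, one map inch represents 1,000,000 ground inches, and

1{,}000{,}000 \text{ in} \div 63{,}360 \text{ in/mi} \approx 15.8 \text{ mi}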

vertex—a point that defines an element, such as a point where a line changes direction.

vertical control—the vertical distribution of GCPs in aerial triangulation (z - elevation).

vertices—plural of vertex.

viewshed analysis—the calculation of all areas that can be seen from a particular viewing point or path.

viewshed map—a map showing only those areas visible (or invisible) from one or more specified points.

VIS/IR—see visible/infrared imagery.

visible/infrared imagery—(VIS/IR) a type of multispectral data set that is based on the reflectance spectrum of the material of interest.

volume—a medium for data storage, such as a magnetic disk or a tape.

volume set—the complete set of tapes that contains one image.

VPF—see vector product format.

VQ—see Vector Quantization.

VRT—see variable rate technology.

W

wavelet—“a waveform that is bounded in both frequency and duration” (Free On-Line Dictionary of Computing, 1999e).

weight—the number of values in a set; particularly, in clustering algorithms, the weight of a cluster is the number of pixels that have been averaged into it.

weighting factor—a parameter that increases the importance of an input variable. For example, in GIS indexing, one input layer can be assigned a weighting factor that multiplies the class values in that layer by that factor, causing that layer to have more importance in the output file.
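
As a sketch of the indexing arithmetic described above (the layer names and weights are illustrative, not taken from any particular model):

output = \sum_i (w_i \times class_i)

For example, output = 2 \times slope + 1 \times soils + 1 \times landcover doubles the influence of the slope layer relative to the others.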

weighting function—in surfacing routines, a function applied to elevation values for determining new output values.

WGS—see World Geodetic System.

Wide Field Sensor—(WiFS) sensor aboard IRS-1C with 188m spatial resolution.

WiFS—see Wide Field Sensor.

working window—the image area to be used in a model. This can be set to either the union or intersection of the input layers.

workspace—a location that contains one or more vector layers. A workspace is made up of several directories.

World Geodetic System—(WGS) an Earth ellipsoid (spheroid) with multiple versions, including WGS 66, WGS 72, and WGS 84.

write ring—a protection device that allows data to be written to a 9-track tape when the ring is in place, but not when it is removed.

X

X residual—in RMS error reports, the distance between the source X coordinate and the retransformed X coordinate.

X RMS error—the root mean square (RMS) error in the X direction.

Y

Y residual—in RMS error reports, the distance between the source Y coordinate and the retransformed Y coordinate.

Y RMS error—the root mean square (RMS) error in the Y direction.

Z

ZDR—see zone distribution rectangles.

zero-sum kernel—a convolution kernel in which the sum of all the coefficients is zero. Zero-sum kernels are usually edge detectors.
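
One familiar instance (one of many possible zero-sum edge detectors) is the 3 × 3 Laplacian-style kernel, whose nine coefficients sum to zero:

\begin{bmatrix} -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{bmatrix}

Convolving an image with this kernel returns zero over uniform areas and nonzero values at edges.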

zone distribution rectangles—(ZDRs) the images into which each distribution rectangle (DR) is divided in ADRG data.

zoom—the process of expanding displayed pixels on an image so they can be more closely studied. Zooming is similar to magnification, except that it changes the display only temporarily, leaving image memory the same.

Bibliography

Works Cited

Ackermann, 1983

Ackermann, F., 1983. High precision digital image correlation. Paper presented at 39th Photogrammetric Week, Institute of Photogrammetry, University of Stuttgart, 231-243.

Adams et al, 1989

Adams, J.B., M. O. Smith, and A. R. Gillespie. 1989. Simple Models for Complex Natural Surfaces: A Strategy for the Hyperspectral Era of Remote Sensing. Paper presented at Institute of Electrical and Electronics Engineers, Inc. (IEEE) International Geosciences and Remote Sensing (IGARSS) 12th Canadian Symposium on Remote Sensing, Vancouver, British Columbia, Canada, July 1989, I:16-21.

Agouris and Schenk, 1996

Agouris, P., and T. Schenk. 1996. Automated Aerotriangulation Using Multiple Image Multipoint Matching. Photogrammetric Engineering and Remote Sensing 62 (6): 703-710.

Akima, 1978

Akima, H. 1978. A Method of Bivariate Interpolation and Smooth Surface Fitting for Irregularly Distributed Data Points. Association for Computing Machinery (ACM) Transactions on Mathematical Software 4 (2): 148-159.

American Society of Photogrammetry, 1980

American Society of Photogrammetry (ASP). 1980. Photogrammetric Engineering and Remote Sensing XLVI:10:1249.

Atkinson, 1985

Atkinson, P. 1985. Preliminary Results of the Effect of Resampling on Thematic Mapper Imagery. 1985 ACSM-ASPRS Fall Convention Technical Papers. Falls Church, Virginia: American Society for Photogrammetry and Remote Sensing and American Congress on Surveying and Mapping.

Atlantis Scientific, Inc., 1997

Atlantis Scientific, Inc. 1997. Sources of SAR Data. Retrieved October 2, 1999, from http://www.atlsci.com/library/sar_sources.html

Bauer and Müller, 1972

Bauer, H., and J. Müller. 1972. Height accuracy of blocks and bundle block adjustment with additional parameters. International Society for Photogrammetry and Remote Sensing (ISPRS) 12th Congress, Ottawa.

Benediktsson et al, 1990

Benediktsson, J. A., P. H. Swain, O. K. Ersoy, and D. Hong. 1990. Neural Network Approaches Versus Statistical Methods in Classification of Multisource Remote Sensing Data. Institute of Electrical and Electronics Engineers, Inc. (IEEE) Transactions on Geoscience and Remote Sensing 28 (4): 540-551.

Berk et al, 1989

Berk, A., L. S. Bernstein, and D. C. Robertson. 1989. MODTRAN: A Moderate Resolution Model for LOWTRAN 7. Airforce Geophysics Laboratory Technical Report GL-TR-89-0122, Hanscom AFB, MA.

Bernstein, 1983

Bernstein, R. 1983. Image Geometry and Rectification. Chapter 21 in Manual of Remote Sensing. Ed. R. N. Colwell. Falls Church, Virginia: American Society of Photogrammetry.

Blom and Daily, 1982

Blom, R. G., and M. Daily. 1982. Radar Image Processing for Rock-Type Discrimination. Institute of Electrical and Electronics Engineers, Inc. (IEEE) Transactions on Geoscience and Remote Sensing 20 (3).

Buchanan, 1979

Buchanan, M. D. 1979. Effective Utilization of Color in Multidimensional Data Presentation. Paper presented at the Society of Photo-Optical Instrumentation Engineers, 199:9-19.

Cannon, 1983

Cannon, T. M. 1983. Background Pattern Removal by Power Spectral Filtering. Applied Optics 22 (6): 777-779.

Center for Health Applications of Aerospace Related Technologies, 1998

Center for Health Applications of Aerospace Related Technologies (CHAART), The. 1998. Sensor Specifications: SeaWiFS. Retrieved December 28, 2001, from http://geo.arc.nasa.gov/sge/health/sensor/sensors/seastar.html

Center for Health Applications of Aerospace Related Technologies, 2000a

———. 2000a. Sensor Specifications: Ikonos. Retrieved December 28, 2001, from http://geo.arc.nasa.gov/sge/health/sensor/sensors/ikonos.html

Center for Health Applications of Aerospace Related Technologies, 2000b

———. 2000b. Sensor Specifications: Landsat. Retrieved December 31, 2001, from http://geo.arc.nasa.gov/sge/health/sensor/sensors/landsat.html

Center for Health Applications of Aerospace Related Technologies, 2000c

———. 2000c. Sensor Specifications: SPOT. Retrieved December 31, 2001, from http://geo.arc.nasa.gov/sge/health/sensor/sensors/spot.html

Centre National D’Etudes Spatiales, 1998

Centre National D’Etudes Spatiales (CNES). 1998. CNES: Centre National D’Etudes Spatiales. Retrieved October 25, 1999, from http://sads.cnes.fr/ceos/cdrom-98/ceos1/cnes/gb/lecnes.htm

Chahine et al, 1983

Chahine, M. T., D. J. McCleese, P. W. Rosenkranz, and D. H. Staelin. 1983. Interaction Mechanisms within the Atmosphere. Chapter 5 in Manual of Remote Sensing. Ed. R. N. Colwell. Falls Church, Virginia: American Society of Photogrammetry.

Chavez et al, 1977

Chavez, P. S., Jr., G. L. Berlin, and W. B. Mitchell. 1977. Computer Enhancement Techniques of Landsat MSS Digital Images for Land Use/Land Cover Assessments. Remote Sensing Earth Resource. 6:259.

Chavez and Berlin, 1986

Chavez, P. S., Jr., and G. L. Berlin. 1986. Restoration Techniques for SIR-B Digital Radar Images. Paper presented at the Fifth Thematic Conference: Remote Sensing for Exploration Geology, Reno, Nevada, September/October 1986.

Chavez et al, 1991

Chavez, P. S., Jr., S. C. Sides, and J. A. Anderson. 1991. Comparison of Three Different Methods to Merge Multiresolution and Multispectral Data: Landsat TM and SPOT Panchromatic. Photogrammetric Engineering & Remote Sensing 57 (3): 295-303.

Clark and Roush, 1984

Clark, R. N., and T. L. Roush. 1984. Reflectance Spectroscopy: Quantitative Analysis Techniques for Remote Sensing Applications. Journal of Geophysical Research 89 (B7): 6329-6340.

Clark et al, 1990

Clark, R. N., A. J. Gallagher, and G. A. Swayze. 1990. “Material Absorption Band Depth Mapping of Imaging Spectrometer Data Using a Complete Band Shape Least-Squares Fit with Library Reference Spectra”. Paper presented at the Second Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) Conference, Pasadena, California, June 1990. Jet Propulsion Laboratory Publication 90-54:176-186.

Colby, 1991

Colby, J. D. 1991. Topographic Normalization in Rugged Terrain. Photogrammetric Engineering & Remote Sensing 57 (5): 531-537.

Colwell, 1983

Colwell, R. N., ed. 1983. Manual of Remote Sensing. 2d ed. Falls Church, Virginia: American Society of Photogrammetry.

Common, P., 1994

Common, P. 1994. Independent Component Analysis, a New Concept? Signal Processing 36: 287-314.

Congalton, R. 1991

Congalton, R. 1991. A Review of Assessing the Accuracy of Classifications of Remotely Sensed Data. Remote Sensing of Environment 37: 35-46.

Conrac Corporation, 1980

Conrac Corporation. 1980. Raster Graphics Handbook. New York: Van Nostrand Reinhold.

Crane, 1971

Crane, R. B. 1971. “Preprocessing Techniques to Reduce Atmospheric and Sensor Variability in Multispectral Scanner Data.” Proceedings of the 7th International Symposium on Remote Sensing of Environment. Ann Arbor, Michigan, p. 1345.

Crippen, 1987

Crippen, R. E. 1987. The Regression Intersection Method of Adjusting Image Data for Band Ratioing. International Journal of Remote Sensing 8 (2): 137-155.

Crippen, 1989a

———. 1989a. A Simple Spatial Filtering Routine for the Cosmetic Removal of Scan-Line Noise from Landsat TM P-Tape Imagery. Photogrammetric Engineering & Remote Sensing 55 (3): 327-331.

Crippen, 1989b

———. 1989b. Development of Remote Sensing Techniques for the Investigation of Neotectonic Activity, Eastern Transverse Ranges and Vicinity, Southern California. Ph.D. diss., University of California, Santa Barbara.

Crist et al, 1986

Crist, E. P., R. Laurin, and R. C. Cicone. 1986. Vegetation and Soils Information Contained in Transformed Thematic Mapper Data. Paper presented at International Geosciences and Remote Sensing Symposium (IGARSS)’ 86 Symposium, ESA Publications Division, ESA SP-254.

Crist and Kauth, 1986

Crist, E. P., and R. J. Kauth. 1986. The Tasseled Cap De-Mystified. Photogrammetric Engineering & Remote Sensing 52 (1): 81-86.

Croft (Holcomb), 1993

Croft, F. C., N. L. Faust, and D. W. Holcomb. 1993. Merging Radar and VIS/IR Imagery. Paper presented at the Ninth Thematic Conference on Geologic Remote Sensing, Pasadena, California, February 1993.

Cullen, 1972

Cullen, C. G. 1972. Matrices and Linear Transformations. 2d ed. Reading, Massachusetts: Addison-Wesley Publishing Company.

Daily, 1983

Daily, M. 1983. Hue-Saturation-Intensity Split-Spectrum Processing of Seasat Radar Imagery. Photogrammetric Engineering & Remote Sensing 49 (3): 349-355.

Dent, 1985

Dent, B. D. 1985. Principles of Thematic Map Design. Reading, Massachusetts: Addison-Wesley Publishing Company.

DigitalGlobe, 2008a

DigitalGlobe, 2008a. QuickBird. Retrieved December 8, 2008 from http://www.digitalglobe.com/index.php/85/QuickBird

DigitalGlobe, 2008b

DigitalGlobe, 2008b. WorldView-1. Retrieved December 8, 2008 from http://www.digitalglobe.com/index.php/86/WorldView-1

DigitalGlobe, 2010

DigitalGlobe, 2010. WorldView-2. Retrieved September 7, 2010 from http://www.digitalglobe.com/index.php/88/WorldView-2

DLR (German Aerospace Center). 2008

DLR (German Aerospace Center). 2008. TerraSAR-X Mission. Retrieved December 5, 2008, from http://www.dlr.de/tsx/main/mission_en.htm

Earth Remote Sensing Data Analysis Center, 2000

Earth Remote Sensing Data Analysis Center (ERSDAC). 2000. JERS-1 OPS. Retrieved December 28, 2001, from http://www.ersdac.or.jp/Projects/JERS1/JOPS/JOPS_E.html

Eberlein and Weszka, 1975

Eberlein, R. B., and J. S. Weszka. 1975. Mixtures of Derivative Operators as Edge Detectors. Computer Graphics and Image Processing 4: 180-183.

Ebner, 1976

Ebner, H. 1976. Self-calibrating block adjustment. Bildmessung und Luftbildwesen 44: 128-139.

e-GEOS, 2008

e-GEOS S.p.A., 2008. COSMO-SkyMed Mission. Retrieved December 8, 2008 from http://www.e-geos.it/cosmoMission.htm

Elachi, 1987

Elachi, C. 1987. Introduction to the Physics and Techniques of Remote Sensing. New York: John Wiley & Sons.

El-Hakim and Ziemann, 1984

El-Hakim, S.F. and H. Ziemann. 1984. A Step-by-Step Strategy for Gross-Error Detection. Photogrammetric Engineering & Remote Sensing 50 (6): 713-718.

Environmental Systems Research Institute, 1990

Environmental Systems Research Institute, Inc. 1990. Understanding GIS: The ArcInfo Method. Redlands, California: ESRI, Incorporated.

Environmental Systems Research Institute, 1992

———. 1992. ARC Command References 6.0. Redlands, California: ESRI, Incorporated.

Environmental Systems Research Institute, 1992

———. 1992. Data Conversion: Supported Data Translators. Redlands, California: ESRI, Incorporated.

Environmental Systems Research Institute, 1992

———. 1992. Managing Tabular Data. Redlands, California: ESRI, Incorporated.

Environmental Systems Research Institute, 1992

———. 1992. Map Projections & Coordinate Management: Concepts and Procedures. Redlands, California: ESRI, Incorporated.

Environmental Systems Research Institute, 1996

———. 1996. Using ArcView GIS. Redlands, California: ESRI, Incorporated.

Environmental Systems Research Institute, 1997

———. 1997. ArcInfo. Version 7.2.1. ArcInfo HELP. Redlands, California: ESRI, Incorporated.

Environmental Systems Research Institute, 2000

———. 1994-2000. Understanding Map Projections. Redlands, California: ESRI, Incorporated.

Eurimage, 1998

Eurimage. 1998. JERS-1. Retrieved October 1, 1999, from http://www.eurimage.com/Products/JERS_1.html

European Space Agency, 1995

European Space Agency (ESA). 1995. ERS-2: A Continuation of the ERS-1 Success, by G. Duchossois and R. Zobl. Retrieved October 1, 1999, from http://esapub.esrin.esa.it/bulletin/bullet83/ducho83.htm

European Space Agency, 1997

———. 1997. SAR Mission Planning for ERS-1 and ERS-2, by S. D’Elia and S. Jutz. Retrieved October 1, 1999, from http://esapub.esrin.esa.it/bulletin/bullet90/b90delia.htm

European Space Agency, 2008a

———. 2008a. ERS-1. Nine-year success story comes to an end. Retrieved December 8, 2008 from http://earth.esa.int/ers/eeo/ers_end.html

European Space Agency, 2008b

———. 2008b. Envisat Mission and Instruments. Retrieved December 8, 2008 from http://envisat.esa.int/object/index.cfm?fobjectid=3762 and http://envisat.esa.int/

European Space Agency, 2010a

———. 2010a. RADAR and SAR Glossary. Retrieved January 5, 2010 from http://envisat.esa.int/handbooks/asar/CNTR5-2.htm#eph.asar.gloss.radsar

European Space Agency, 2010b

———. 2010b. FORMOSAT-2 Mission. Retrieved February 22, 2010 from http://earth.esa.int/object/index.cfm?fobjectid=5094

European Space Agency, 2010c

———. 2010c. KOMPSAT-1 Mission and KOMPSAT-2 Mission. Retrieved February 22, 2010 from http://earth.esa.int/object/index.cfm?fobjectid=3749 and http://earth.esa.int/object/index.cfm?fobjectid=5098

Fahnestock and Schowengerdt, 1983

Fahnestock, J. D., and R. A. Schowengerdt. 1983. Spatially Variant Contrast Enhancement Using Local Range Modification. Optical Engineering 22 (3): 378-381.

Faust, 1989

Faust, N. L. 1989. Image Enhancement. Volume 20, Supplement 5 of Encyclopedia of Computer Science and Technology. Ed. A. Kent and J. G. Williams. New York: Marcel Dekker, Inc.

Faust et al, 1991

Faust, N. L., W. Sharp, D. W. Holcomb, P. Geladi, and K. Esbenson. 1991. Application of Multivariate Image Analysis (MIA) to Analysis of TM and Hyperspectral Image Data for Mineral Exploration. Paper presented at the Eighth Thematic Conference on Geologic Remote Sensing, Denver, Colorado, April/May 1991.

Fisher, 1991

Fisher, P. F. 1991. Spatial Data Sources and Data Problems. In Geographical Information Systems: Principles and Applications. Ed. D. J. Maguire, M. F. Goodchild, and D. W. Rhind. New York: Longman Scientific & Technical.

Flaschka, 1969

Flaschka, H. A. 1969. Quantitative Analytical Chemistry: Vol 1. New York: Barnes & Noble, Inc.

Förstner and Gülch, 1987

Förstner, W. and E. Gülch. 1987. A fast operator for detection and precise location of distinct points, corners and centers of circular features. Paper presented at the Intercommission Conference on Fast Processing of Photogrammetric Data, Interlaken, Switzerland, June 1987, 281-305.

Fraser, 1986

Fraser, S. J., et al. 1986. “Targeting Epithermal Alteration and Gossans in Weathered and Vegetated Terrains Using Aircraft Scanners: Successful Australian Case Histories.” Paper presented at the fifth Thematic Conference: Remote Sensing for Exploration Geology, Reno, Nevada.

Free On-Line Dictionary of Computing, 1999a

Free On-Line Dictionary of Computing. 1999a. American Standard Code for Information Interchange. Retrieved October 25, 1999, from http://foldoc.doc.ic.ac.uk/foldoc

Free On-Line Dictionary of Computing, 1999b

———. 1999b. central processing unit. Retrieved October 25, 1999, from http://foldoc.doc.ic.ac.uk/foldoc

Free On-Line Dictionary of Computing, 1999c

———. 1999c. lossy. Retrieved November 11, 1999, from http://foldoc.doc.ic.ac.uk/foldoc

Free On-Line Dictionary of Computing, 1999d

———. 1999d. random-access memory. Retrieved November 11, 1999, from http://foldoc.doc.ic.ac.uk/foldoc

Free On-Line Dictionary of Computing, 1999e

———. 1999e. wavelet. Retrieved November 11, 1999, from http://foldoc.doc.ic.ac.uk/foldoc

Frost et al, 1982

Frost, V. S., J. A. Stiles, K. S. Shanmugan, and J. C. Holtzman. 1982. A Model for Radar Images and Its Application to Adaptive Digital Filtering of Multiplicative Noise. Institute of Electrical and Electronics Engineers, Inc. (IEEE) Transactions on Pattern Analysis and Machine Intelligence PAMI-4 (2): 157-166.

GeoEye, 2008

GeoEye, 2008. Imagery Sources and GeoEye-1 Fact Sheet. Retrieved December 8, 2008 from http://www.geoeye.com/CorpSite/products/imagery-sources/Default.aspx#geoeye1 and http://launch.geoeye.com/LaunchSite/about/fact_sheet.aspx

Geotoolkit.org, 2009a

Geotoolkit.org, 2009a. Snapshot API document. Retrieved December 4, 2009 from http://www.geotoolkit.org/apidocs/org/geotoolkit/referencing/operation/projection/Stereographic.html

Geotoolkit.org, 2009b

Geotoolkit.org, 2009b. Snapshot API document. Retrieved December 4, 2009 from http://javadoc.geotools.fr/2.4/org/geotools/referencing/operation/projection/Krovak.html

Gonzalez and Wintz, 1977

Gonzalez, R. C., and P. Wintz. 1977. Digital Image Processing. Reading, Massachusetts: Addison-Wesley Publishing Company.

Gonzalez and Woods, 2001

Gonzalez, R., and R. Woods. 2001. Digital Image Processing. New Jersey: Prentice Hall.

Green and Craig, 1985

Green, A. A., and M. D. Craig. 1985. Analysis of Aircraft Spectrometer Data with Logarithmic Residuals. Paper presented at the AIS Data Analysis Workshop, Pasadena, California, April 1985. Jet Propulsion Laboratory (JPL) Publication 85 (41): 111-119.

Grün, 1978

Grün, A., 1978. Experiences with self calibrating bundle adjustment. Paper presented at the American Congress on Surveying and Mapping/American Society of Photogrammetry (ACSM-ASP) Convention, Washington, D.C., February/March 1978.

Grün and Baltsavias, 1988

Grün, A., and E. P. Baltsavias. 1988. Geometrically constrained multiphoto matching. Photogrammetric Engineering and Remote Sensing 54 (5): 633-641.

Haralick, 1979

Haralick, R. M. 1979. Statistical and Structural Approaches to Texture. Paper presented at meeting of the Institute of Electrical and Electronics Engineers, Inc. (IEEE), Seattle, Washington, May 1979, 67 (5): 786-804.

Heipke, 1996

Heipke, C. 1996. Automation of interior, relative and absolute orientation. International Archives of Photogrammetry and Remote Sensing 31(B3): 297-311.

Helava, 1988

Helava, U.V. 1988. Object space least square correlation. International Archives of Photogrammetry and Remote Sensing 27 (B3): 321-331.

Hodgson and Shelley, 1994

Hodgson, M. E., and B. M. Shelley. 1994. Removing the Topographic Effect in Remotely Sensed Imagery. ERDAS Monitor, 6 (1): 4-6.

Hord, 1982

Hord, R. M. 1982. Digital Image Processing of Remotely Sensed Data. New York: Academic Press.

ImageSat International N.V., 2008

ImageSat International N.V. 2008. EROS Satellite Overview, and Product definition. Retrieved February 22, 2010 from http://www.imagesatintl.com.

Irons and Petersen, 1981

Irons, J. R., and G. W. Petersen. 1981. Texture Transforms of Remote Sensing Data. Remote Sensing of Environment 11:359-370.

Jacobsen, 1980

Jacobsen, K. 1980. Vorschläge zur Konzeption und zur Bearbeitung von Bündelblockausgleichungen. Ph.D. dissertation, wissenschaftliche Arbeiten der Fachrichtung Vermessungswesen der Universität Hannover, No. 102.

Jacobsen, 1982

———. 1982. Programmgesteuerte Auswahl zusätzlicher Parameter. Bildmessung und Luftbildwesen, p. 213-217.

Jacobsen, 1984

———. 1984. Experiences in blunder detection for Aerial Triangulation. Paper presented at International Society for Photogrammetry and Remote Sensing (ISPRS) 15th Congress, Rio de Janeiro, Brazil, June 1984.

Japan Aerospace Exploration Agency, 2003

Japan Aerospace Exploration Agency, 2003. ALOS Overview. Retrieved December 3, 2008 from http://www.eorc.jaxa.jp/hatoyama/satellite/satdata/alos_e.html.

Japan Aerospace Exploration Agency, 2003a

Japan Aerospace Exploration Agency, 2003a. About ALOS. Retrieved December 4, 2008 from http://www.eorc.jaxa.jp/ALOS/about/avnir2.htm.

Japan Aerospace Exploration Agency, 2003b

Japan Aerospace Exploration Agency, 2003b. About ALOS. Retrieved December 4, 2008 from http://www.eorc.jaxa.jp/ALOS/about/palsar.htm.

Japan Aerospace Exploration Agency, 2003c

Japan Aerospace Exploration Agency, 2003c. About ALOS. Retrieved December 4, 2008 from http://www.eorc.jaxa.jp/ALOS/about/prism.htm.

Japan Aerospace Exploration Agency, 2007

Japan Aerospace Exploration Agency, 2007. Japanese Earth Resources Satellite (JERS-1) Operation Finished. Retrieved December 3, 2008 from http://www.jaxa.jp/projects/sat/jers1/index_e.html

Jensen, 1986

Jensen, J. R. 1986. Introductory Digital Image Processing: A Remote Sensing Perspective. Englewood Cliffs, New Jersey: Prentice-Hall.

Jensen, 1996

Jensen, J. R. 1996. Introductory Digital Image Processing: A Remote Sensing Perspective. 2d ed. Englewood Cliffs, New Jersey: Prentice-Hall.

Jensen et al, 1983

Jensen, J. R., et al. 1983. Urban/Suburban Land Use Analysis. Chapter 30 in Manual of Remote Sensing. Ed. R. N. Colwell. Falls Church, Virginia: American Society of Photogrammetry.

Johnston, 1980

Johnston, R. J. 1980. Multivariate Statistical Analysis in Geography: A Primer on the General Linear Model. Essex, England: Longman Group Ltd.

Jordan and Beck, 1999

Jordan, L. E., III, and L. Beck. 1999. NITFS—The National Imagery Transmission Format Standard. Atlanta, Georgia: ERDAS, Inc.

Kidwell, 1988

Kidwell, K. B., ed. 1988. NOAA Polar Orbiter Data (TIROS-N, NOAA-6, NOAA-7, NOAA-8, NOAA-9, NOAA-10, and NOAA-11) Users Guide. Washington, DC: National Oceanic and Atmospheric Administration.

King et al, 2001

King, R., and J. Wang. 2001. A Wavelet Based Algorithm for Pan Sharpening Landsat 7 Imagery.

Kloer, 1994

Kloer, B. R. 1994. Hybrid Parametric/Non-Parametric Image Classification. Paper presented at the ACSM-ASPRS Annual Convention, Reno, Nevada, April 1994.

Kneizys et al, 1988

Kneizys, F. X., E. P. Shettle, L. W. Abreu, J. H. Chettwynd, G. P. Anderson, W. O. Gallery, J. E. A. Selby, and S. A. Clough. 1988. Users Guide to LOWTRAN 7. Hanscom AFB, Massachusetts: Air Force Geophysics Laboratory. AFGL-TR-88-0177.

Konecny, 1994

Konecny, G. 1994. New Trends in Technology, and their Application: Photogrammetry and Remote Sensing—From Analog to Digital. Paper presented at the Thirteenth United Nations Regional Cartographic Conference for Asia and the Pacific, Beijing, China, May 1994.

Konecny and Lehmann, 1984

Konecny, G., and G. Lehmann. 1984. Photogrammetrie. Berlin: Walter de Gruyter Verlag.

Kruse, 1988

Kruse, F. A. 1988. Use of Airborne Imaging Spectrometer Data to Map Minerals Associated with Hydrothermally Altered Rocks in the Northern Grapevine Mountains, Nevada and California. Remote Sensing of the Environment 24 (1): 31-51.

Krzystek, 1998

Krzystek, P. 1998. On the use of matching techniques for automatic aerial triangulation. Paper presented at meeting of the International Society for Photogrammetry and Remote Sensing (ISPRS) Commission III Conference, Columbus, Ohio, July 1998.

Kubik, 1982

Kubik, K. 1982. An error theory for the Danish method. Paper presented at International Society for Photogrammetry and Remote Sensing (ISPRS) Commission III Symposium, Helsinki, Finland, June 1982.

Larsen and Marx, 1981

Larsen, R. J., and M. L. Marx. 1981. An Introduction to Mathematical Statistics and Its Applications. Englewood Cliffs, New Jersey: Prentice-Hall, Inc.

Lavreau, 1991

Lavreau, J. 1991. De-Hazing Landsat Thematic Mapper Images. Photogrammetric Engineering & Remote Sensing 57 (10): 1297-1302.

Lee and Walsh, 1984

Lee, J. E., and J. M. Walsh. 1984. Map Projections for Use with the Geographic Information System. U.S. Fish and Wildlife Service, FWS/OBS-84/17.

Lee, 1981

Lee, J. S. 1981. “Speckle Analysis and Smoothing of Synthetic Aperture Radar Images.” Computer Graphics and Image Processing 17 (1): 24-32.

Leick, 1990

Leick, A. 1990. GPS Satellite Surveying. New York, New York: John Wiley & Sons.

Lemeshewsky, 1999

Lemeshewsky, G. P. 1999. Multispectral Multisensor Image Fusion Using Wavelet Transforms. In Visual Image Processing VIII, ed. S. K. Park and R. Juday. Proceedings of SPIE 3716: 214-222.

Lemeshewsky, 2002a

Lemeshewsky, G. P. 2002a. Personal communication.

Lemeshewsky, 2002b

Lemeshewsky, G. P. 2002b. Multispectral Image Sharpening Using a Shift-Invariant Wavelet Transform and Adaptive Processing of Multiresolution Edges. In Visual Information Processing XI, ed. Z. Rahman and R. A. Schowengerdt. Proceedings of SPIE 4736.

Li, 1983

Li, D. 1983. Ein Verfahren zur Aufdeckung grober Fehler mit Hilfe der a posteriori-Varianzschätzung. Bildmessung und Luftbildwesen 5.

Li, 1985

———. 1985. Theorie und Untersuchung der Trennbarkeit von groben Paßpunktfehlern und systematischen Bildfehlern bei der photogrammetrischen punktbestimmung. Ph.D. dissertation, Deutsche Geodätische Kommission, Reihe C, No. 324.

Lillesand and Kiefer, 1987

Lillesand, T. M., and R. W. Kiefer. 1987. Remote Sensing and Image Interpretation. New York: John Wiley & Sons, Inc.

Lopes et al, 1990

Lopes, A., E. Nezry, R. Touzi, and H. Laur. 1990. Maximum A Posteriori Speckle Filtering and First Order Textural Models in SAR Images. Paper presented at the International Geoscience and Remote Sensing Symposium (IGARSS), College Park, Maryland, May 1990, 3:2409-2412.

Lü, 1988

Lü, Y. 1988. Interest operator and fast implementation. IASPRS 27 (B2), Kyoto.

Lyon, 1987

Lyon, R. J. P. 1987. Evaluation of AIS-2 Data over Hydrothermally Altered Granitoid Rocks. Proceedings of the Third AIS Data Analysis Workshop. JPL Pub. 87-30:107-119.

Magellan Corporation, 1999

Magellan Corporation. 1999. GLONASS and the GPS+GLONASS Advantage. Retrieved October 25, 1999, from http://www.magellangps.com/geninfo/glonass.htm

Maling, 1992

Maling, D. H. 1992. Coordinate Systems and Map Projections. 2d ed. New York: Pergamon Press.

Mallat, 1989

Mallat, S. G. 1989. A Theory for Multiresolution Signal Decomposition: The Wavelet Representation. IEEE Transactions on Pattern Analysis and Machine Intelligence 11 (7).

Marble, 1990

Marble, D. F. 1990. Geographic Information Systems: An Overview. In Introductory Readings in Geographic Information Systems. Ed. D. J. Peuquet and D. F. Marble. Bristol, Pennsylvania: Taylor & Francis, Inc.

MathResources, 2010

MathResources Inc. 2009. Argand diagram. The MathResource for Windows online. Retrieved January 5, 2010 from http://www.mathresources.com/products/mathresource/maa/argand_diagram.html.

Mayr, 1995

Mayr, W. 1995. Aspects of automatic aerotriangulation. Paper presented at the 45th Photogrammetric Week, Wichmann Verlag, Karlsruhe, September 1995, 225-234.

Mendenhall and Scheaffer, 1973

Mendenhall, W., and R. L. Scheaffer. 1973. Mathematical Statistics with Applications. North Scituate, Massachusetts: Duxbury Press.

Merenyi et al, 1996

Merenyi, E., J. V. Taranik, T. Monor, and W. Farrand. March 1996. Quantitative Comparison of Neural Network and Conventional Classifiers for Hyperspectral Imagery. Paper presented at the Sixth AVIRIS Conference. JPL Pub.

Minnaert and Szeicz, 1961

Minnaert, J. L., and G. Szeicz. 1961. The Reciprocity Principle in Lunar Photometry. Astrophysics Journal 93:403-410.

Nagao and Matsuyama, 1978

Nagao, M., and T. Matsuyama. 1978. Edge Preserving Smoothing. Computer Graphics and Image Processing 9:394-407.

National Aeronautics and Space Administration, 1995a

National Aeronautics and Space Administration (NASA). 1995a. Mission Overview. Retrieved October 2, 1999, from http://southport.jpl.nasa.gov/science/missiono.html

National Aeronautics and Space Administration, 1995b

———. 1995b. Thematic Mapper Simulators (TMS). Retrieved October 2, 1999, from http://geo.arc.nasa.gov/esdstaff/jskiles/top-down/OTTER/OTTER_docs/DAEDALUS.html

National Aeronautics and Space Administration, 1996

———. 1996. SAR Development. Retrieved October 2, 1999, from http://southport.jpl.nasa.gov/reports/iwgsar/3_SAR_Development.html

National Aeronautics and Space Administration, 1997

———. 1997. What is SIR-C/X-SAR? Retrieved October 2, 1999, from http://southport.jpl.nasa.gov/desc/SIRCdesc.html

National Aeronautics and Space Administration, 1998

———. 1998. Landsat 7. Retrieved September 30, 1999, from http://geo.arc.nasa.gov/sge/landsat/landsat.html

National Aeronautics and Space Administration, 1999

———. 1999. An Overview of SeaWiFS and the SeaStar Spacecraft. Retrieved September 30, 1999, from http://seawifs.gsfc.nasa.gov/SEAWIFS/SEASTAR/SPACECRAFT.html

National Aeronautics and Space Administration, 2001

———. 2001. Landsat 7 Mission Specifications. Retrieved December 28, 2001, from http://landsat.gsfc.nasa.gov/project/L7_Specifications.html

National Aeronautics and Space Administration, 2004

———. 2004. ASTER Advanced Spaceborne Thermal Emission and Reflection Radiometer. Retrieved February 22, 2010, from http://asterweb.jpl.nasa.gov/index.asp

National Aeronautics and Space Administration, 2006

———. 2006. SIR-C/X-SAR Flight 1 and 2 Statistics. Retrieved January 6, 2009, from http://southport.jpl.nasa.gov/sir-c/html/mission.html

National Aeronautics and Space Administration, 2008

———. 2008. AIRSAR. Retrieved January 9, 2009, from http://airsar.jpl.nasa.gov/

National Geospatial-Intelligence Agency (NGA), 2010a

National Geospatial-Intelligence Agency (NGA). 2006. DMA Technical Manual (TM 8358.1), Datums, Ellipsoids, Grids, and Grid Reference Systems. Retrieved April 14, 2010, from http://earth-info.nga.mil/GandG/publications/.

National Geospatial-Intelligence Agency (NGA), 2010b

National Geospatial-Intelligence Agency (NGA). 2009. Military Grid Reference System (MGRS). Retrieved June 2, 2010, from http://earth-info.nga.mil/GandG/coordsys/grids/referencesys.html.

National Imagery and Mapping Agency, 1998

National Imagery and Mapping Agency (NIMA). 1998. The National Imagery and Mapping Agency Fact Sheet. Retrieved November 11, 1999, from http://164.214.2.59/general/factsheets/nimafs.html

National Remote Sensing Agency, 1998

National Remote Sensing Agency, Department of Space, Government of India. 1998. Table 3. Specifications of IRS-ID LISS-III camera. Retrieved December 28, 2001 from http://202.54.32.164/interface/inter/v8n4/v8n4t_3.html

National Space Organization, 2008

National Space Organization, 2008. FORMOSAT-2. Retrieved February 22, 2010 from http://www.nspo.org.tw/2008e/projects/project2/intro.htm.

Needham, 1986

Needham, B. H. 1986. Availability of Remotely Sensed Data and Information from the U.S. National Oceanic and Atmospheric Administration’s Satellite Data Services Division. Chapter 9 in Satellite Remote Sensing for Resources Development, edited by Karl-Heinz Szekielda. Gaithersburg, Maryland: Graham & Trotman, Inc.

Oppenheim and Schafer, 1975

Oppenheim, A. V., and R. W. Schafer. 1975. Digital Signal Processing. Englewood Cliffs, New Jersey: Prentice-Hall, Inc.

ORBIMAGE, 1999

ORBIMAGE. 1999. OrbView-3: High-Resolution Imagery in Real-Time. Retrieved October 1, 1999, from http://www.orbimage.com/satellite/orbview3/orbview3.html

ORBIMAGE, 2000

———. 2000. OrbView-3: High-Resolution Imagery in Real-Time. Retrieved December 31, 2000, from http://www.orbimage.com/corp/orbimage_system/ov3/

Orbital Sciences Corporation, 2008

Orbital Sciences Corporation. 2008. OrbView-3. Retrieved December 5, 2008, from http://www.orbital.com/SatellitesSpace/ImagingDefense/OV3/index.shtml

Padwick, et al, 2010

Padwick, C., Deskevich, M., Pacifici, F., and Smallwood, S. 2010. WorldView-2 Pan-Sharpening. Paper presented at 2010 Conference of American Society for Photogrammetry and Remote Sensing, San Diego, CA. April 2010.

Parent and Church, 1987

Parent, P., and R. Church. 1987. Evolution of Geographic Information Systems as Decision Making Tools. Fundamentals of Geographic Information Systems: A Compendium. Ed. W. J. Ripple. Bethesda, Maryland: American Society for Photogrammetry and Remote Sensing and American Congress on Surveying and Mapping.

Pearson, 1990

Pearson, F. 1990. Map Projections: Theory and Applications. Boca Raton, Florida: CRC Press, Inc.

Peli and Lim, 1982

Peli, T., and J. S. Lim. 1982. Adaptive Filtering for Image Enhancement. Optical Engineering 21 (1): 108-112.

Pratt, 1991

Pratt, W. K. 1991. Digital Image Processing. 2d ed. New York: John Wiley & Sons, Inc.

Press et al, 1988

Press, W. H., B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling. 1988. Numerical Recipes in C. New York, New York: Cambridge University Press.

Prewitt, 1970

Prewitt, J. M. S. 1970. Object Enhancement and Extraction. In Picture Processing and Psychopictorics. Ed. B. S. Lipkin and A. Rosenfeld. New York: Academic Press.

RADARSAT, 1999

RADARSAT. 1999. RADARSAT Specifications. Retrieved September 14, 1999 from http://radarsat.space.gc.ca/

RADARSAT-2, 2008

RADARSAT-2, 2008. About RADARSAT-2. MacDonald, Dettwiler and Associates, Ltd. Retrieved December 4, 2008 from http://www.radarsat2.info/about/index.asp

Rado, 1992

Rado, B. Q. 1992. An Historical Analysis of GIS. Mapping Tomorrow’s Resources. Logan, Utah: Utah State University.

RapidEye AG, 2008

RapidEye AG. 2008. RapidEye Standard Image Products. Brandenburg an der Havel, Germany.

RapidEye AG, 2009

RapidEye AG. 2009. RapidEye Standard Image Product Specifications. Brandenburg an der Havel, Germany.

Richter, 1990

Richter, R. 1990. A Fast Atmospheric Correction Algorithm Applied to Landsat TM Images. International Journal of Remote Sensing 11 (1): 159-166.

Ritter and Ruth, 1995

Ritter, N., and M. Ruth. 1995. GeoTIFF Format Specification Rev. 1.0. Retrieved October 4, 1999, from http://www.remotesensing.org/geotiff/spec/geotiffhome.html

Robinson and Sale, 1969

Robinson, A. H., and R. D. Sale. 1969. Elements of Cartography. 3d ed. New York: John Wiley & Sons, Inc.

Rockinger and Fechner, 1998

Rockinger, O., and T. Fechner. 1998. Pixel-Level Image Fusion. In Signal Processing, Sensor Fusion and Target Recognition, ed. I. Kadar. Proceedings of SPIE 3374: 378-388.

Russian Space Web, 2002

Russian Space Web, 2002. Spacecraft: Almaz-T. Retrieved December 3, 2008 from: http://www.russianspaceweb.com/almazt.html

Sabins, 1987

Sabins, F. F., Jr. 1987. Remote Sensing Principles and Interpretation. 2d ed. New York: W. H. Freeman & Co.

Schenk, 1997

Schenk, T., 1997. Towards automatic aerial triangulation. International Society for Photogrammetry and Remote Sensing (ISPRS) Journal of Photogrammetry and Remote Sensing 52 (3): 110-121.

Schowengerdt, 1980

Schowengerdt, R. A. 1980. Reconstruction of Multispatial, Multispectral Image Data Using Spatial Frequency Content. Photogrammetric Engineering & Remote Sensing 46 (10): 1325-1334.

Schowengerdt, 1983

———. 1983. Techniques for Image Processing and Classification in Remote Sensing. New York: Academic Press.

Schwartz and Soha, 1977

Schwartz, A. A., and J. M. Soha. 1977. Variable Threshold Zonal Filtering. Applied Optics 16 (7).

Shah, C. A., 2003

Shah, C. A. 2003. Independent Component Analysis Mixture Model (ICAMM) Algorithm for Unsupervised Classification of Multi/Hyperspectral Imagery. Master's thesis, Syracuse University, Syracuse, New York, 1-188.

Shah, et al, 2007

Shah, Chintan A., et al. 2007. Towards The Development of Next Generation Remote Sensing Technology – ERDAS IMAGINE Incorporates a Higher Order Feature Extraction Technique Based on ICA. Published in Proceedings of the ASPRS 2007 Annual Conference, Tampa, Florida, May 7-11, 2007.

Shensa, 1992

Shensa, M. 1992. The Discrete Wavelet Transform. IEEE Transactions on Signal Processing 40 (10): 2464-2482.

Shikin and Plis, 1995

Shikin, E. V., and A. I. Plis. 1995. Handbook on Splines for the User. Boca Raton: CRC Press, LLC.

Simonett et al, 1983

Simonett, D. S., et al. 1983. The Development and Principles of Remote Sensing. Chapter 1 in Manual of Remote Sensing. Ed. R. N. Colwell. Falls Church, Virginia: American Society of Photogrammetry.

Slater, 1980

Slater, P. N. 1980. Remote Sensing: Optics and Optical Systems. Reading, Massachusetts: Addison-Wesley Publishing Company, Inc.

Smith et al, 1980

Smith, J. A., T. L. Lin, and K. J. Ranson. 1980. The Lambertian Assumption and Landsat Data. Photogrammetric Engineering & Remote Sensing 46 (9): 1183-1189.

Snyder, 1987

Snyder, J. P. 1987. Map Projections--A Working Manual. Geological Survey Professional Paper 1395. Washington, DC: United States Government Printing Office.

Snyder and Voxland, 1989

Snyder, J. P., and P. M. Voxland. 1989. An Album of Map Projections. U.S. Geological Survey Professional Paper 1453. Washington, DC: United States Government Printing Office.

Space Imaging, 1998

Space Imaging. 1998. IRS-ID Satellite Imagery Available for Sale Worldwide. Retrieved October 1, 1999, from http://www.spaceimage.com/newsroom/releases/1998/IRS1Dworldwide.html

Space Imaging, 1999a

———. 1999a. IKONOS. Retrieved September 30, 1999, from http://www.spaceimage.com/aboutus/satellites/IKONOS/ikonos.html

Space Imaging, 1999b

———. 1999b. IRS (Indian Remote Sensing Satellite). Retrieved September 17, 1999, from http://www.spaceimage.com/aboutus/satellites/IRS/IRS.html

Space Imaging, 1999c

———. 1999c. RADARSAT. Retrieved September 17, 1999, from http://www.spaceimage.com/aboutus/satellites/RADARSAT/radarsat.htm

SPOT Image, 1998

SPOT Image. 1998. SPOT 4—In Service! Retrieved September 30, 1999 from http://www.spot.com/spot/home/news/press/Commish.htm

SPOT Image, 1999

———. 1999. SPOT System Technical Data. Retrieved September 30, 1999, from http://www.spot.com/spot/home/system/introsat/seltec/seltec.htm

Spot series, 2006

SPOT series. 2006. CNES: Centre National D’Etudes Spatiales. Retrieved December 2, 2008, from http://spot5.cnes.fr/gb/programme/filiere.htm and http://smsc.cnes.fr/SPOT/.

Srinivasan et al, 1988

Srinivasan, R., M. Cannon, and J. White. 1988. Landsat Destriping Using Power Spectral Filtering. Optical Engineering 27 (11): 939-943.

Star and Estes, 1990

Star, J., and J. Estes. 1990. Geographic Information Systems: An Introduction. Englewood Cliffs, New Jersey: Prentice-Hall.

Steinitz et al, 1976

Steinitz, C., P. Parker, and L. E. Jordan, III. 1976. Hand Drawn Overlays: Their History and Perspective Uses. Landscape Architecture 66:444-445.

Stojic et al, 1998

Stojić, M., J. Chandler, P. Ashmore, and J. Luce. 1998. The assessment of sediment transport rates by automated digital photogrammetry. Photogrammetric Engineering & Remote Sensing 64 (5): 387-395.

Strang et al, 1997

Strang, G., and T. Nguyen. 1997. Wavelets and Filter Banks. Wellesley, Massachusetts: Wellesley-Cambridge Press.

Suits, 1983

Suits, G. H. 1983. The Nature of Electromagnetic Radiation. Chapter 2 in Manual of Remote Sensing. Ed. R. N. Colwell. Falls Church, Virginia: American Society of Photogrammetry.

Swain, 1973

Swain, P. H. 1973. Pattern Recognition: A Basis for Remote Sensing Data Analysis (LARS Information Note 111572). West Lafayette, Indiana: The Laboratory for Applications of Remote Sensing, Purdue University.

Swain and Davis, 1978

Swain, P. H., and S. M. Davis. 1978. Remote Sensing: The Quantitative Approach. New York: McGraw Hill Book Company.

Tang et al, 1997

Tang, L., J. Braun, and R. Debitsch. 1997. Automatic Aerotriangulation - Concept, Realization and Results. Photogrammetry & Remote Sensing 52 (3): 122-131.

Taylor, 1977

Taylor, P. J. 1977. Quantitative Methods in Geography: An Introduction to Spatial Analysis. Boston, Massachusetts: Houghton Mifflin Company.

Telespazio, 2008

Telespazio, 2008. COSMO-SkyMed. Retrieved December 8, 2008 from http://www.telespazio.it/cosmo.html.

Tou and Gonzalez, 1974

Tou, J. T., and R. C. Gonzalez. 1974. Pattern Recognition Principles. Reading, Massachusetts: Addison-Wesley Publishing Company.

Tsingas, 1995

Tsingas, V. 1995. Operational use and empirical results of automatic aerial triangulation. Paper presented at the 45th Photogrammetric Week, Wichmann Verlag, Karlsruhe, September 1995, 207-214.

Tucker, C. J. 1979. Red and Photographic Infrared Linear Combinations for Monitoring Vegetation. Remote Sensing of Environment 8:127-150.

United States Geological Survey (USGS). 1999a. About the EROS Data Center. Retrieved October 25, 1999, from http://edcwww.cr.usgs.gov/content_about.html

———. 1999b. Digital Orthophoto Quadrangles. Retrieved October 2, 1999, from http://mapping.usgs.gov/digitalbackyard/doqs.html

———. 1999c. What is SDTS? Retrieved October 2, 1999, from http://mcmcweb.er.usgs.gov/sdts/whatsdts.html

———. n.d. National Landsat Archive Production System (NLAPS). Retrieved September 30, 1999, from http://edc.usgs.gov/glis/hyper/guide/nlaps.html

———. 2006. National Elevation Dataset. Retrieved December 5, 2008, from http://ned.usgs.gov

———. 2006a. Advanced Very High Resolution Radiometer (AVHRR). Retrieved December 5, 2008, from http://edc.usgs.gov/guides/avhrr.html#avhrr7

———. 2008. Image Processing. Retrieved December 4, 2008, from http://landsat.usgs.gov/products_IP_imageprocessing.php

———. 2009. Mapping Raster Imagery to the Interrupted Goode Homolosine Projection. Retrieved December 7, 2009, from http://edc2.usgs.gov/1KM/goodesarticle.php

Vosselman, G., and N. Haala. 1992. Erkennung topographischer Paßpunkte durch relationale Zuordnung. Zeitschrift für Photogrammetrie und Fernerkundung 60 (6): 170-176.

Walker, T. C., and R. K. Miller. 1990. Geographic Information Systems: An Assessment of Technology, Applications, and Products. Madison, Georgia: SEAI Technical Publications.

Wang, Y. 1988a. A combined adjustment program system for close range photogrammetry. Journal of Wuhan Technical University of Surveying and Mapping 12 (2).

———. 1998b. Principles and applications of structural image matching. International Society for Photogrammetry and Remote Sensing (ISPRS) Journal of Photogrammetry and Remote Sensing 53:154-165.

———. 1994. Strukturzuordnung zur automatischen Oberflächenrekonstruktion. Ph.D. dissertation, wissenschaftliche Arbeiten der Fachrichtung Vermessungswesen der Universität Hannover.

———. 1995. A New Method for Automatic Relative Orientation of Digital Images. Zeitschrift für Photogrammetrie und Fernerkundung (ZPF) 3: 122-130.

Wang, Y., and X. Yang. 1997. Algorithm and Implementation of OrthoBASE Pyramid Layers. ERDAS, Inc. Professional Paper.

Wang, Z. 1990. Principles of Photogrammetry (with Remote Sensing). Beijing, China: Press of Wuhan Technical University of Surveying and Mapping, and Publishing House of Surveying and Mapping.

Watson, D. 1992. Contouring: A Guide to the Analysis and Display of Spatial Data. Tarrytown, New York: Elsevier Science.

Welch, R. 1990. 3-D Terrain Modeling for GIS Applications. GIS World 3 (5): 26-30.

Welch, R., and W. Ehlers. 1987. Merging Multiresolution SPOT HRV and Landsat TM Data. Photogrammetric Engineering & Remote Sensing 53 (3): 301-303.

Wolf, P. R. 1983. Elements of Photogrammetry. New York: McGraw-Hill, Inc.

Yang, X. 1997. Georeferencing CAMS Data: Polynomial Rectification and Beyond. Ph.D. dissertation, University of South Carolina.

Yang, X., and D. Williams. 1997. The Effect of DEM Data Uncertainty on the Quality of Orthoimage Generation. Paper presented at Geographic Information Systems/Land Information Systems (GIS/LIS) ‘97, Cincinnati, Ohio, October 1997, 365-371.

Yocky, D. A. 1995. Image Merging and Data Fusion by Means of the Two-Dimensional Wavelet Transform. Journal of the Optical Society of America 12 (9): 1834-1845.

Zamudio, J. A., and W. W. Atkinson. 1990. Analysis of AVIRIS Data for Spectral Discrimination of Geologic Materials in the Dolly Varden Mountains. Paper presented at the Second Airborne Visible Infrared Imaging Spectrometer (AVIRIS) Conference, Pasadena, California, June 1990, Jet Propulsion Laboratories (JPL) Publication 90-54:162-66.

Zhang, Y. 1999. A New Merging Method and Its Spectral and Spatial Effects. International Journal of Remote Sensing 20 (10): 2003-2014.

Related Reading

Battrick, B., and L. Proud, eds. 1992. ERS-1 User Handbook. Noordwijk, The Netherlands: European Space Agency Publications Division, c/o ESTEC.

Billingsley, F. C., et al. 1983. Data Processing and Reprocessing. Chapter 17 in Manual of Remote Sensing. Ed. R. N. Colwell. Falls Church, Virginia: American Society of Photogrammetry.

Burrus, C. S., and T. W. Parks. 1985. DFT/FFT and Convolution Algorithms: Theory and Implementation. New York: John Wiley & Sons, Inc.

Carter, J. R. 1989. On Defining the Geographic Information System. Fundamentals of Geographic Information Systems: A Compendium. Ed. W. J. Ripple. Bethesda, Maryland: American Society for Photogrammetry and Remote Sensing and the American Congress on Surveying and Mapping.

Center for Health Applications of Aerospace Related Technologies (CHAART), The. 1998. Sensor Specifications: IRS-P3. Retrieved December 28, 2001, from http://geo.arc.nasa.gov/sge/health/sensor/sensors/irsp3.html

Dangermond, J. 1989. A Review of Digital Data Commonly Available and Some of the Practical Problems of Entering Them into a GIS. Fundamentals of Geographic Information Systems: A Compendium. Ed. W. J. Ripple. Bethesda, Maryland: American Society for Photogrammetry and Remote Sensing and American Congress on Surveying and Mapping.

Defense Mapping Agency Aerospace Center. 1989. Defense Mapping Agency Product Specifications for ARC Digitized Raster Graphics (ADRG). St. Louis, Missouri: Defense Mapping Agency Aerospace Center.

Duda, R. O., and P. E. Hart. 1973. Pattern Classification and Scene Analysis. New York: John Wiley & Sons, Inc.

Elachi, C. 1992. Radar Images of the Earth from Space. Exploring Space.

Elachi, C. 1988. Spaceborne Radar Remote Sensing: Applications and Techniques. New York: Institute of Electrical and Electronics Engineers, Inc. (IEEE) Press.

Elassal, A. A., and V. M. Caruso. 1983. USGS Digital Cartographic Data Standards: Digital Elevation Models. Circular 895-B. Reston, Virginia: U.S. Geological Survey.

Federal Geographic Data Committee (FGDC). 1997. Content Standards for Digital Orthoimagery. Washington, DC: Federal Geographic Data Committee.

Freden, S. C., and F. Gordon, Jr. 1983. Landsat Satellites. Chapter 12 in Manual of Remote Sensing. Ed. R. N. Colwell. Falls Church, Virginia: American Society of Photogrammetry.

Geological Remote Sensing Group. 1992. Geological Remote Sensing Group Newsletter 5. Wallingford, United Kingdom: Institute of Hydrology.

Gonzalez, R. C., and R. E. Woods. 1992. Digital Image Processing. Reading, Massachusetts: Addison-Wesley Publishing Company.

Guptill, S. C., ed. 1988. A Process for Evaluating Geographic Information Systems. U.S. Geological Survey Open-File Report 88-105.

Jacobsen, K. 1994. Combined Block Adjustment with Precise Differential GPS Data. International Archives of Photogrammetry and Remote Sensing 30 (B3): 422.

Jordan, L. E., III, B. Q. Rado, and S. L. Sperry. 1992. Meeting the Needs of the GIS and Image Processing Industry in the 1990s. Photogrammetric Engineering & Remote Sensing 58 (8): 1249-1251.

Keates, J. S. 1973. Cartographic Design and Production. London: Longman Group Ltd.

Kennedy, M. 1996. The Global Positioning System and GIS: An Introduction. Chelsea, Michigan: Ann Arbor Press, Inc.

Knuth, D. E. 1987. Digital Halftones by Dot Diffusion. Association for Computing Machinery Transactions on Graphics 6:245-273.

Kraus, K. 1984. Photogrammetrie, Band II. Bonn: Dümmler Verlag.

Lue, Y., and K. Novak. 1991. Recursive Grid - Dynamic Window Matching for Automatic DEM Generation. 1991 ACSM-ASPRS Fall Convention Technical Papers.

Menon, S., P. Gao, and C. Zhan. 1991. GRID: A Data Model and Functional Map Algebra for Raster Geo-processing. Paper presented at Geographic Information Systems/Land Information Systems (GIS/LIS) ‘91, Atlanta, Georgia, October 1991, 2:551-561.

Moffitt, F. H., and E. M. Mikhail. 1980. Photogrammetry. 3d ed. New York: Harper & Row Publishers.

Nichols, D., J. Frew et al. 1983. Digital Hardware. Chapter 20 in Manual of Remote Sensing. Ed. R. N. Colwell. Falls Church, Virginia: American Society of Photogrammetry.

Sader, S. A., and J. C. Winne. 1992. RGB-NDVI Colour Composites For Visualizing Forest Change Dynamics. International Journal of Remote Sensing 13 (16): 3055-3067.

Short, N. M., Jr. 1982. The Landsat Tutorial Workbook: Basics of Satellite Remote Sensing. Washington, DC: National Aeronautics and Space Administration.

Space Imaging. 1999. LANDSAT TM. Retrieved September 17, 1999, from http://www.spaceimage.com/aboutus/satellites/Landsat/landsat.html

Stimson, G. W. 1983. Introduction to Airborne Radar. El Segundo, California: Hughes Aircraft Company.

TIFF Developer’s Toolkit. 1990. Seattle, Washington: Aldus Corp.

United States Geological Survey (USGS). 1999. Landsat Thematic Mapper Data. Retrieved September 30, 1999, from http://edc.usgs.gov/glis/hyper/guide/landsat_tm

Wolberg, G. 1990. Digital Image Warping. Los Alamitos, California: Institute of Electrical and Electronics Engineers, Inc. (IEEE) Computer Society Press.

Wolf, P. R. 1980. Definitions of Terms and Symbols used in Photogrammetry. Manual of Photogrammetry. Ed. C. C. Slama. Falls Church, Virginia: American Society of Photogrammetry.

Wong, K. W. 1980. Basic Mathematics of Photogrammetry. Chapter II in Manual of Photogrammetry. Ed. C. C. Slama. Falls Church, Virginia: American Society of Photogrammetry.

Yang, X., R. Robinson, H. Lin, and A. Zusmanis. 1993. Digital Ortho Corrections Using Pre-transformation Distortion Adjustment. 1993 ASPRS Technical Papers 3:425-434.

Index

Symbols
.OV1 (overview image) I-120
.OVR (overview image) I-120
.stk (GRID Stack file) I-132

Numerics
1:24,000 scale I-123
1:250,000 scale I-123
2D affine transformation II-609
4 mm tape I-22, I-23
7.5-minute DEM I-123
8 mm tape I-22, I-24
9-track tape I-22, I-24

A
a priori I-256, II-550
Absorption I-7, I-10
  spectra I-6, I-11
Absorption spectra II-502
Accuracy assessment II-589, II-592
Accuracy report II-594
Active sensor I-6
Adaptive filter II-482
Additional Parameter modeling (AP) II-641
ADRG I-24, I-56, I-111
  file naming convention I-115
  ordering I-129
ADRI I-56, I-116
  file naming convention I-118
  ordering I-129
Aerial photography I-110
Aerial photos I-4, I-27, II-596
Aerial triangulation (AT) II-599, II-617
Airborne GPS II-616
Airborne imagery I-55
Airborne Imaging Spectrometer I-12
Airborne Multispectral Scanner Mk2 I-12
AIRSAR I-95, I-107
Aitoff I-442
Albers Conical Equal Area I-303, I-356
ALOS I-56
ALOS AVNIR-2 sensor I-69
ALOS PALSAR sensor I-98
ALOS PRISM sensor I-69
ALOS satellite I-68
ALOS, JERS-1
  ordering I-130
Analog photogrammetry II-595
Analytical photogrammetry II-595
Annotation I-63, I-160, I-167, I-215
  element I-215
  in script models I-203
  layer I-216
ANT (Erdas 7.x) (annotation) I-63
AP II-641
Arc Coverage I-56
Arc Interchange (vector) I-64
ARC system I-111, I-119
Arc/second format I-121
Arc_Interchange to Coverage I-64
Arc_Interchange to Grid I-64
ARCGEN I-56, I-64, I-139
ArcInfo I-55, I-56, I-132, I-139, I-175, I-180, I-253
  coverages I-41
  data model I-41, I-43
  UNGENERATE I-139
ArcInfo GENERATE I-51
ArcInfo INTERCHANGE I-51
ArcView I-51
Area based matching II-628
Area of interest I-33, I-185, II-473
ASCII I-123, I-182
ASCII Raster I-56
ASCII To Point Annotation (annotation) I-63
ASCII To Point Coverage (vector) I-64
Aspect I-229, II-645, II-650
  calculating II-650
  equatorial I-230
  oblique I-230
  polar I-230
  transverse I-231
ASTER I-56
ASTER sensor I-70
AT II-617
Atmospheric correction II-473
Atmospheric effect II-461
Atmospheric modeling II-462
Attribute
  imported I-46
  in model I-201
  information I-41, I-43, I-45
  raster I-181
  thematic I-180, I-181
  vector I-181, I-183
  viewing I-182
Auto update I-150, I-151, I-152
AutoCAD I-51, I-55, I-139
Automatic image correlation II-673
Automatic tie point collection II-626
Average II-701
AVHRR I-15, I-57, I-67, I-257, II-463, II-503
  extract I-84
  full set I-84
  ordering I-128
AVHRR (Dundee Format) I-57
AVHRR (NOAA) I-56
AVHRR sensor I-82
AVIRIS I-11, I-107, I-108
Azimuth I-228
Azimuthal Equidistant I-306

B
Band I-2, I-88
  displaying I-154
Banding I-18, II-460
  see also striping
Bartlett window II-521
Basic Image Interchange Format I-62
Bayesian classifier II-571
Beam mode selection II-668
Behrmann I-309
BIL I-19, I-57
Bin II-470
Binary format I-19
Binomial interpolation II-632
BIP I-19, I-22, I-57
Bipolar Oblique Conic Conformal I-429
Bit I-19
  display device I-146
  in files (depth) I-17, I-19, I-23
Block of images II-625
Block triangulation II-599, II-617
Blocking factor I-21, I-23
Bonne I-311
Border I-221
Bpi I-24
Brightness inversion II-474
Brightness value I-17, I-146, I-147, I-154, I-158, II-464
Brovey Transform II-481
BSQ I-19, I-20, I-57
Buffer zone I-186
Bundle block adjustment II-599, II-616
Butterworth window II-521
Byte I-19

C
CADRG
  ordering I-129
CADRG (Compressed ADRG) I-57
Canadian Geographic Information System I-173
Cartesian coordinate I-42
Cartography I-211
Cartridge tape I-22, I-24
Cassini I-313, I-430
Cassini-Soldner I-430
Categorical
  data I-3
CCD II-633
CD-ROM I-18, I-22, I-24
Cell I-123
Change detection I-17, I-31, I-66, I-253
Chi-square
  distribution II-590
  statistics II-591
CIB I-121
CIB (Controlled Image Base) I-57
Class I-178, II-545
  name I-180, I-181
  value I-158, I-181
    numbering systems I-178, I-190
Classical aerial triangulation II-599
Classification I-33, I-97, I-179, II-529, II-545
  and enhanced data II-491, II-549
  and rectified data I-254
  and terrain analysis II-645
  evaluating II-589
  flow chart II-575
  iterative II-548, II-554, II-572
  scheme II-548
Clump I-186
Clustering II-557
  ISODATA II-557, II-558
  RGB II-557, II-562
Coefficient II-710
  in convolution II-475
  of variation II-529
Collinearity condition II-614
Collinearity equations II-618, II-640
Color gun I-146, I-147, I-154
Color scheme I-47, I-180
Color table I-158, I-182
  for printing I-295
Colorcell I-148
  read-only I-149
Color-infrared I-153
Colormap I-147, I-161
Complex image I-64
Confidence level II-591
Conformality I-227
Contiguity analysis I-185
Contingency matrix II-566, II-568
Continuous
  data I-3
Continuous data
  see data
Contrast stretch I-32, II-464
  for display I-154, II-467, II-704
  linear II-464, II-465
  min/max vs. standard deviation I-156, II-468
  nonlinear II-465
  piecewise linear II-466
Contrast table I-154
Control point extension II-599
Controlled Image Base I-121
Convergence value II-621
Convolution I-18
  cubic I-281
  filtering I-187, I-281, I-282, II-475
  kernel
    crisp II-479
    edge detector II-478
    edge enhancer II-478
    gradient II-534
    high frequency II-477, II-478
    low frequency I-281, II-479
    Prewitt II-534
    zero-sum II-478
Convolution kernel II-475
  high frequency I-282
  low frequency I-282
  Prewitt II-534
Coordinate
  Cartesian I-42, I-232
  conversion I-286
  file I-4, I-42, I-170
  geographic I-232, I-342
  map I-4, I-42, I-251, I-254, I-255
  planar I-232
  reference I-255
  retransformed I-277
  source I-255
  spherical I-232
Coordinate system I-3, II-603
  ground space II-603
  image space II-603
Coregistration II-687
Correlation calculations II-629
Correlation coefficient II-677
Correlation threshold I-258
Correlation windows II-628
Correlator library II-677
COSMO-Skymed satellites I-98
Covariance II-569, II-581, II-704
  sample II-705
Covariance matrix II-495, II-565, II-570, II-584, II-705
Cross correlation II-629

D
DAEDALUS I-57, I-108
Data I-174
  airborne sensor I-55
  ancillary II-550
  categorical I-3
  complex I-64
  compression II-492, II-562
  continuous I-3, I-25, I-153, I-177
    displaying I-157
  creating I-171
  elevation II-550, II-645
  enhancement II-456
  from aircraft I-107
  geocoded I-21, I-31, I-252
  gray scale I-168
  hyperspectral I-11
  interval I-3
  nominal I-3
  ordering I-127
  ordinal I-3
  packed I-83
  pseudo color I-168
  radar I-55, I-92
    applications I-97
    bands I-95
    merging II-541
  raster I-3, I-160
    converting to vector I-131
    editing I-33
    formats (BIL, etc.) I-23
    importing and exporting I-56
    in GIS I-176
    sources I-55
  ratio I-3
  satellite I-55
  structure II-496
  thematic I-3, I-25, I-158, I-180, II-553
    displaying I-160
  tiled I-28, I-64
  topographic I-121, II-645
    using I-123
  true color I-168
  vector I-160, I-168, I-180, I-253, I-287
    converting to raster I-131, I-179
    copying I-45
    displaying I-47
    editing I-206
      densify I-207
      generalize I-207
      reshape I-207
      spline I-206
      split I-207
      unsplit I-207
    from raster data I-48
    importing I-48, I-64
    in GIS I-176
    renaming I-45
    sources I-48, I-51, I-55
    structure I-43
  viewing I-167
    multiple layers I-168
    overlapping layers I-168
Data correction I-18, I-33, II-455, II-459
  geometric I-251, II-460, II-462
  radiometric I-252, II-460, II-539
Data file value I-1, I-32
  display I-154, I-170, II-465
  in classification II-545
Data storage I-18
Database
  image I-30
Decision rule II-547, II-573
  Bayesian II-583
  feature space II-578
  Mahalanobis distance II-581
  maximum likelihood II-582, II-589
  minimum distance II-580, II-582, II-589, II-591
  non-parametric II-574
  parallelepiped II-575
  parametric II-574
Decision tree II-587
Decorrelation stretch II-496
Degrees of freedom II-592
DEM I-1, I-27, I-121, I-122, I-252, II-462, II-646
  editing I-34
  interpolation I-34
  ordering I-128
Density I-24
Density of detail II-678
Descriptive information
  see attribute information I-45
Desktop Scanners I-109
Desktop scanners II-601
Detector I-66, II-460
DFAD I-64
DGN I-51
DGN (Intergraph IGDS) (vector) I-64
Differential collection geometry II-686
Differential correction I-125
DIG Files (Erdas 7.x) (vector) I-64
Digital image I-49
Digital orthophoto II-642
Digital orthophotography II-646
Digital photogrammetry II-595
Digital picture
  see image I-146
Digital terrain model (DTM) I-55
Digitizing I-49, I-254
  GCPs I-256
  operation modes I-50
  point mode I-50
  screen I-48, I-51
  stream mode I-50
  tablet I-48
DIME I-141
Dimensionality II-492, II-549, II-550, II-709
Direction of flight II-600
Discrete Cosine Transformation I-133
Disk space I-25
Display
  32-bit I-149
  DirectColor I-149, I-150, I-156, I-159
  HiColor I-149, I-152
  PC I-153
  PseudoColor I-149, I-150, I-152, I-157, I-160
  TrueColor I-149, I-151, I-152, I-156, I-159
Display device I-145, I-154, II-464, II-467, II-498
Display memory II-456
Display resolution I-145
Distance image file II-589, II-590
Distribution II-698
Distribution Rectangle I-111
Dithering I-166

  color artifacts I-167
  color patch I-167
Divergence II-566
  signature II-569
  transformed II-570
DLG I-24, I-51, I-55, I-65, I-141
Doppler centroid II-662
Doppler cone II-663
DOQ I-57
DOQ (JPEG) I-57
DOQs I-110
DRRLC I-36
DTED I-57, I-117, I-121, II-646
DTM I-55
DVD I-22
DXF I-51, I-55, I-63, I-139
DXF to Annotation I-65
DXF to Coverage I-65
Dynamic range I-16
Dynamic Range Run Length Compression I-36

E
Earth Centered System II-662
Earth Fixed Body coordinate system II-660
Earth model II-663
Eckert I I-319
Eckert II I-321
Eckert III I-323
Eckert IV I-325
Eckert V I-327
Eckert VI I-329
ECW I-38
Edge detection II-478, II-532
Edge enhancement II-478
Eigenvalue II-492
Eigenvector II-492, II-495
Eikonix I-109
Electric field II-681
Electromagnetic radiation I-5, I-66
Electromagnetic spectrum I-2, I-5
  long wave infrared region I-5
  short wave infrared region I-5
Electromagnetic wave II-680, II-693
Elements of exterior orientation II-611
Ellipse II-492, II-566
Ellipsoid II-671
Enhanced Compressed Wavelet I-38
Enhancement I-32, I-79, II-455, II-545
  linear I-155, II-464
  nonlinear II-464
  on display I-161, I-170, II-456
  radar data II-456, II-525
  radiometric II-455, II-463, II-474
  spatial II-455, II-463, II-474
  spectral II-455, II-491
Entity (AutoCAD) I-140
Envisat I-57
Envisat ASAR sensor I-99
Envisat satellite I-99
EOF (end of file) I-21
EOSAT I-18, I-31, I-128
EOSAT SOM I-331
EOV (end of volume) I-21
Ephemeris adjustment II-664
Ephemeris coordinate system II-660
Ephemeris data II-637
Ephemeris modeling II-661
equal area
  see equivalence
Equidistance I-228
Equidistant Conic I-333, I-370
Equidistant Cylindrical I-335, I-430
Equirectangular I-336
Equivalence I-227
ER Mapper I-57
ERDAS macro language (EML) I-184
ERDAS Version 7.X I-26, I-131
EROS A/EROS B sensor I-71
EROS data National Center I-129
Error matrix II-594
ERS (Conae-PAF CEOS) I-57
ERS (D-PAF CEOS) I-57
ERS (I-PAF CEOS) I-57
ERS (Tel Aviv-PAF CEOS) I-57
ERS (UK-PAF CEOS) I-57
ERS-1 I-95
ERS-1 satellite I-100
ERS-1, ERS-2
  ordering I-129
ERS-2 satellite I-101
ESRI I-41, I-173
ETAK I-51, I-55, I-65, I-141
Euclidean distance II-580, II-589, II-710
Expected value II-702
Expert classification II-585
Exposure station II-600
Extent I-216
Exterior orientation II-611
  SPOT II-636

F
.fsp.img file II-555
False color I-79
False easting I-232
False northing I-232
Fast format I-21
Fast Fourier Transform II-511
Feature based matching II-631
Feature extraction II-455
Feature point matching II-631
Feature space II-708
  area of interest II-551, II-555
  image II-554, II-708
Fiducial marks II-608
Field I-181
File
  .fsp.img II-555
  .gcc I-257
  .GIS I-26, I-132
  .gmd I-197
  .img I-2, I-25, I-153
  .LAN I-26, I-131
  .mdl I-203
  .ovr I-216
  .sig II-704
  archiving I-30
  header I-23
  output I-25, I-29, I-202, I-274, I-275
    .img II-467
    classification II-547
  pixel I-146
  tic I-43
File coordinate I-4
File name I-29
Film recorder I-293
Filter
  adaptive II-482
  Fourier image II-517
  Frost II-526, II-531
  Gamma-MAP II-526, II-532
  high-pass II-519
  homomorphic II-523
  Lee II-526, II-529
  Lee-Sigma II-526
  local region II-526, II-528
  low-pass II-518
  mean II-526, II-527
  median I-18, II-526, II-527, II-528
  periodic noise removal II-523
  Sigma II-529
  zero-sum II-535
Filtering II-475
  see also convolution filtering
FIT I-57
Flattening II-692
Flight path II-600
Focal analysis I-18
Focal length II-608
Focal operation I-33, I-187
Focal plane II-608
Fourier analysis II-456, II-511
Fourier magnitude II-511
  calculating II-513
Fourier Transform
  calculation II-513
  Editor
    window functions II-520
  filtering II-517
  high-pass filtering II-519
  inverse
    calculating II-516
  low-pass filtering II-518
  neighborhood techniques II-511
  point techniques II-511
free imagery plugins for GIS and office applications I-39
Frequency
  statistical II-698
Frost filter II-526, II-531
Function memory II-456
Fuzzy convolution II-584
Fuzzy methodology II-584

G
.gcc file I-257
.GIS file I-26, I-132
.gmd file I-197
GAC (Global Area Coverage) I-83
Gamma-MAP filter II-526, II-532
Gauss Kruger I-339
Gaussian distribution II-530
Gauss-Krüger I-440
GCP I-255, II-711
  corresponding I-256
  digitizing I-256, I-257
  matching I-257, I-258
  minimum required I-268
  prediction I-257, I-258
  selecting I-256
GCP configuration II-625
GCP requirements II-624
GCPs II-623
General Vertical Near-side Perspective I-340
Generic Binary I-57
Geocentric coordinate system II-605
Geocoded data I-21
Geocoding
  GeoTIFF I-138
GeoEye-1 satellite I-72
Geographic Information System
  see GIS
Geoid II-671
Geology I-97
GeoPDF I-57
Georeference I-252, I-289
Georeferencing
  GeoTIFF I-138
GeoTIFF I-58, I-137
  geocoding I-138
  georeferencing I-138
Gigabyte I-19
GIS I-1
  database I-175
  defined I-174
  history I-173
GIS (Erdas 7.x) I-58
Glaciology I-97
Global operation I-33
GLONASS I-124
Gnomonic I-344
GOME I-101
GPS data I-124
GPS data applications I-125
GPS satellite position I-124
Gradient kernel II-534
Graphical model I-184, II-456
  convert to script I-203
  create I-197
Graphical modeling I-185, I-195
GRASS I-58
Graticule I-221, I-232
Gray scale II-647
Gray values II-642
GRD I-58
Great circle I-228
GRID I-58, I-131, I-132
GRID (Surfer ASCII/Binary) I-58
Grid cell I-1, I-146
Grid line I-221
GRID Stack I-58
GRID Stack7x I-58
GRID Stacks I-132
Ground Control Point
  see GCP
Ground coordinate system II-605
Ground space II-603
Ground truth II-545, II-551, II-552, II-566
Ground truth data I-126
Ground-based photographs II-596

H
Halftone I-294
Hammer I-346
Hardcopy I-289
Hardware I-145
HD II-675
Header
  file I-21, I-23
  record I-21
HFA file II-670
Hierarchical pyramid technique II-674
High density (HD) II-675
High Resolution Visible sensors (HRV) II-633
Histogram I-180, I-182, II-464, II-562, II-698, II-708, II-709
  breakpoint II-468
  signature II-566, II-573
Histogram equalization
  formula II-471
Histogram match II-473
Homomorphic filtering II-523
Host workstation I-145
Hotine I-376
HRPT (High Resolution Picture Transmission) I-83
HRV II-633
Hue II-498
Huffman encoding I-133
Hydrology I-97
Hyperspectral data I-11
Hyperspectral image processing II-455

I
.img file I-2, I-25
Ideal window II-520
IFOV (instantaneous field of view) I-16
IGDS I-65
IGES I-51, I-55, I-65, I-142
IHS to RGB II-500
IKONOS satellite I-73
Image I-1, I-146, II-455
  airborne I-55
  complex I-64
  digital I-49
  microscopic I-55
  pseudo color I-47
  radar I-55
  raster I-49
  ungeoreferenced I-42
Image algebra II-503, II-549
Image Catalog I-29, I-30
Image coordinate system II-604
Image data I-1
Image display I-2, I-145
Image file I-1, I-153, II-647
  statistics II-698
Image Information I-155, I-180, I-252, I-254
Image Interpreter I-11, I-18, I-179, I-183, I-184, II-457, II-467, II-481, II-482, II-497, II-539, II-562, II-647, II-650, II-652, II-653
  functions II-457
Image Metadata dialog I-164
Image processing I-1
Image pyramid II-631
Image scale (SI) II-600, II-643
Image space II-603, II-608
Image space coordinate system II-604
IMAGINE Developers’ Toolkit I-184, I-243, I-297
IMAGINE Expert Classifier II-585
IMAGINE OrthoRadar algorithm description II-660
IMAGINE Radar Interpreter II-525
IMAGINE StereoSAR DEM
  Coregister
    theory II-673
  Degrade
    theory II-672, II-678
  Despeckle
    theory II-672
  Height
    theory II-679
  Input
    theory II-668
  Match
    theory II-673
  Rescale
    theory II-672
  Subset
    theory II-671
IMAGINE StereoSAR DEM process flow II-668
Import
  direct I-56
  generic I-63
Incidence angles II-637
Inclination II-638
Inclination angles II-637
Independent Component Analysis II-504
Index I-185, I-192, II-501
  application II-502
  vegetation I-11
Inertial navigation system I-127
INFO I-46, I-140, I-141, I-142, I-143, I-181
  path I-46
  see also ArcInfo
Information (vs. data) I-174
INS II-616
Intensity II-498
Interferometric model II-682
Interior orientation II-607
  SPOT II-635
International Dateline I-232
Interpretative photogrammetry II-596
Interrupted Goode Homolosine I-348
Interrupted Mollweide I-350
Interval
  classes I-179, I-277
  data I-3
IRS-1C I-73
IRS-1C (EOSAT Fast Format C) I-58
IRS-1C (EUROMAP Fast Format C) I-58
IRS-1C satellite I-73
IRS-1C/1D I-58
IRS-1D satellite I-74
J
Jeffries-Matusita distance II-570
JERS-1 satellite I-101
JFIF (JPEG) I-58, I-133
JPEG 2000 I-58
JPEG File Interchange Format I-133

K
Kappa II-612
Kappa coefficient II-594
Knowledge Classifier II-588
Knowledge Engineer II-586
KOMPSAT sensors I-75
Kurtosis II-538

L
.LAN file I-26, I-131
Laborde Oblique Mercator I-432
LAC (Local Area Coverage) I-83
Lambert Azimuthal Equal Area I-353
Lambert Conformal I-398
Lambert Conformal Conic I-356, I-359, I-429, I-434
Lambertian reflectance model II-653, II-654
LAN (Erdas 7.x) I-58
Landsat I-10, I-17, I-27, I-49, I-58, I-67, I-395, II-455, II-462, II-482
  description I-67, I-76
  history I-76
  MSS I-16, I-76, I-77, II-460, II-462, II-503
  ordering I-127
  TM I-9, I-10, I-15, I-21, I-77, I-254, I-257, II-480, II-500, II-503, II-528, II-543
    displaying I-154
Landsat 7 I-80
  data types I-80
Laplacian operator II-534, II-536
Latitude/Longitude I-121, I-232, I-252, I-342
  rectifying I-276
Layer I-2, I-177, I-195
Least squares adjustment II-619
Least squares condition II-620
Least squares correlation II-630
Least squares regression I-259, I-264
Lee filter II-526, II-529
Lee-Sigma filter II-526
Legend I-220
Lens distortion II-610
Level 1B data I-260
Level slice II-472
Line I-42, I-47, I-64, I-141
Line detection II-532
Line dropout I-18, II-461
Linear regression II-462
Linear transformation I-259, I-264
Lines of constant range II-540
Local region filter II-526, II-528
Long wave infrared region I-5
Lookup table I-148, II-464
  display II-468
Low parallax (LP) II-675
Lowtran I-9, II-462
Loximuthal I-361
LP II-675
LPGS processing system I-81

M
.mdl file I-203
Magnification I-146, I-169
Magnitude of parallax II-678
Mahalanobis distance II-589
Map I-211
  accuracy I-240, I-248
  aspect I-212
  base I-212
  bathymetric I-212
  book I-289
  cadastral I-212
  choropleth I-212
  colors in I-214
  composite I-212
  composition I-246
  contour I-212
  credit I-223
  derivative I-212
  hardcopy I-249
  index I-212
  inset I-212
  isarithmic I-212
  isopleth I-212
  label I-223
  land cover II-545
  lettering I-225
  morphometric I-212
  outline I-212
  output to TIFF I-293
  paneled I-289
  planimetric I-89, I-212
  printing I-289
    continuous tone I-294
    with black ink I-296
  qualitative I-213
  quantitative I-213
  relief I-212
  scale I-240, I-291
  scaled I-289, I-294
  shaded relief I-212
  slope I-213
  thematic I-213
  title I-223
  topographic I-89, I-213
  typography I-223
  viewshed I-213
Map Composer I-211, I-291
Map coordinate I-4, I-254, I-255
  conversion I-286
Map projection I-226, I-251, I-254, I-297
  azimuthal I-227, I-229, I-240
  compromise I-227
  conformal I-241
  conical I-227, I-230, I-240
  cylindrical I-227, I-231, I-240
  equal area I-241
  external I-235, I-427
  gnomonic I-230
  modified I-231
  orthographic I-230
  planar I-227, I-229
  pseudo I-231
  see also specific projection
  selecting I-240
  stereographic I-230
  types I-229
  units I-236
  USGS I-232, I-298
MapBase
  see ETAK
Mapping I-211
Mask I-188
Matrix II-712
  analysis I-185, I-194
  contingency II-566, II-568
  covariance II-495, II-565, II-570, II-584, II-705
  error II-594
  transformation I-255, I-257, I-274, II-711
Matrix algebra
  and transformation matrix II-713
  multiplication II-713
  notation II-712
  transposition II-714
Maximum likelihood
  classification decision rule II-569
Mean I-155, II-468, II-567, II-700, II-702, II-704
  of ISODATA clusters II-558
  vector II-570, II-707
Mean Euclidean distance II-537
Mean filter II-526, II-527
Measurement I-50
Measurement vector II-706, II-708, II-712
Median filter II-526, II-527, II-528
Megabyte I-19
Mercator I-231, I-363, I-366, I-376, I-412, I-419, I-430
Meridian I-232
Metric photogrammetry II-596
MGRS I-368
Microscopic imagery I-55
Microsoft Windows NT I-145, I-152
MIF/MID (MapInfo) to Coverage I-65
Military Grid Reference System I-368
Miller Cylindrical I-366
Minimum distance
  classification decision rule II-561, II-569
Minimum Error Conformal I-433
Minnaert constant II-655
Model I-195, I-196
Model Maker I-184, I-195, II-456
  criteria function I-201
  functions I-198
  object I-199
    data type I-200
    matrix I-200
    raster I-199
    scalar I-200
    table I-200
  working window I-201
Modeling I-33, I-194
  and image processing I-195
  and terrain analysis II-645
  using conditional statements I-201
Modified Polyconic I-434
Modified Stereographic I-435
Modified Transverse Mercator I-370
MODIS I-58
Modtran I-9, II-462
Mollweide I-372
Mollweide Equal Area I-436
Mosaic I-31, I-32, I-253
MrSID I-58, I-134
MSS Landsat I-58
Multiplicative algorithm II-481
Multispectral imagery I-67, I-88
Multitemporal imagery I-31

N
Nadir I-67, II-637
NASA I-76, I-107
NASA ER-2 I-108
NASDA CEOS I-59
National Center for Earth Resource Observation & Science I-129
Natural-color I-153
NAVSTAR I-124
Nearest neighbor
  see resample
Neatline I-221
Negative inclination II-637
Neighborhood analysis I-185, I-187
  boundary I-189
  density I-189
  diversity I-189
  majority I-189
  maximum I-189
  mean I-189
  median I-189
  minimum I-189
  minority I-189
  rank I-189
  standard deviation I-189
  sum I-189
New Zealand Map Grid I-374
NITFS I-62
NLAPS I-59
NLAPS processing system I-81
NOAA I-82
Node I-42
  dangling I-210
  from-node I-42
  pseudo I-210
  to-node I-42
Noise reduction II-690
Noise removal II-523
Nominal
  classes I-178, I-277
  data I-3
Non-Lambertian reflectance model II-653, II-654
Nonlinear transformation I-262, I-265, I-267
Normal distribution II-492, II-581, II-582, II-584, II-701, II-704, II-709
Normalized correlation coefficient II-677
Normalized Difference Vegetation Index (NDVI) I-11, II-503, II-504

O
.OV1 (overview image) I-120
.OVR (overview image) I-120
.ovr file I-216
Oblated Equal Area I-375
Oblique Mercator I-376, I-398, I-432, I-438
Oblique photographs II-596
Observation equations II-618
Oceanography I-97
Off-nadir I-88, II-637
Offset I-260
Oil exploration I-97
Omega II-612
Opacity I-169, I-182
Optical disk I-22
Orbit correction II-670
OrbView3 satellite I-84
Order
  of polynomial II-710
  of transformation II-711
Ordinal
  classes I-179, I-277
  data I-3
Orientation II-612
Orientation angle II-639
Orthocorrection I-124, I-252, II-463
Orthographic I-379
Orthorectification I-252, II-641, II-664
Output file I-25, I-202, I-274, I-275
  .img II-467
  classification II-547
Output formation II-665
Overlay I-185, I-191
Overlay file I-216
Overview images I-120
P
Panchromatic imagery I-67, I-88
Parallax II-676
Parallel I-232
Parallelepiped
  alarm II-566
Parameter II-704
Parametric II-582, II-584, II-704
Passive sensor I-6
Pattern recognition II-545, II-566
PCX I-59
Periodic noise removal II-523
Phase II-690, II-692, II-695
Phase function II-694
Phi II-612
Photogrammetric configuration II-617
Photogrammetric scanners I-109, II-601
Photogrammetry II-595
Photograph I-55, I-66, II-482
  aerial I-107
  ordering I-129
Pixel I-1, I-146
  depth I-145
  display I-146
  file vs. display I-146
Pixel coordinate system II-603, II-608
Plane table photogrammetry II-595
Plate Carrée I-336, I-382, I-430, I-442
Point I-42, I-47, I-64, I-141
  label I-42
Point ID I-256
Polar Stereographic I-383, I-434
Pollution monitoring I-97
Polyconic I-370, I-386, I-434
Polygon I-42, I-47, I-64, I-141, I-188, I-206, II-551
Polynomial I-263, II-710
Positive inclination II-637
Posting II-677
PostScript I-291
Preference Editor I-162
Prewitt kernel II-534
Primary color I-147, I-295
  RGB vs. CMY I-296
Principal component band II-493
Principal components I-32, II-479, II-481, II-492, II-496, II-549, II-706, II-709
  computing II-495
Principal point II-608
Printer
  PostScript I-291
  Tektronix Inkjet I-294
  Tektronix Phaser I-294
  Tektronix Phaser II SD I-295
Probability II-572, II-583
Processing a strip of images II-625
Processing one image II-624
Profile I-121
Projection
  external
    Bipolar Oblique Conic Conformal I-429
    Cassini-Soldner I-430
    Laborde Oblique Mercator I-432
    Modified Polyconic I-434
    Modified Stereographic I-435
    Mollweide Equal Area I-436
    Rectified Skew Orthomorphic I-438
    Robinson Pseudocylindrical I-439
    Southern Orientated Gauss Conformal I-440
    Winkel’s Tripel I-442
  perspective I-408
  USGS
    Alaska Conformal I-301
    Albers Conical Equal Area I-303
    Azimuthal Equidistant I-306
    Bonne I-311
    Cassini I-313
    Conic Equidistant I-333
    Eckert IV I-325
    Eckert VI I-329
    Equirectangular (Plate Carrée) I-336
    Gall Stereographic I-338
    General Vertical Nearside Perspective I-340
    Geographic (Lat/Lon) I-342
    Gnomonic I-344
    Hammer I-346
    Lambert Azimuthal Equal Area I-353
    Lambert Conformal Conic I-356, I-359
    Mercator I-363
    Miller Cylindrical I-366
    Modified Transverse Mercator I-370
    Mollweide I-372
    Oblique Mercator (Hotine) I-376
    Orthographic I-379
    Polar Stereographic I-383
    Polyconic I-386
    Sinusoidal I-393
    Space Oblique Mercator I-395
    State Plane I-398
    Stereographic I-317, I-408
    Transverse Mercator I-412
    Two Point Equidistant I-414
    UTM I-416
    Van der Grinten I I-419
Projection Chooser I-232
Proximity analysis I-185
Pseudo color I-79
  display I-178
Pseudo color image I-47
Pushbroom scanner I-88
Pyramid layer I-29, I-162
Pyramid layers II-631
Pyramid technique II-674
Pythagorean Theorem II-710

Q
Quartic Authalic I-388
Quick tests II-678
QuickBird satellite I-85

R
Radar I-4, I-18, II-482, II-534, II-536, II-537
Radar imagery I-55
RADARSAT I-59, I-95, I-102
  ordering I-130
RADARSAT Beam mode II-678
RADARSAT-2 I-59
RADARSAT-2 satellite I-103
Radial lens distortion II-610
Radiative transfer equation I-9
RAM I-161
Range line II-540
Range sphere II-663
RapidEye sensor I-86
Raster
  data I-3
Raster Attribute Editor I-159, I-182
Raster editing I-33
Raster image I-49
Raster Product Format I-59, I-119
Raster region I-186
Ratio
  classes I-179, I-277
  data I-3
Rayleigh scattering I-8
Real time differential GPS I-125
Recode I-33, I-185, I-190, I-195
Record I-23, I-181
  logical I-23
  physical I-23
Rectification I-31, I-251
  process I-255
Rectified Skew Orthomorphic I-438
Reduction I-169
Reference coordinate I-255
Reference pixel II-592
Reference plane II-606
Refined ephemeris II-671
Reflect I-260, I-261
Reflection spectra I-6
  see absorption spectra
Registration I-251
  vs. rectification I-255
Relation based matching II-631
Remote sensing I-251
Report
  generate I-182
Resample I-251, I-254, I-274
  Bicubic Spline I-275
  Bilinear Interpolation I-162, I-275, I-277
  Cubic Convolution I-162, I-275, I-281
  for display I-161
  Nearest Neighbor I-162, I-275, I-276, I-281
Residuals I-271
Resolution I-14, II-602
  display I-145
  merge II-480
  radiometric I-14, I-16, I-17, I-76
  spatial I-15, I-17, I-290, II-473
  spectral I-15, I-17
  temporal I-15, I-17
Resolution merge
  Brovey Transform II-481
  Multiplicative II-481
  Principal Components Merge II-481
Retransformed coordinate I-277
RGB monitor I-147
RGB to IHS II-542
Rhumb line I-228
Right hand rule II-605
RMS error I-257, I-259, I-271, I-274, II-609
  tolerance I-273
  total I-272
RMSE II-601
Roam I-169
Robinson I-390
Robinson Pseudocylindrical I-439
Root mean square error II-609
Root Mean Square Error (RMSE) II-601
Rotate I-260
Rotation matrix II-612
RPF I-119
RPF Frame I-120
RPF Overview I-120
RPF Product I-120
RSO I-392
Rubber sheeting I-262

S
.sig file II-704
.stk (GRID Stack file) I-132
Sanson-Flamsteed I-393
SAR I-93
SAR image intersection II-669
SAR imaging model II-662
Satellite
  imagery I-4, I-66, I-92
  optical I-66
  radar I-92
  system I-66
Satellite photogrammetry II-633
Satellite scene II-635
Saturation II-498
Scale I-15, I-216, I-260, I-289
  display I-290
  equivalents I-217
  large I-15
  map I-291
  paper I-291
    determining I-292
  pixels per inch I-218
  representative fraction I-216
  small I-15
  verbal statement I-216
Scale bar I-217
Scaled map I-294
Scan line II-634
Scanner I-66
Scanners I-109
Scanning I-109, I-254
Scanning resolutions II-602
Scanning window I-187
Scattering I-7
  Rayleigh I-8
Scatterplot II-492, II-562, II-709
  feature space II-554
SCBA II-641
Screendump command I-135
Script model I-183, II-456
  data type I-206
  library I-203
  statement I-205
Script modeling I-185
SDE I-59
SDTS I-51, I-135
SDTS (raster) I-59
SDTS (vector) I-65
SDTS profiles I-135
SDTS Raster Profile and Extensions I-135
Search area II-673, II-676
Seasat-1 satellite I-93
SeaWiFS L1B and L2A I-59
SeaWiFS sensor I-87
Secant I-230
Seed properties II-552
Self-calibrating bundle adjustment (SCBA) II-641
Sensor I-66
  active I-6, I-94
  passive I-6, I-94
Separability
  listing II-571
  signature II-569
Shaded relief II-645, II-652
  calculating II-653
Shadow
  enhancing II-466
Ship monitoring I-97
Short wave infrared region I-5
SI II-643
Sigma filter II-529
Sigma notation II-697
Signal based matching II-628
Signature II-546, II-547, II-550, II-554, II-564
  alarm II-566
  append II-573
  contingency matrix II-568
  delete II-573
  divergence II-566, II-569
  ellipse II-566, II-567
  evaluating II-565
  file II-704
  histogram II-566, II-573
  manipulating II-554, II-572
  merge II-573
  non-parametric II-546, II-555, II-565
  parametric II-546, II-565
  separability II-569, II-571
  statistics II-566, II-573
  transformed divergence II-570
Simple Conic I-333
Simple Cylindrical I-336
Single frame orthorectification II-598
Sinusoidal I-231, I-393
SIR-A
  ordering I-130
SIR-A sensor I-104
SIR-A, SIR-B I-95
SIR-B
  ordering I-130
SIR-B sensor I-105
SIR-C
  ordering I-130
SIR-C sensor I-105
SIR-C/X-SAR I-105
Skew I-260
Skewness II-538
Slant range II-662
SLAR I-93, I-95
Slope II-645, II-647
  calculating II-647
Softcopy photogrammetry II-595
Source coordinate I-255
Southern Orientated Gauss Conformal I-440
Space forward intersection II-615
Space Oblique Mercator I-231, I-376, I-395
Space Oblique Mercator (Formats A & B) I-397
Space resection II-598, II-615
Sparse mapping grid II-665
Spatial frequency II-474
Spatial Modeler I-18, I-64, I-179, I-195, II-456, II-655
Spatial Modeler Language I-183, I-195, I-203, II-456
Speckle noise I-96, II-526
  removing II-526
Speckle suppression I-18
  local region filter II-528
  mean filter II-527
  median filter II-527, II-528
  Sigma filter II-529
Spectral dimensionality II-706
Spectral distance II-569, II-580, II-589, II-591, II-710
  in ISODATA clustering II-558
Spectral space II-492, II-495, II-708
Spectroscopy I-6
Spheroid
  Earth I-241
  non-Earth I-246
SPOT I-10, I-15, I-17, I-18, I-27, I-31, I-49, I-59, I-67, I-117, I-143, I-257, II-455, II-462, II-482
  ordering I-127
  panchromatic I-15, II-480, II-500
  XS I-88, II-503
    displaying I-154
SPOT (GeoSpot) I-60
SPOT CCRS I-59
SPOT exterior orientation II-636
SPOT interior orientation II-635
SPOT satellites I-87
SPOT SICORP MetroView I-60
SPOT4 satellite I-89
SPOT5 satellite I-90
SRPE I-135
Standard Beam Mode II-675
Standard deviation I-156, II-468, II-567, II-702
  sample II-703
Standard meridian I-228, I-231
Standard parallel I-228, I-230
State Plane I-236, I-242, I-357, I-398, I-412
Statistics I-28, II-698
  signature II-566
Step size II-677
Stereographic I-383, I-408
Stereographic (Extended) I-411
Stereoscopic imagery I-89
Striping I-18, II-494, II-528
Subset I-31
Summation II-697
Sun angle II-652
SUN Raster I-60
Sun Raster I-131, I-135
Surface generation
  weighting function I-35
Swath width I-66
Swiss Cylindrical I-441
Symbol I-222
  abstract I-222
  function I-222
  plan I-222
  profile I-222
  replicative I-222
Symbolization I-41
Symbology I-47
Symmetric lens distortion II-610

T
Tangent I-230
Tangential lens distortion II-610
Tape I-18
Tasseled Cap transformation II-497, II-504
Tektronix
  Inkjet Printer I-294
  Phaser II SD I-295
  Phaser Printer I-294
Template II-673
Template size II-675
Terramodel I-65
TerraSAR-X I-60
TerraSAR-X satellite I-106
Terrestrial photography II-596, II-606
Texture analysis II-537
Thematic
  data I-3
Thematic data
  see data
Theme I-177
Threshold II-590, II-677
Thresholding II-589
Thresholding (classification) II-581
Tick mark I-221
Tie point distribution II-626
Tie points II-599, II-626
TIFF I-60, I-109, I-131, I-136, I-291, I-293
TIGER I-51, I-55, I-65, I-142
  disk space requirement I-144
Time II-662
TM Landsat Acres Fast Format I-60
TM Landsat Acres Standard Format I-60
TM Landsat EOSAT Fast Format I-60
TM Landsat EOSAT Standard Format I-60
TM Landsat ESA Fast Format I-60
TM Landsat ESA Standard Format I-60
TM Landsat IRS Fast Format I-60
TM Landsat IRS Standard Format I-60
TM Landsat Radarsat Fast Format I-60
TM Landsat Radarsat Standard Format I-61
TM Landsat-7 Fast-L7A EROS I-60
Topocentric coordinate system II-606
Topographic effect II-653
Topological Vector Profile I-135
Topology I-43, I-207
  constructing I-208
Total field of view I-66
Total RMS error I-272
Training II-545
  supervised II-545, II-549
  supervised vs. unsupervised II-549
  unsupervised II-546, II-549, II-557
Training field II-551
Training sample I-253, II-551, II-554, II-593, II-709
  defining II-552
  evaluating II-554
Training site
  see training field
Transformation
  1st-order I-259
  linear I-259, I-264
  matrix I-259, I-262
  nonlinear I-262, I-265, I-267
  order I-258
Transformation matrix I-255, I-257, I-259, I-262, I-274, II-711
Transposition II-583, II-714
Transposition function II-569, II-582, II-583
Transverse Mercator I-370, I-398, I-412, I-416, I-430, I-440
True color I-79
True direction I-228
Two Point Equidistant I-414
Type style
  on maps I-224

U
Ungeoreferenced image I-42
Universal Polar Stereographic I-383
Universal Transverse Mercator I-370, I-412, I-416
Unwrapped phase II-695
Unwrapping II-692
USGS DEM I-57
USRP I-61
UTM
  see Universal Transverse Mercator
V
V residual matrix II-621
Van der Grinten I I-419, I-436
Variable I-178
  in models I-206
Variable rate technology I-126
Variance II-538, II-581, II-702, II-703, II-705
  sample II-703
Vector II-712
Vector data
  see data
Vector layer I-43
Vector Quantization I-120
Vegetation index I-11, II-503
Velocity vector II-639
Vertex I-42
Viewer I-160, I-185, II-456
  dithering I-166
Volume I-24
  set I-24
VPF I-51, I-65
VQ I-120

W
Wagner IV I-421
Wagner VII I-423
wavelet compression technology I-38
Wavelet Resolution Merge II-483
Weight factor I-193
  classification
    separability II-571
Weighting function (surfacing) I-35
Winkel I I-425
Winkel II I-425
Winkel’s Tripel I-442
Workspace I-44
WorldView-1 satellite I-90

X
X matrix II-620
X residual I-271
X RMS error I-272
X Window I-145
XSCAN I-109

Y
Y residual I-271
Y RMS error I-272

Z
Zero-sum filter II-478, II-535
Zone I-111, I-119
Zone distribution rectangle (ZDR) I-111
Zoom I-169