Mathematical Foundation of Photogrammetry
(part of EE5358)
Dr. Venkat Devarajan
Ms. Kriti Chauhan
04/20/23 Virtual Environment Lab, UTA 2
Photogrammetry
Formal Definition:
Photogrammetry is the art, science and technology of obtaining reliable information about physical objects and the environment, through processes of recording, measuring, and interpreting photographic images and patterns of recorded radiant electromagnetic energy and other phenomena.
- As given by the American Society for Photogrammetry and Remote Sensing (ASPRS)
Photogrammetry is the science or art of obtaining reliable measurements by means of photographs.
photo = "picture", grammetry = "measurement"; therefore photogrammetry = "photo-measurement"
Chapter 1
Distinct Areas
Metric Photogrammetry:
• making precise measurements from photos to determine the relative locations of points
• finding distances, angles, areas, volumes, elevations, and sizes and shapes of objects
• Most common applications:
1. preparation of planimetric and topographic maps
2. production of digital orthophotos
3. military intelligence such as targeting

Interpretative Photogrammetry:
Deals in recognizing and identifying objects and judging their significance through careful and systematic analysis. Includes:
• Photographic Interpretation
• Remote Sensing (includes use of multispectral cameras, infrared cameras, thermal scanners, etc.)
Chapter 1
Uses of Photogrammetry
Products of photogrammetry:
1. Topographic maps: detailed and accurate graphic representation of cultural and natural features on the ground.
2. Orthophotos: Aerial photograph modified so that its scale is uniform throughout.
3. Digital Elevation Maps (DEMs): an array of points in an area that have X, Y and Z coordinates determined.
Current Applications:
1. Land surveying
2. Highway engineering
3. Preparation of tax maps, soil maps, forest maps, geologic maps, maps for city and regional planning and zoning
4. Traffic management and traffic accident investigations
5. Military – digital mosaic, mission planning, rehearsal, targeting etc.
Chapter 1
Types of photographs:
• Terrestrial
• Aerial
  – Vertical
    • Truly vertical
    • Tilted (1° < angle < 3°)
  – Oblique
    • High oblique (includes horizon)
    • Low oblique (does not include horizon)
Chapter 1
Of all these types of photographs, vertical and low oblique aerial photographs are of most interest to us, as they are the ones most extensively used for mapping purposes…
Aerial Photography
Vertical aerial photographs are taken along parallel passes called flight strips.
The overlap of successive photographs along a flight strip is called end lap (usually 60%).
The area of common coverage is called the stereoscopic overlap area.
Chapter 1
Overlapping photos are called a stereopair.
Chapter 1
Aerial Photography
The position of the camera at each exposure is called the exposure station.
Altitude of the camera at exposure time is called the flying height.
The lateral overlapping of adjacent flight strips is called side lap (usually 30%).
Photographs of 2 or more sidelapping strips used to cover an area are referred to as a block of photos.
Now, let's examine the acquisition devices for these photographs…
Camera / Imaging Devices
The general term “imaging devices” is used to describe instruments used for primary photogrammetric data acquisition.
Types of imaging devices (based on how the image is formed):
1. Frame sensors/cameras: acquire entire image simultaneously
2. Strip cameras, linear array sensors or pushbroom scanners: sense only a linear projection (strip) of the field of view at a given time and require the device to sweep across the imaged area to get a 2D image
3. Flying spot scanners or mechanical scanners: detect only a small spot at a time, require movement in two directions (sweep and scan) to form 2D image.
Chapter 3
Aerial Mapping Camera
Chapter 3
Aerial mapping cameras are the traditional imaging devices used in traditional photogrammetry.
Let's examine terms and characteristics associated with a camera, parameters of a camera, and how to determine them…
Focal Plane of Aerial Camera
The focal plane of an aerial camera is the plane in which all incident light rays are brought to focus.
Focal plane is set as exactly as possible at a distance equal to the focal length behind the rear nodal point of the camera lens. In practice, the film emulsion rests on the focal plane.
Chapter 3
Rear nodal point: The emergent nodal point of a thick combination lens. (N’ in the figure)
Note: Principal point is a 2D point on the image plane. It is the intersection of optical axis and image plane.
Fiducials in Aerial Camera
Fiducials are 2D control points whose xy coordinates are precisely and accurately determined as a part of camera calibration.
Fiducial marks are situated in the middle of the sides of the focal plane opening or in its corners, or in both locations.
They provide coordinate reference for principal point and image points. Also allow for correction of film distortion (shrinkage and expansion) since each photograph contains images of these stable control points.
Lines joining opposite fiducials intersect at a point called the indicated principal point. Aerial cameras are carefully manufactured so that this occurs very close to the true principal point.
True principal point: Point in the focal plane where a line from the rear nodal point of the camera lens, perpendicular to the focal plane, intersects the focal plane.
Chapter 3
Elements of Interior Orientation
Chapter 3
Elements of interior orientation are the parameters needed to determine accurate spatial information from photographs. These are as follows:
1. Calibrated focal length (CFL), the focal length that produces an overall mean distribution of lens distortion.
2. Symmetric radial lens distortion, the symmetric component of distortion that occurs along radial lines from the principal point. Although usually negligible, it is theoretically always present.
3. Decentering lens distortion, distortion that remains after compensating for symmetric radial lens distortion. Components: asymmetric radial and tangential lens distortion.
4. Principal point location, specified by coordinates of a principal point given wrt x and y coordinates of the fiducial marks.
5. Fiducial mark coordinates: x and y coordinates which provide the 2D positional reference for the principal point as well as images on the photograph.
The elements of interior orientation are determined through camera calibration.
Other Camera Characteristics
Chapter 3
Other camera characteristics that are often of significance are:
1. Resolution for various distances from the principal point (highest near the center, lowest at corners of photo)
2. Focal plane flatness: deviation of platen from a true plane. Measured by a special gauge, generally not more than 0.01mm.
3. Shutter efficiency: ability of shutter to open instantaneously, remain open for the specified exposure duration, and close instantaneously.
Camera Calibration: General Approach
Chapter 3
Step 1) Photograph an array of targets whose relative positions are accurately known.
Step 2) Determine elements of interior orientation –
• make precise measurements of target images
• compare actual image locations to the positions they would have occupied had the camera produced a perfect perspective view.
This is the approach followed in most methods.
After determining interior camera parameters, we consider measurements of image points from images…
Photogrammetric Scanners
Chapter 4
Photogrammetric scanners are the devices used to convert the content of photographs from analog form (a continuous-tone image) to digital form (an array of pixels with their gray levels quantified by numerical values).
Coordinate measurement on the acquired digital image can be done either manually, or through automated image-processing algorithms.
Requirements: sufficient geometric and radiometric resolution, and high geometric accuracy.
Geometric/spatial resolution indicates the pixel size of the resultant image. The smaller the pixel size, the greater the detail that can be detected in the image. For high-quality photogrammetric scanners, the minimum pixel size is on the order of 5 to 15 µm.
Radiometric resolution indicates the number of quantization levels. The minimum should be 256 levels (8-bit); most scanners are capable of 1024 levels (10-bit) or higher.
Geometric quality indicates the positional accuracy of pixels in the resultant image. For high-quality scanners, it is around 2 to 3 µm.
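As a quick arithmetic check of these figures, the ground footprint of one scanned pixel follows directly from the pixel size and the photo scale. A small Python sketch (the 1:10,000 scale here is a hypothetical illustration, not a value from the slides):

```python
def ground_sample_distance(pixel_size_um, scale_denominator):
    """Ground distance (in metres) covered by one scanned pixel,
    given the pixel size in micrometres and a photo scale 1:scale_denominator."""
    return pixel_size_um * 1e-6 * scale_denominator

# A 15 um pixel on a hypothetical 1:10,000 photo covers 0.15 m on the ground.
print(ground_sample_distance(15, 10000))  # -> 0.15
```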
Sources of Error in Photo Coordinates
Chapter 4
The following are some of the sources of error that can distort the true photo coordinates:
1. Film distortions due to shrinkage, expansion and lack of flatness
2. Failure of fiducial axes to intersect at the principal point
3. Lens distortions
4. Atmospheric refraction distortions
5. Earth curvature distortion
6. Operator error in measurements
7. Error made by automated correlation techniques
Now that we have covered the basics of image acquisition and measurement, we turn to analytical photogrammetry…
Analytical Photogrammetry
Chapter 11
Definition: Analytical photogrammetry is the term used to describe the rigorous mathematical calculation of coordinates of points in object space based upon camera parameters, measured photo coordinates and ground control.
Features of Analytical photogrammetry:
→ rigorously accounts for any tilts
→ generally involves solution of large, complex systems of redundant equations by the method of least squares
→ forms the basis of many modern hardware and software systems including stereoplotters, digital terrain model generation, orthophoto production, digital photo rectification and aerotriangulation.
Image Measurement Considerations
Before using the x and y photo coordinate pair, the following conditions should be considered:
1. Coordinates (usually in mm) are relative to the principal point - the origin.
2. Analytical photogrammetry is based on assumptions such as "light rays travel in straight lines" and "the focal plane of a frame camera is flat". Thus, coordinate refinements may be required to compensate for the sources of error that violate these assumptions.
3. Measurements must be ensured to have high accuracy.
4. While making measurements of image coordinates of common points that appear in more than one photograph, each object point must be precisely identified between photos so that the measurements are consistent.
5. Object space coordinates are based on a 3D Cartesian system.
Chapter 11
Now, we come to the most fundamental and useful relationship in analytical photogrammetry, the collinearity condition…
Collinearity Condition
Appendix D
The collinearity condition is illustrated in the figure below. The exposure station of a photograph, an object point and its photo image all lie along a straight line. Based on this condition we can develop complex mathematical relationships.
Let:
Coordinates of exposure station be XL, YL, ZL wrt object (ground) coordinate system XYZ
Coordinates of object point A be XA, YA, ZA wrt ground coordinate system XYZ
Coordinates of image point a of object point A be xa, ya, za wrt the xy photo coordinate system (of which the principal point o is the origin; compensation for the principal-point offset is applied later)
Coordinates of image point a be xa’, ya’, za’ in a rotated image plane x’y’z’ which is parallel to the object coordinate system
Transformation of (xa’, ya’, za’) to (xa, ya, za) is accomplished using rotation equations, which we derive next.
Collinearity Condition Equations
Appendix D
Rotation Equations
Appendix C
Omega rotation about x’ axis:
New coordinates (x1,y1,z1) of a point (x’,y’,z’) after rotation of the original coordinate reference frame about the x axis by angle ω are given by:
x1 = x’
y1 = y’ cos ω + z’ sin ω
z1 = -y’sin ω + z’ cos ω
Similarly, we obtain equations for phi rotation about the y axis:
x2 = -z1 sin Ф + x1 cos Ф
y2 = y1
z2 = z1 cos Ф + x1 sin Ф
And equations for kappa rotation about the z axis:
x = x2 cos қ + y2 sin қ
y = -x2 sin қ + y2 cos қ
z = z2
Final Rotation Equations
Appendix C
We substitute the equations at each stage to get the following:
x = m11 x' + m12 y' + m13 z'
y = m21 x' + m22 y' + m23 z'
z = m31 x' + m32 y' + m33 z'
In matrix form: X = M X'
where X = [x, y, z]^T, X' = [x', y', z']^T and
    | m11 m12 m13 |
M = | m21 m22 m23 |
    | m31 m32 m33 |
where the m's are functions of the rotation angles ω, Ф and қ.
Properties of rotation matrix M:
1. Sum of squares of the 3 direction cosines (elements of M) in any row or column is unity.
2. M is orthogonal, i.e. M^-1 = M^T
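Both properties can be checked numerically. A small Python sketch of the three-stage rotation using NumPy (the test angles below are arbitrary):

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Sequential rotation matrix M = M_kappa @ M_phi @ M_omega,
    built from the three stage rotations, so that X = M X'.
    Angles are in radians."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    M_omega = np.array([[1.0, 0.0, 0.0], [0.0, co, so], [0.0, -so, co]])
    M_phi = np.array([[cp, 0.0, -sp], [0.0, 1.0, 0.0], [sp, 0.0, cp]])
    M_kappa = np.array([[ck, sk, 0.0], [-sk, ck, 0.0], [0.0, 0.0, 1.0]])
    return M_kappa @ M_phi @ M_omega

M = rotation_matrix(0.1, -0.05, 0.7)
print(np.allclose(M @ M.T, np.eye(3)))        # orthogonality: M^-1 = M^T
print(np.allclose((M * M).sum(axis=1), 1.0))  # row sums of squares are unity
```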
Coming back to the collinearity condition…
Collinearity Equations
Appendix D
Using property of similar triangles:

xa' / (XA - XL) = ya' / (YA - YL) = za' / (ZA - ZL)

so that

xa' = [za' / (ZA - ZL)] (XA - XL);  ya' = [za' / (ZA - ZL)] (YA - YL)

Substitute this into rotation formula (xa = m11 xa' + m12 ya' + m13 za', etc.):

xa = [za' / (ZA - ZL)] [m11 (XA - XL) + m12 (YA - YL) + m13 (ZA - ZL)]
ya = [za' / (ZA - ZL)] [m21 (XA - XL) + m22 (YA - YL) + m23 (ZA - ZL)]
za = [za' / (ZA - ZL)] [m31 (XA - XL) + m32 (YA - YL) + m33 (ZA - ZL)]

Now,
factor out za' / (ZA - ZL), divide xa, ya by za,
add corrections for offset of principal point (xo, yo),
and equate za = -f, to get the collinearity equations:

xa = xo - f [m11 (XA - XL) + m12 (YA - YL) + m13 (ZA - ZL)] / [m31 (XA - XL) + m32 (YA - YL) + m33 (ZA - ZL)]

ya = yo - f [m21 (XA - XL) + m22 (YA - YL) + m23 (ZA - ZL)] / [m31 (XA - XL) + m32 (YA - YL) + m33 (ZA - ZL)]
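The collinearity equations translate directly into code. A minimal Python sketch of the projection (the function name and the principal-point defaults are ours, not from the slides):

```python
import numpy as np

def collinearity_project(A, L, M, f, x0=0.0, y0=0.0):
    """Photo coordinates (xa, ya) of ground point A = (XA, YA, ZA) seen from
    exposure station L = (XL, YL, ZL), given the 3x3 rotation matrix M,
    focal length f, and principal-point offsets (x0, y0) -- a direct
    transcription of the collinearity equations."""
    d = np.asarray(A, float) - np.asarray(L, float)   # (XA-XL, YA-YL, ZA-ZL)
    den = M[2] @ d                                    # m31(..)+m32(..)+m33(..)
    return (x0 - f * (M[0] @ d) / den,
            y0 - f * (M[1] @ d) / den)

# Truly vertical photo (M = identity): xa = f*(XA-XL)/(ZL-ZA), the familiar
# scale relation. Numbers here are hypothetical.
print(collinearity_project([100.0, 200.0, 0.0], [0.0, 0.0, 1520.0],
                           np.eye(3), 0.152))  # -> (0.01, 0.02)
```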
Review of Collinearity Equations
Collinearity equations:

xa = xo - f [m11 (XA - XL) + m12 (YA - YL) + m13 (ZA - ZL)] / [m31 (XA - XL) + m32 (YA - YL) + m33 (ZA - ZL)]

ya = yo - f [m21 (XA - XL) + m22 (YA - YL) + m23 (ZA - ZL)] / [m31 (XA - XL) + m32 (YA - YL) + m33 (ZA - ZL)]
Where,
xa, ya are the photo coordinates of image point a
XA, YA, ZA are object space coordinates of object/ground point A
XL, YL, ZL are object space coordinates of exposure station location
f is the camera focal length
xo, yo are the offsets of the principal point coordinates
m’s are functions of rotation angles omega, phi, kappa (as derived earlier)
Collinearity equations:
• are nonlinear and
• involve 9 unknowns:
1. omega, phi, kappa inherent in the m’s
2. Object coordinates (XA, YA, ZA )
3. Exposure station coordinates (XL, YL, ZL )
Ch. 11 & App D
Now that we know about the collinearity condition, let's see where we need to apply it.
First, we need to know what it is that we need to find…
As already mentioned, the collinearity conditions involve 9 unknowns:
1) Exposure station attitude (omega, phi, kappa),
2) Exposure station coordinates (XL, YL, ZL), and
3) Object point coordinates (XA, YA, ZA).
Of these, we first need to compute the position and attitude of the exposure station, also known as the elements of exterior orientation.
Thus the 6 elements of exterior orientation are:
1) spatial position (XL, YL, ZL) of the camera, and
2) angular orientation (omega, phi, kappa) of the camera
All methods to determine the elements of exterior orientation of a single tilted photograph require:
1) photographic images of at least three control points whose X, Y and Z ground coordinates are known, and
2) calibrated focal length of the camera.
Elements of Exterior Orientation
Chapter 10
Elements of Interior Orientation
Chapter 3
Elements of interior orientation which can be determined through camera calibration are as follows:
1. Calibrated focal length (CFL), the focal length that produces an overall mean distribution of lens distortion. Better termed calibrated principal distance since it represents the distance from the rear nodal point of the lens to the principal point of the photograph, which is set as close to optical focal length of the lens as possible.
2. Principal point location, specified by coordinates of a principal point given wrt x and y coordinates of the fiducial marks.
3. Fiducial mark coordinates: x and y coordinates of the fiducial marks which provide the 2D positional reference for the principal point as well as images on the photograph.
4. Symmetric radial lens distortion, the symmetric component of distortion that occurs along radial lines from the principal point. Although usually negligible, it is theoretically always present.
5. Decentering lens distortion, distortion that remains after compensating for symmetric radial lens distortion. Components: asymmetric radial and tangential lens distortion.
As an aside, from earlier discussion:
Next, we look at space resection which is used for determining the camera station coordinates from a single, vertical/low oblique aerial photograph…
Space Resection By Collinearity
Space resection by collinearity involves formulating the collinearity equations for a number of control points whose X, Y, and Z ground coordinates are known and whose images appear in the vertical/tilted photo.
The equations are then solved for the six unknown elements of exterior orientation that appear in them.
• 2 equations are formed for each control point
• 3 control points (min) give 6 equations: the solution is unique, while 4 or more control points (more than 6 equations) allow a least squares solution (residual terms will exist)
Initial approximations are required for the unknown orientation parameters, since the collinearity equations are nonlinear and are linearized using Taylor's theorem.
Chapter 10 & 11
No. of points | No. of equations | Unknown ext. orientation parameters
1 | 2 | 6
2 | 4 | 6
3 | 6 | 6
4 | 8 | 6
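The resection procedure (linearize, solve, iterate) can be illustrated with a small synthetic example. This Python sketch uses Gauss-Newton with a finite-difference Jacobian in place of the analytic Taylor-series linearization; the control points, true orientation, and initial guesses are all made-up illustration values:

```python
import numpy as np

def rot(omega, phi, kappa):
    """Sequential rotation matrix M (omega, phi, kappa) so that X = M X'."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Mo = np.array([[1.0, 0.0, 0.0], [0.0, co, so], [0.0, -so, co]])
    Mp = np.array([[cp, 0.0, -sp], [0.0, 1.0, 0.0], [sp, 0.0, cp]])
    Mk = np.array([[ck, sk, 0.0], [-sk, ck, 0.0], [0.0, 0.0, 1.0]])
    return Mk @ Mp @ Mo

def project(params, ground, f):
    """Collinearity projection of ground points for the six parameters
    (omega, phi, kappa, XL, YL, ZL); principal point taken at (0, 0)."""
    M = rot(*params[:3])
    d = np.asarray(ground, float) - np.asarray(params[3:], float)
    return np.column_stack([-f * (d @ M[0]) / (d @ M[2]),
                            -f * (d @ M[1]) / (d @ M[2])])

def space_resection(ground, photo_xy, f, init, iters=15):
    """Gauss-Newton least squares for the six exterior orientation
    parameters, using a finite-difference Jacobian."""
    p = np.array(init, float)
    meas = np.asarray(photo_xy, float).ravel()
    for _ in range(iters):
        r = meas - project(p, ground, f).ravel()   # residuals at p
        J = np.empty((r.size, 6))
        for j in range(6):                          # numerical Jacobian dr/dp
            dp = np.zeros(6); dp[j] = 1e-6
            J[:, j] = ((meas - project(p + dp, ground, f).ravel()) - r) / 1e-6
        p += np.linalg.lstsq(J, -r, rcond=None)[0]  # corrections; iterate
    return p
```

With four well-spread control points and near-vertical initial values (omega = phi = 0, rough kappa, XL, YL, H), the corrections shrink rapidly, mirroring the "form, solve, iterate until negligible" loop described in the text.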
Coplanarity Condition
A condition similar to the collinearity condition is coplanarity: the condition that the two exposure stations of a stereopair, any object point, and its corresponding image points on the two photos all lie in a common plane.
Like collinearity equations, the coplanarity equation is nonlinear and must be linearized by using Taylor’s theorem. Linearization of the coplanarity equation is somewhat more difficult than that of the collinearity equations.
But, coplanarity is not used nearly as extensively as collinearity in analytical photogrammetry.
Space resection by collinearity is the only method still commonly used to determine the elements of exterior orientation.
Initial Approximations for Space Resection
• We need initial approximations for all six exterior orientation parameters.
• Omega and phi angles: For the typical case of near-vertical photography, initial values of omega and phi can be taken as zeros.
• ZL (flying height H):
  • Altimeter reading for rough calculations
  • Compute ZL (height H above the datum plane) using a ground line of known length appearing on the photograph
  • To compute H, only 2 control points are required; the rest are redundant. The approximation can be improved by averaging several values of H.
Chapter 11 & 6
Calculating Flying Height (H)
Flying height H can be calculated using a ground line of known length that appears on the photograph.
The ground line should be on fairly level terrain, as a difference in elevation of its endpoints results in error in the computed flying height.
Accurate results can still be obtained, however, if the images of the end points are approximately equidistant from the principal point of the photograph and on a line through the principal point.
Chapter 6
H can be calculated using equations for scale of a photograph:
S = ab/AB = f/H
(scale of photograph over flat terrain)
Or
S = f/(H-h)
(scale of photograph at any point whose elevation above datum is h)
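Rearranging S = ab/AB = f/(H - h) gives H = f·AB/ab + h. A minimal Python sketch (the focal length and line lengths below are hypothetical illustration values):

```python
def flying_height(f, ab, AB, h=0.0):
    """Flying height above datum from a ground line of known length AB whose
    photo image has length ab (same units as f); h is the (roughly common)
    endpoint elevation. From S = ab/AB = f/(H - h)  =>  H = f*AB/ab + h."""
    return f * AB / ab + h

# Hypothetical numbers: f = 152.4 mm and a 1000 m ground line imaging at 100 mm
# give H = 1524 m above the (sea-level) terrain.
print(flying_height(0.1524, 0.100, 1000.0))  # -> 1524.0
```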
Photographic Scale
S = ab/AB = f/H
SAB = ab/AB = La/LA = Lo/LO = f/(H-h)
where
1) S is scale of a vertical photograph over flat terrain
2) SAB is scale of vertical photograph over variable terrain
3) ab is distance between images of points A and B on the photograph
4) AB is actual distance between points A and B
5) f is focal length
6) La is distance between exposure station L & image a of point A on the photo positive
7) LA is distance between exposure station L and point A
8) Lo = f is the distance from L to principal point on the photograph
9) LO = H-h is the distance from L to projection of o onto the horizontal plane containing point A with h being height of point A from the datum plane
Note: For vertical photographs taken over variable terrain, there are an infinite number of different scales.
Chapter 6
As an explanation of the equations from which H is calculated:
Initial Approx. for XL, YL and Kappa
Ground coordinates x' and y' of any point can be obtained by simply multiplying its x and y photo coordinates by the inverse of the photo scale at that point.
This requires knowing
• f, H and
• elevation of the object point Z or h.
A 2D conformal coordinate transformation (comprising scale, rotation and translation) can then be performed, which relates these ground coordinates computed from the vertical photo equations to the control values:
X = a·x' – b·y' + TX;  Y = a·y' + b·x' + TY
The control coordinates (X, Y) and the computed coordinates (x', y') are known for n points, giving us 2n equations.
The 4 unknown transformation parameters (a, b, TX, TY) can therefore be calculated by least squares. Essentially, we run the resection equations in a diluted mode with initial values of as many parameters as we can find, to calculate initial values of those that cannot be easily estimated.
TX and TY are used as initial approximation for XL and YL, resp.
Rotation angle θ = tan^-1(b/a) is used as the approximation for κ (kappa).
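The least squares fit for (a, b, TX, TY) is a small linear system, assembling two equations per control point. A Python sketch (function name is ours; the recovered θ then initializes kappa):

```python
import numpy as np

def conformal_2d_fit(xy_computed, XY_control):
    """Least-squares fit of the 2D conformal transformation
    X = a*x' - b*y' + TX,  Y = b*x' + a*y' + TY.
    Returns (a, b, TX, TY); each point contributes two equations."""
    rows, rhs = [], []
    for (x, y), (X, Y) in zip(xy_computed, XY_control):
        rows.append([x, -y, 1.0, 0.0]); rhs.append(X)   # X equation
        rows.append([y, x, 0.0, 1.0]); rhs.append(Y)    # Y equation
    a, b, TX, TY = np.linalg.lstsq(np.array(rows), np.array(rhs),
                                   rcond=None)[0]
    return a, b, TX, TY

# TX, TY serve as initial XL, YL; theta = atan2(b, a) approximates kappa.
```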
Chapter 11
Space Resection by Collinearity: Summary
Summary of Initializations:
• Omega, Phi -> zero, zero
• Kappa -> Theta
• XL, YL -> TX, TY
• ZL -> flying height H
Summary of steps:
1. Calculate H (ZL)
2. Compute ground coordinates from assumed vertical photo for the control points.
3. Compute 2D conformal coordinate transformation parameters by a least squares solution using the control points (whose coordinates are known in both the photo coordinate system and the ground control coordinate system)
4. Form linearized observation equations
5. Form and solve normal equations.
6. Add corrections and iterate till corrections become negligible.
(To determine the 6 elements of exterior orientation using collinearity condition)
Chapter 11
If space resection is used to determine the elements of exterior orientation for both photos of a stereopair, then object point coordinates for points that lie in the stereo overlap area can be calculated by the procedure known as space intersection…
Space Intersection By Collinearity
Chapter 11
For a ground point A:
Collinearity equations are written for image point a1 of the left photo (of the stereopair), and for image point a2 of the right photo, giving 4 equations.
The only unknowns are XA, YA and ZA.
Since equations have been linearized using Taylor’s theorem, initial approximations are required for each point whose object space coordinates are to be computed.
Initial approximations are determined using the parallax equations.
Use: To determine object point coordinates for points that lie in the stereo overlap area of two photographs that make up a stereopair.
Principle: Corresponding rays to the same object point from the two photos of a stereopair must intersect at the point.
Parallax Equations
1) pa = xa – x’a
2) hA = H – B.f/pa
3) XA = B.xa/pa
4) YA = B.ya/pa
where
hA is the elevation of point A above datum
H is the flying height above datum
B is the air base (distance between the exposure stations)
f is the focal length of the camera
pa is the parallax of point A
XA and YA are ground coordinates of point A in the coordinate system whose origin is the datum point P of the left photo, whose X axis is in the same vertical plane as the x and x' flight axes, and whose Y axis passes through the datum point of the left photo perpendicular to the X axis
xa and ya are the photo coordinates of point a measured wrt the flight line axes on the left photo
Chapter 8
Applying Parallax Equations to Space Intersection
For applying parallax equations, H and B have to be determined:
Since the X, Y, Z coordinates of both exposure stations are known,
H is taken as the average of ZL1 and ZL2, and
B = [ (XL2 - XL1)^2 + (YL2 - YL1)^2 ]^(1/2)
The resulting coordinates from the parallax equations are in the arbitrary ground coordinate system.
To convert them to, for instance, WGS84, a conformal coordinate transformation is used.
Chapter 11
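Putting the parallax equations and these H and B estimates together, the initial approximations for space intersection can be sketched in Python (symbols follow the parallax-equation notation above; the function name and the test numbers are ours):

```python
import math

def parallax_initial_coords(xa, ya, xa_right, f, station_left, station_right):
    """Initial approximations (XA, YA, hA) for space intersection from the
    parallax equations. Photo coordinates are measured wrt the flight-line
    axes; xa_right is x'a on the right photo."""
    XL1, YL1, ZL1 = station_left
    XL2, YL2, ZL2 = station_right
    H = (ZL1 + ZL2) / 2.0                    # average flying height
    B = math.hypot(XL2 - XL1, YL2 - YL1)     # air base
    pa = xa - xa_right                       # parallax of point a
    hA = H - B * f / pa                      # elevation above datum
    XA = B * xa / pa                         # ground X (arbitrary system)
    YA = B * ya / pa                         # ground Y (arbitrary system)
    return XA, YA, hA
```

These values then seed the linearized collinearity solution for XA, YA, ZA.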
Now that we know how to determine object space coordinates of a common point in a stereopair, we can examine the overall procedure for all the points in the stereopair...
Analytical Stereomodel
Chapter 11
Aerial photographs for most applications are taken so that adjacent photos overlap by more than 50%. Two adjacent photographs that overlap in this manner form a stereopair.
Object points that appear in the overlap area of a stereopair constitute a stereomodel.
The mathematical calculation of 3D ground coordinates of points in the stereomodel by analytical photogrammetric techniques forms an analytical stereomodel.
The process of forming an analytical stereomodel involves 3 primary steps:
1. Interior orientation (also called “photo coordinate refinement”): Mathematically recreates the geometry that existed in the camera when a particular photograph was exposed.
2. Relative (exterior) orientation: Determines the relative angular attitude and positional displacement between the photographs that existed when the photos were taken.
3. Absolute (exterior) orientation: Determines the absolute angular attitude and positions of both photographs.
After these three steps are achieved, points in the analytical stereomodel will have object coordinates in the ground coordinate system.
Chapter 11
Analytical Relative Orientation
Initialization:
If the parameters are set to the values mentioned (i.e., ω1=Ф1=қ1=XL1=YL1=0, ZL1=f, XL2=b),
Then the scale of the stereomodel is approximately equal to photo scale.
Now, x and y photo coordinates of the left photo are good approximations for X and Y object space coordinates, and
zeros are good approximations for Z object space coordinates.
Analytical relative orientation involves defining (assuming) certain elements of exterior orientation and calculating the remaining ones.
Analytical Relative Orientation
Chapter 11
1) All exterior orientation elements of the left photo of the stereopair, excluding ZL1, are set to zero values.
2) For convenience, ZL of the left photo (ZL1) is set to f and XL of the right photo (XL2) is set to the photo base b.
3) This leaves 5 elements of the right photo that must be determined.
4) Using the collinearity condition, a minimum of 5 object points is required to solve for the unknowns, since each point used in relative orientation is a net gain of one equation for the overall solution (their X, Y and Z coordinates are unknowns too).
No. of points in overlap | No. of equations | No. of unknowns
1 | 4 (2+2) | 5 + 3 = 8
2 | 4 + 4 = 8 | 8 + 3 = 11
3 | 8 + 4 = 12 | 11 + 3 = 14
4 | 12 + 4 = 16 | 14 + 3 = 17
5 | 16 + 4 = 20 | 17 + 3 = 20
6 | 20 + 4 = 24 | 20 + 3 = 23
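The equation/unknown counting in the table reduces to two linear formulas, sketched here in Python:

```python
def relative_orientation_counts(n_points):
    """Collinearity equations vs unknowns in analytical relative orientation:
    each stereo point gives 2 equations per photo (4 total); the unknowns are
    the 5 orientation elements of the right photo plus 3 object-space
    coordinates per point."""
    return 4 * n_points, 5 + 3 * n_points

# Each added point is a net gain of one equation, so 5 points is the minimum
# for a unique solution (20 equations, 20 unknowns).
print(relative_orientation_counts(5))  # -> (20, 20)
```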
Analytical Absolute Orientation
Chapter 16 & 11
Stereomodel coordinates of tie points are related to their 3D coordinates in a (real, earth-based) ground coordinate system. For a small stereomodel, such as that computed from one stereopair, analytical absolute orientation can be performed using a 3D conformal coordinate transformation.
This requires a minimum of two horizontal and three vertical control points (20 equations with 8 unknowns plus the 12 exposure station parameters for the two photos: a closed-form solution). Additional control points provide redundancy, enabling a least squares solution.
(Horizontal control: the position of the point in object space is known wrt a horizontal datum; vertical control: the elevation of the point is known wrt a vertical datum.)
Once the transformation parameters have been computed, they can be applied to the remaining stereomodel points, including the XL, YL and ZL coordinates of the left and right photographs. This gives the coordinates of all stereomodel points in the ground system.
Control | No. of equations | No. of additional unknowns | Total no. of unknowns
1 horizontal control point | 2 per photo => 4 total | 1 unknown Z value | 12 exterior orientation parameters + 1 = 13
1 vertical control point | 2 per photo => 4 total | 2 unknown X and Y values | 12 + 2 = 14
2 horizontal control points | 4 * 2 = 8 | 1 * 2 = 2 | 12 + 2 = 14
3 vertical control points | 4 * 3 = 12 | 2 * 3 = 6 | 12 + 6 = 18
2 horizontal + 3 vertical control points | 8 + 12 = 20 | 2 + 6 = 8 | 12 + 8 = 20
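Once the transformation parameters (a scale, three rotations, three translations) have been solved for, applying the 3D conformal transformation to the remaining stereomodel points is a one-liner. A Python sketch of the application step only (in practice the parameters come from the least squares fit to the control points; the function name is ours):

```python
import numpy as np

def absolute_orientation_apply(model_pts, s, M, T):
    """Apply an already-computed 3D conformal (7-parameter) transformation
    X = s * M * x + T, taking stereomodel coordinates x to ground
    coordinates X. M is a 3x3 rotation matrix, s a scale, T a translation."""
    return (s * np.asarray(model_pts, float) @ np.asarray(M, float).T
            + np.asarray(T, float))
```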
As already mentioned while covering camera calibration, camera calibration can also be included in a combined interior-relative-absolute orientation. This is known as analytical self-calibration…
Analytical Self Calibration
Chapter 11
Analytical self-calibration is a computational process wherein camera calibration parameters are included in the photogrammetric solution, generally in a combined interior-relative-absolute orientation.
The process uses collinearity equations that have been augmented with additional terms to account for adjustment of the calibrated focal length, principal-point offsets, and symmetric radial and decentering lens distortion. In addition, the equations might include corrections for atmospheric refraction.
With the inclusion of the extra unknowns, it follows that additional independent equations will be needed to obtain a solution.
So far we have assumed that a certain amount of ground control is available to us for use in space resection, etc. Let's take a look at the acquisition of these ground control points…
Ground Control for Aerial Photogrammetry
Chapter 16
Ground control consists of any points
• whose positions are known in an object-space coordinate system and
• whose images can be positively identified in the photographs.
Classification of photogrammetric control:
1. Horizontal control: the position of the point in object space is known wrt a horizontal datum
2. Vertical control: the elevation of the point is known wrt a vertical datum
Images of acceptable photo control points must satisfy two requirements:
1. They must be sharp, well defined and positively identified on all photos, and
2. They must lie in favorable locations in the photographs.
Photo Control Points for Aerotriangulation
Chapter 16
The number of ground-surveyed photo control points needed varies with
1. size, shape and nature of the area,
2. accuracy required, and
3. procedures, instruments, and personnel to be used.
In general, the denser the ground control, the better the accuracy of the supplemental control determined by aerotriangulation (the thesis of our targeting project!).
There is an optimum number, which affords maximum economic benefit and maintains a satisfactory standard of accuracy.
The methods used for establishing ground control are:
1. Traditional land surveying techniques
2. Using the Global Positioning System (GPS)
Ground Control by GPS
Chapter 16
While GPS is most often used to compute horizontal position, it is capable of determining vertical position (elevation) to nearly the same level of accuracy.
Static GPS can be used to determine coordinates of unknown points with errors at the centimeter level.
Note: The computed vertical position will be related to the ellipsoid, not the geoid or mean sea level. To relate the GPS-derived elevation (ellipsoid height) to the more conventional elevation (orthometric height), a geoid model is necessary.
However, if the ultimate reference frame is related to the ellipsoid, this should not pose a problem.
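The geoid-model correction mentioned above is a simple subtraction. A minimal sketch, assuming the usual sign convention H = h − N, where N is the geoid undulation taken from a geoid model (the function name is illustrative):

```python
def orthometric_height(h_ellipsoid, geoid_undulation):
    """Convert a GPS-derived ellipsoid height h to a conventional
    orthometric height H using a geoid undulation N: H = h - N."""
    return h_ellipsoid - geoid_undulation

# e.g. an ellipsoid height of 150.0 m where the geoid sits 25.0 m
# above the ellipsoid gives an orthometric height of 125.0 m
print(orthometric_height(150.0, 25.0))  # 125.0
```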
Having covered processing techniques for single points, we now examine the process at a higher level, for all the photographs…
Aerotriangulation
Chapter 17
• Aerotriangulation is the process of determining the X, Y, and Z ground coordinates of individual points based on photo coordinate measurements.
• It consists of photo measurement followed by numerical interior, relative, and absolute orientation, from which ground coordinates are computed.
• For large projects the number of control points needed is extensive, and the cost can be extremely high.
• Much of this needed control can be established by aerotriangulation from only a sparse network of field-surveyed ground control.
• Using GPS in the aircraft to provide coordinates of the camera can eliminate the need for ground control entirely, though in practice a small amount of ground control is still used to strengthen the solution.
Pass Points for Aerotriangulation
Chapter 17
• Pass points are typically selected as 9 points in a 3 rows × 3 columns pattern, equally spaced over the photo.
• The points may be images of natural, well-defined objects that appear in the required photo areas; if such points are not available, pass points may be artificially marked.
• Digital image matching can be used to select points in the overlap areas of digital images and automatically match them between adjacent images; this is an essential step of “automatic aerotriangulation”.
Analytical Aerotriangulation
Chapter 17
The most elementary approach consists of the following basic steps:
1. relative orientation of each stereomodel,
2. connection of adjacent models to form continuous strips and/or blocks, and
3. simultaneous adjustment of the photos from the strips and/or blocks to field-surveyed ground control.
X and Y coordinates of pass points can be located to an accuracy of 1/15,000 of the flying height, and Z coordinates can be located to an accuracy of 1/10,000 of the flying height.
With specialized equipment and procedures, planimetric accuracy of 1/350,000 of the flying height and vertical accuracy of 1/180,000 have been achieved.
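These accuracy ratios are easy to turn into ground units. A small sketch, assuming a hypothetical flying height of 3000 m (the function name is illustrative):

```python
def pass_point_accuracy(flying_height_m, ratio_denominator):
    """Expected coordinate accuracy expressed as a fraction
    (1 / ratio_denominator) of the flying height."""
    return flying_height_m / ratio_denominator

H = 3000.0  # assumed flying height in metres
print(pass_point_accuracy(H, 15000))   # X, Y accuracy: 0.2 m
print(pass_point_accuracy(H, 10000))   # Z accuracy:    0.3 m
```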
Analytical Aerotriangulation Technique
Chapter 17
• Several variations exist.
• Basically, all methods consist of writing equations that express the unknown elements of exterior orientation of each photo in terms of camera constants, measured photo coordinates, and ground coordinates.
• The equations are solved to determine the unknown orientation parameters and either simultaneously or subsequently, coordinates of pass points are calculated.
• By far the most common condition equations used are the collinearity equations.
• Analytical procedures like bundle adjustment can simultaneously enforce the collinearity condition on hundreds of photographs.
Simultaneous Bundle Adjustment
Chapter 17
Adjusting all photogrammetric measurements to ground control values in a single solution is known as a bundle adjustment. The process is so named because of the many light rays that pass through each lens position constituting a bundle of rays.
The bundles from all photos are adjusted simultaneously so that corresponding light rays intersect at positions of the pass points and control points on the ground.
After the normal equations have been formed, they are solved for the unknown corrections to the initial approximations for exterior orientation parameters and object space coordinates.
The corrections are then added to the approximations, and the procedure is repeated until the estimated standard deviation of unit weight converges.
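The iterate-until-convergence procedure can be sketched generically. The loop below is a toy illustration, not a real bundle adjustment: it applies corrections to an initial approximation until they become negligible, demonstrated with a simple 1-D Newton correction (all names and the sample function are made up):

```python
def iterate_corrections(initial, correction_fn, tol=1e-8, max_iter=50):
    """Generic iterate-until-convergence loop: repeatedly add the
    computed correction to the current approximation until the
    correction becomes negligible (toy sketch of the bundle-adjustment
    iteration pattern, not an actual adjustment)."""
    x = initial
    for _ in range(max_iter):
        dx = correction_fn(x)
        x += dx
        if abs(dx) < tol:
            return x
    raise RuntimeError("did not converge")

# toy 1-D example: Newton correction for f(x) = x^2 - 2
def newton_correction(x):
    return -(x * x - 2.0) / (2.0 * x)

root = iterate_corrections(1.0, newton_correction)
print(root)  # ~1.41421356
```

In the real adjustment the "correction" step is the solution of the normal equations for all unknowns at once, and convergence is judged by the standard deviation of unit weight.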
Quantities in Bundle Adjustment
Chapter 17
The unknown quantities to be obtained in a bundle adjustment consist of:
1. the X, Y and Z object space coordinates of all object points, and
2. the exterior orientation parameters of all photographs.
The observed (measured) quantities associated with a bundle adjustment are:
1. x and y photo coordinates of images of object points,
2. X, Y and/or Z coordinates of ground control points, and
3. direct observations of the exterior orientation parameters of the photographs.
The first group of observations, photo coordinates, comprises the fundamental photogrammetric measurements.
The next group of observations is coordinates of control points determined through field survey.
The final set of observations can be obtained using an airborne GPS control system as well as inertial navigation systems (INSs), which can measure the angular attitude of a photograph.
Consider a small block consisting of 2 strips with 4 photos per strip, with 20 pass points and 6 control points, totaling 26 object points; with 6 of those also serving as tie points connecting the two adjacent strips.
Bundle Adjustment on a Photo Block
Chapter 17
In this case:
The number of unknown object coordinates = no. of object points × no. of coordinates per object point = 26 × 3 = 78
The number of unknown exterior orientation parameters = no. of photos × no. of parameters per photo = 8 × 6 = 48
Total number of unknowns = 78 + 48 = 126
No. of imaged points = 4 × 8 (photos 1, 4, 5 & 8 have 8 imaged points each) + 4 × 11 (photos 2, 3, 6 & 7 have 11 imaged points each) = 76 point images in total
The number of photo coordinate observations = no. of imaged points × no. of photo coordinates per point = 76 × 2 = 152
The number of ground control observations = no. of 3D control points × no. of coordinates per point = 6 × 3 = 18
The number of exterior orientation observations = no. of photos × no. of parameters per photo = 8 × 6 = 48
If all 3 types of observations are included, there will be a total of 152 + 18 + 48 = 218 observations; if only the first two types are included, there will be only 152 + 18 = 170 observations.
Thus, regardless of whether the exterior orientation parameters were observed, a least squares solution is possible, since the number of observations in either case (218 or 170) is greater than the number of unknowns (126).
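The bookkeeping for this sample block can be checked with a few lines of arithmetic:

```python
# bookkeeping for the sample block: 2 strips x 4 photos,
# 26 object points (20 pass points + 6 control points)
n_points, n_photos = 26, 8
n_imaged = 4 * 8 + 4 * 11   # photos 1,4,5,8 image 8 points; 2,3,6,7 image 11

unknowns = n_points * 3 + n_photos * 6       # object coords + EO parameters
photo_obs = n_imaged * 2                     # x and y per imaged point
control_obs = 6 * 3                          # X, Y, Z per control point
eo_obs = n_photos * 6                        # observed EO parameters

print(unknowns)                              # 126
print(photo_obs + control_obs)               # 170
print(photo_obs + control_obs + eo_obs)      # 218
```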
The next question is: how are these equations solved?
We start with the observation equations (the collinearity condition equations we have already seen), linearize them, and then use a least squares procedure to find the unknowns.
We will start by refreshing our memories on least squares solution of over-determined equation set.
Relevant Definitions
Appendix A & B
Observations are the directly observed (or measured) quantities which contain random errors.
True Value is the theoretically correct or exact value of a quantity. It can never be determined, because no matter how accurate, the observation will always contain small random errors.
Accuracy is the degree of conformity to the true value.
Since true value of a continuous physical quantity can never be known, accuracy is likewise never known. Therefore, it can only be estimated.
Sometimes, accuracy can be assessed by checking against an independent, higher accuracy standard.
Precision is the degree of refinement of a quantity.
The level of precision can be assessed by making repeated measurements and checking the consistency of the values.
If the values are very close to each other, the measurements have high precision and vice versa.
Relevant Definitions
Appendix A & B
Error is the difference between any measured quantity and the true value for that quantity.
Most probable value (MPV) is that value for a measured or indirectly determined quantity which, based upon the observations, has the highest probability.
The MPV of a quantity directly and independently measured having observations of equal weight is simply the mean:
MPV = Σx / m
where Σx is the sum of the individual measurements, and m is the number of observations.
Types of errors
Random errors (accidental and compensating)
Systematic errors (cumulative; measured and modeled to compensate)
Mistakes or blunders (avoided as far as possible; detected and eliminated)
Relevant Definitions
Residual is the difference between any measured quantity and the most probable value for that quantity.
It is the value which is dealt with in adjustment computations, since errors are indeterminate. The term error is frequently used when residual is in fact meant.
Degrees of freedom is the number of redundant observations (those in excess of the number actually needed to calculate the unknowns).
Weight is the relative worth of an observation compared to any other observation.
Measurements are weighted in adjustment computations according to their precisions.
Logically, a precisely measured value should be weighted more in an adjustment so that the correction it receives is smaller than that received by less precise measurements.
If same equipment and procedures are used on a group of measurements, each observation is given an equal weight.
Appendix B
Relevant Definitions
Appendix B
Standard deviation (also called “root mean square error” or “68 percent error”) is a quantity used to express the precision of a group of measurements.
For ‘m’ direct, equally weighted observations of a quantity, its standard deviation is:
S = ±√(Σv² / r)
where Σv² is the sum of the squares of the residuals and r is the number of degrees of freedom (r = m − 1).
According to the theory of probability, 68% of the observations in a group should have residuals smaller than the standard deviation.
The area between –S and +S in a Gaussian distribution curve (also called Normal distribution curve) of the residual, which is same as the area between average-S and average+S on the curve of measurements, is 68%.
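The standard deviation formula above can be implemented directly. A minimal sketch for equally weighted direct observations (the sample measurement list is made up for illustration):

```python
def std_dev(measurements):
    """Standard deviation of equally weighted direct observations:
    S = sqrt(sum(v^2) / r), with residuals v taken about the mean
    (the MPV) and r = m - 1 degrees of freedom."""
    m = len(measurements)
    mpv = sum(measurements) / m                     # most probable value
    sum_v2 = sum((x - mpv) ** 2 for x in measurements)
    return (sum_v2 / (m - 1)) ** 0.5

# five repeated measurements of the same distance (illustrative values)
print(std_dev([10.1, 10.3, 10.2, 10.2, 10.2]))
```

This matches the sample standard deviation (divisor m − 1), consistent with r = m − 1 degrees of freedom for a single directly measured quantity.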
Fundamental Condition of Least Squares
For a group of equally weighted observations, the fundamental condition which is enforced in least square adjustment is that the sum of the squares of the residuals is minimized.
Suppose a group of ‘m’ equally weighted measurements is taken, with residuals v1, v2, v3, …, vm. Then:
Σ vi² = v1² + v2² + v3² + … + vm² → minimum
Basic assumptions underlying least squares theory:
1. Number of observations is large
2. Frequency distribution of the errors is normal (gaussian)
Appendix B
Applying Least Squares
Steps:
1) Write observation equations (one for each measurement) relating measured values to their residual errors and the unknown parameters.
2) Obtain equation for each residual error from corresponding observation.
3) Square and add residuals
4) To minimize Σv², take partial derivatives wrt each unknown variable and set them equal to zero
5) This gives a set of equations called normal equations which are equal in number to the number of unknowns.
6) Solve normal equations to obtain the most probable values for the unknowns.
Appendix B
Least Squares Example Problem
Let:
AB be a line segment,
C divide AB into 2 parts of lengths AC and CB,
D be the midpoint of AC, i.e. AD = DC = x, and
E and F trisect CB, i.e. CE = EF = FB = y.
(Figure: the segment A–D–C–E–F–B is divided into pieces x, x, y, y, y, so AC = 2x and CB = 3y.)
Four distance measurements along the segment give the corresponding observation equations:
x + 3y = 10.1 + v1
x + 2y = 6.9 + v2
2y = 6.2 + v3
2x + y = 4.8 + v4
In this least squares problem, the coefficients of the unknowns in the observation equations are other than zero and unity.
There are 4 observation equations (m = 4) in 2 variables/unknowns (n = 2).
Take Σv² and differentiate partially w.r.t. the unknowns to get 2 equations in 2 unknowns.
The solution gives the most probable values of x and y.
Note:
If D is not the exact midpoint and E & F do not trisect CB into exactly equal parts, the actual x and y values may differ from segment to segment.
We only get the ‘most probable’ values for x and y!
Formulating Equations
Step 1) Observation equations (one for each measurement, including a residual for each observation):
x + 3y = 10.1 + v1
x + 2y = 6.9 + v2
2y = 6.2 + v3
2x + y = 4.8 + v4
Step 2) Equation for each residual error from the corresponding observation:
v1 = x + 3y − 10.1
v2 = x + 2y − 6.9
v3 = 2y − 6.2
v4 = 2x + y − 4.8
Step 3) Square and add residuals:
Σv² = v1² + v2² + v3² + v4² = (x + 3y − 10.1)² + (x + 2y − 6.9)² + (2y − 6.2)² + (2x + y − 4.8)²
Normal Equations and Solution
Step 4) Taking partial derivatives of Σv²:
∂(Σv²)/∂x = 2(x + 3y − 10.1) + 2(x + 2y − 6.9) + 0 + 2(2x + y − 4.8)·2
∂(Σv²)/∂y = 2(x + 3y − 10.1)·3 + 2(x + 2y − 6.9)·2 + 2(2y − 6.2)·2 + 2(2x + y − 4.8)
Normal Equations (setting the partial derivatives to zero):
2(x + 3y − 10.1) + 2(x + 2y − 6.9) + 2(2x + y − 4.8)·2 = 0
2(x + 3y − 10.1)·3 + 2(x + 2y − 6.9)·2 + 2(2y − 6.2)·2 + 2(2x + y − 4.8) = 0
Simplified Normal Equations:
12x + 14y − 53.2 = 0
14x + 36y − 122.6 = 0
Step 5) Solving (dividing by 2 gives 6x + 7y = 26.6 and 7x + 18y = 61.3):
x = 0.8424
y = 3.0780
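The normal equations of this example can be solved in a few lines. A sketch using Cramer's rule on the 2 × 2 system 6x + 7y = 26.6, 7x + 18y = 61.3 (the helper name is illustrative):

```python
def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Solve the 2x2 linear system [a11 a12; a21 a22][x; y] = [b1; b2]
    by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det

# normal equations of the example: 6x + 7y = 26.6, 7x + 18y = 61.3
x, y = solve_2x2(6.0, 7.0, 7.0, 18.0, 26.6, 61.3)
print(round(x, 4), round(y, 4))  # 0.8424 3.078
```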
General Form of Observation Equations
Appendix B
Step 1: ‘m’ linear observation equations of equal weight containing ‘n’ unknowns (equations I):
a11X1 + a12X2 + a13X3 + … + a1nXn = L1 + v1
a21X1 + a22X2 + a23X3 + … + a2nXn = L2 + v2
a31X1 + a32X2 + a33X3 + … + a3nXn = L3 + v3
……………………………………………………
am1X1 + am2X2 + am3X3 + … + amnXn = Lm + vm
Where: Xj are the unknowns; aij are the coefficients of the unknown Xj’s; Li are the observations; vi are the residuals.
For m < n: underdetermined set of equations. For m = n: the solution is unique. For m > n: m − n observations are redundant, and least squares can be applied to find the MPVs.
General Form of Normal Equations
Appendix B
At Step 1 we have m equations in n variables; at the end of Step 4 we have n equations in n variables (equations II), with each sum taken over i = 1 … m:
(Σ ai1ai1)X1 + (Σ ai1ai2)X2 + (Σ ai1ai3)X3 + … + (Σ ai1ain)Xn = Σ ai1Li
(Σ ai2ai1)X1 + (Σ ai2ai2)X2 + (Σ ai2ai3)X3 + … + (Σ ai2ain)Xn = Σ ai2Li
(Σ ai3ai1)X1 + (Σ ai3ai2)X2 + (Σ ai3ai3)X3 + … + (Σ ai3ain)Xn = Σ ai3Li
…………………………………………………………………………
(Σ ainai1)X1 + (Σ ainai2)X2 + (Σ ainai3)X3 + … + (Σ ainain)Xn = Σ ainLi
Matrix Forms of Equations
Appendix B
Equations I (observation equations) in matrix form:
A X = L + V
Equations II (normal equations) in matrix form:
(AᵀA) X = AᵀL, so that X = (AᵀA)⁻¹ (AᵀL)
where:
A is the m × n coefficient matrix
A = [a11 a12 a13 … a1n; a21 a22 a23 … a2n; a31 a32 a33 … a3n; … ; am1 am2 am3 … amn]
X = [X1 X2 X3 … Xn]ᵀ is the n × 1 vector of unknowns
L = [L1 L2 L3 … Lm]ᵀ is the m × 1 vector of observations
V = [v1 v2 v3 … vm]ᵀ is the m × 1 vector of residuals
Standard Deviation of Residuals
Appendix B
The observation equations in matrix form give the residuals:
V = A X − L
The standard deviation of unit weight for an unweighted adjustment is:
S0 = √(VᵀV / r)
The standard deviations of the adjusted quantities are:
S_Xi = S0 √(Q_XiXi)
where,
r is the number of degrees of freedom and equals the number of observations minus the number of unknowns, i.e. r = m − n
S0 is the standard deviation of unit weight
S_Xi is the standard deviation of the ith adjusted quantity, i.e., the quantity in the ith row of the X matrix
Q_XiXi is the element in the ith row and the ith column of the matrix (AᵀA)⁻¹ in the unweighted case, or of the matrix (AᵀWA)⁻¹ in the weighted case
Standard Deviations in Example
For our example problem:
V = A X − L = [1 3; 1 2; 0 2; 2 1][0.8424; 3.0780] − [10.1; 6.9; 6.2; 4.8] = [−0.0236; 0.0984; −0.0440; −0.0372]
S0 = √(VᵀV / r) = √(0.013559 / 2) = ±0.0823
with AᵀA = [6 7; 7 18].
Applying S_Xi = S0 √(Q_XiXi), we find the standard deviations of x and y to be:
Sx = 0.2016 and Sy = 0.3492
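The residuals and the standard deviation of unit weight for this example can be verified numerically, using the example's design matrix A, solution X and observation vector L:

```python
# design matrix, solution, and observations from the example problem
A = [[1, 3], [1, 2], [0, 2], [2, 1]]
X = [0.8424, 3.0780]
L = [10.1, 6.9, 6.2, 4.8]

# residuals V = AX - L
V = [sum(a * x for a, x in zip(row, X)) - l for row, l in zip(A, L)]
r = len(L) - len(X)                        # degrees of freedom, r = m - n
S0 = (sum(v * v for v in V) / r) ** 0.5    # std deviation of unit weight

print([round(v, 4) for v in V])  # [-0.0236, 0.0984, -0.044, -0.0372]
print(round(S0, 4))              # 0.0823
```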
Linearization of Our Non-linear Equation Set
• Our least squares solution was for a linear set of equations.
• Remember, all our photogrammetric equations contain sines, cosines, etc.
• We need to linearize: use a Taylor series expansion.
Review of Collinearity Equations
Ch. 11 & App D
Collinearity equations:
xa = xo − f [m11(XA − XL) + m12(YA − YL) + m13(ZA − ZL)] / [m31(XA − XL) + m32(YA − YL) + m33(ZA − ZL)]
ya = yo − f [m21(XA − XL) + m22(YA − YL) + m23(ZA − ZL)] / [m31(XA − XL) + m32(YA − YL) + m33(ZA − ZL)]
Where,
xa, ya are the photo coordinates of image point a
XA, YA, ZA are object space coordinates of object/ground point A
XL, YL, ZL are object space coordinates of the exposure station location
f is the camera focal length
xo, yo are the coordinates of the principal point
m’s are functions of rotation angles omega, phi, kappa (as derived earlier)
Collinearity equations:
• are nonlinear and
• involve 9 unknowns:
1. omega, phi, kappa inherent in the m’s
2. object point coordinates (XA, YA, ZA)
3. exposure station coordinates (XL, YL, ZL)
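The collinearity projection can be sketched in a few lines. Note that the rotation-matrix terms below follow one common sequential omega-phi-kappa convention and should be treated as an assumption rather than a restatement of the book's derivation; the numeric values in the usage example (focal length, coordinates) are made up:

```python
import math

def rotation_matrix(omega, phi, kappa):
    """Sequential omega-phi-kappa rotation matrix (one common
    convention, assumed here; the book derives the m terms explicitly)."""
    so, co = math.sin(omega), math.cos(omega)
    sp, cp = math.sin(phi), math.cos(phi)
    sk, ck = math.sin(kappa), math.cos(kappa)
    return [
        [cp * ck,  co * sk + so * sp * ck,  so * sk - co * sp * ck],
        [-cp * sk, co * ck - so * sp * sk,  so * ck + co * sp * sk],
        [sp,       -so * cp,                co * cp],
    ]

def collinearity(m, f, x0, y0, A, Lc):
    """Project ground point A through exposure station Lc onto the
    photo using the collinearity equations."""
    dX, dY, dZ = (A[i] - Lc[i] for i in range(3))
    r = m[0][0] * dX + m[0][1] * dY + m[0][2] * dZ
    s = m[1][0] * dX + m[1][1] * dY + m[1][2] * dZ
    q = m[2][0] * dX + m[2][1] * dY + m[2][2] * dZ
    return x0 - f * r / q, y0 - f * s / q

# vertical photo (all angles zero) over a point straight below the lens:
# the image falls at the principal point
m = rotation_matrix(0.0, 0.0, 0.0)
print(collinearity(m, 152.0, 0.0, 0.0, (500.0, 500.0, 100.0),
                   (500.0, 500.0, 1620.0)))  # (0.0, 0.0)
```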
Linearization of Collinearity Equations
Appendix D
Rewriting the collinearity equations:
F = xo − f (r/q) = xa
G = yo − f (s/q) = ya
where
r = m11(XA − XL) + m12(YA − YL) + m13(ZA − ZL)
s = m21(XA − XL) + m22(YA − YL) + m23(ZA − ZL)
q = m31(XA − XL) + m32(YA − YL) + m33(ZA − ZL)
Applying Taylor’s theorem to these equations (using only up to first-order partial derivatives), we get…
Linearized Collinearity Equation Terms
Appendix D
F0 + (∂F/∂ω)0 dω + (∂F/∂φ)0 dφ + (∂F/∂κ)0 dκ + (∂F/∂XL)0 dXL + (∂F/∂YL)0 dYL + (∂F/∂ZL)0 dZL + (∂F/∂XA)0 dXA + (∂F/∂YA)0 dYA + (∂F/∂ZA)0 dZA = xa
G0 + (∂G/∂ω)0 dω + (∂G/∂φ)0 dφ + (∂G/∂κ)0 dκ + (∂G/∂XL)0 dXL + (∂G/∂YL)0 dYL + (∂G/∂ZL)0 dZL + (∂G/∂XA)0 dXA + (∂G/∂YA)0 dYA + (∂G/∂ZA)0 dZA = ya
where
F0, G0 are the functions F and G evaluated at the initial approximations for the 9 unknowns;
(∂F/∂ω)0, (∂F/∂φ)0, (∂G/∂ω)0, etc. are the partial derivatives of F and G wrt the indicated unknowns, evaluated at the initial approximations;
dω, dφ, dκ, dXL, etc. are unknown corrections to be applied to the initial approximations (angles are in radians).
Simplified Linearized Collinearity Equations
Chapter 11
Since photo coordinates xa and ya are measured values, residual terms must be included to make the equations consistent if they are to be used in a least squares solution.
The following simplified forms of the linearized collinearity equations include these residuals:
b11 dω + b12 dφ + b13 dκ − b14 dXL − b15 dYL − b16 dZL + b14 dXA + b15 dYA + b16 dZA = J + v_xa
b21 dω + b22 dφ + b23 dκ − b24 dXL − b25 dYL − b26 dZL + b24 dXA + b25 dYA + b26 dZA = K + v_ya
where J = xa − F0, K = ya − G0, and the b’s are coefficients equal to the partial derivatives.
In linearization using Taylor’s series, higher-order terms are ignored, hence these equations are approximations.
They are solved iteratively, until the magnitudes of the corrections to the initial approximations become negligible.
We now need to generalize the linearized collinearity conditions and rewrite them in matrix form.
While looking at the collinearity condition, we were concerned with only one object space point (point A).
Let's first generalize and then express the equations in matrix form…
Generalizing Collinearity Equations
Ch. 11 & App D
The observation equations which are the foundation of a bundle adjustment are the collinearity equations:
xij = xo − f [m11i(Xj − XLi) + m12i(Yj − YLi) + m13i(Zj − ZLi)] / [m31i(Xj − XLi) + m32i(Yj − YLi) + m33i(Zj − ZLi)]
yij = yo − f [m21i(Xj − XLi) + m22i(Yj − YLi) + m23i(Zj − ZLi)] / [m31i(Xj − XLi) + m32i(Yj − YLi) + m33i(Zj − ZLi)]
Where,
xij, yij are the measured photo coordinates of the image of point j on photo i, related to the fiducial axis system
Xj, Yj, Zj are coordinates of point j in object space
XLi, YLi, ZLi are the coordinates of the eyepoint (exposure station) of the camera for photo i
f is the camera focal length
xo, yo are the coordinates of the principal point
m11i, m12i, …, m33i are the rotation matrix terms for photo i
These nonlinear equations involve 9 unknowns per image point: omega, phi, kappa inherent in the m’s, object point coordinates (Xj, Yj, Zj), and exposure station coordinates (XLi, YLi, ZLi).
Linearized Equations in Matrix Form
Ch. 17
Ḃij Δ̇i + B̈ij Δ̈j = εij + Vij
where:
Ḃij = [b11 b12 b13 −b14 −b15 −b16; b21 b22 b23 −b24 −b25 −b26]ij contains the partial derivatives of the collinearity equations with respect to the exterior orientation parameters of photo i, evaluated at the initial approximations.
B̈ij = [b14 b15 b16; b24 b25 b26]ij contains the partial derivatives of the collinearity equations with respect to the object space coordinates of point j, evaluated at the initial approximations.
Δ̇i = [dωi dφi dκi dXLi dYLi dZLi]ᵀ contains corrections for the initial approximations of the exterior orientation parameters for photo i.
Δ̈j = [dXj dYj dZj]ᵀ contains corrections for the initial approximations of the object space coordinates of point j.
εij = [Jij Kij]ᵀ contains the measured minus computed x and y photo coordinates for point j on photo i.
Vij = [v_xij v_yij]ᵀ contains the residuals for the x and y photo coordinates.
Coming to the actual observations in the observation equations (collinearity conditions): first we consider the photo coordinate observations, then ground control, and finally the exterior orientation parameters…
Weights of Photo Coordinate Observations
Ch. 17
Proper weights must be assigned to photo coordinate observations in order for them to be included in the bundle adjustment.
Expressed in matrix form, the weights for the x and y photo coordinate observations of point j on photo i are:
Wij = σo² [σx² σxy; σxy σy²]ij⁻¹
where σo² is the reference variance; σx² and σy² are the variances in x and y, respectively; and σxy is the covariance of x with y.
The reference variance is an arbitrary parameter which can be set equal to 1, and in many cases the covariance in photo coordinates is equal to zero. In this case, the weight matrix for photo coordinates simplifies to:
Wij = [1/σx² 0; 0 1/σy²]ij
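Forming the 2 × 2 photo-coordinate weight matrix is a direct inversion of the covariance matrix. A small sketch (function name and parameter values are illustrative):

```python
def photo_weight_matrix(sigma0, sigma_x, sigma_y, sigma_xy=0.0):
    """2x2 weight matrix for the x, y photo coordinates of one image
    point: W = sigma0^2 times the inverse of the 2x2 coordinate
    covariance matrix [[sx^2, sxy], [sxy, sy^2]]."""
    det = sigma_x ** 2 * sigma_y ** 2 - sigma_xy ** 2
    s02 = sigma0 ** 2
    return [
        [s02 * sigma_y ** 2 / det, -s02 * sigma_xy / det],
        [-s02 * sigma_xy / det,    s02 * sigma_x ** 2 / det],
    ]

# with zero covariance and sigma0 = 1, W reduces to the diagonal
# matrix diag(1/sx^2, 1/sy^2)
print(photo_weight_matrix(1.0, 0.5, 0.5))
```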
Ground Control
Ch. 17
Even though ground control observation equations are linear, in order to be consistent with the collinearity equations they are also approximated by the first-order terms of Taylor’s series.
Observation equations for ground control coordinates are:
X̄j + v_Xj = Xj⁰ + dXj
Ȳj + v_Yj = Yj⁰ + dYj
Z̄j + v_Zj = Zj⁰ + dZj
where
Xj, Yj and Zj are the unknown coordinates of point j
X̄j, Ȳj and Z̄j are the measured coordinate values for point j
v_Xj, v_Yj and v_Zj are the coordinate residuals for point j
Xj⁰, Yj⁰ and Zj⁰ are the initial approximations for the coordinates of point j
dXj, dYj and dZj are corrections to the approximations for the coordinates of point j
Rearranging the terms and expressing in matrix form:
C̈ Δ̈j = ε̈j + V̈j
where
C̈ is the 3 × 3 identity matrix, Δ̈j = [dXj dYj dZj]ᵀ, ε̈j = [X̄j − Xj⁰; Ȳj − Yj⁰; Z̄j − Zj⁰], and V̈j = [v_Xj v_Yj v_Zj]ᵀ
Weights of Ground Control Observations
Ch. 17
As with photo coordinate measurements, proper weights must be assigned to ground control coordinate observations in order for them to be included in the bundle adjustment. Expressed in matrix form, the weights for the X, Y and Z ground control coordinate observations of point j are:
Ẅj = σo² [σX² σXY σXZ; σXY σY² σYZ; σXZ σYZ σZ²]j⁻¹
where
σo² is the reference variance
σX², σY² and σZ² are the variances in X, Y and Z, respectively
σXY is the covariance of X with Y
σYZ is the covariance of Y with Z
σXZ is the covariance of X with Z
(X̄j, Ȳj and Z̄j are the measured coordinate values for point j)
Exterior Orientation Parameters
Ch. 17
The final type of observation consists of measurements of the exterior orientation parameters. The form of their observation equations is similar to that of ground control:
ω̄i + v_ωi = ωi⁰ + dωi    φ̄i + v_φi = φi⁰ + dφi    κ̄i + v_κi = κi⁰ + dκi
X̄Li + v_XLi = XLi⁰ + dXLi    ȲLi + v_YLi = YLi⁰ + dYLi    Z̄Li + v_ZLi = ZLi⁰ + dZLi
The weight matrix for the exterior orientation parameters of photo i has the corresponding 6 × 6 form:
Ẇi = σo² Σi⁻¹
where Σi is the full 6 × 6 covariance matrix of the observed parameters ωi, φi, κi, XLi, YLi, ZLi (variances on the diagonal, covariances off the diagonal).
Now that we have all our observation equations and the observations, the next step in applying least squares is to form the normal equations…
Normal Equations
Ch. 17
With the observation equations and weights defined as previously, the full set of normal equations may be formed directly.
In matrix form, the full normal equations are:
N Δ = K
where Δ = [Δ̇1ᵀ … Δ̇mᵀ Δ̈1ᵀ … Δ̈nᵀ]ᵀ is the vector of corrections, m is the number of photos, n is the number of points, i is the photo subscript, and j is the point subscript. N has the block structure
N = [Ṅ N̄; N̄ᵀ N̈]
with blocks
Ṅii = Σj (Ḃijᵀ Wij Ḃij) + Ẇi    (6 × 6 diagonal blocks, one per photo)
N̈jj = Σi (B̈ijᵀ Wij B̈ij) + Ẅj    (3 × 3 diagonal blocks, one per point)
N̄ij = Ḃijᵀ Wij B̈ij    (6 × 3 off-diagonal blocks)
and the right-hand side K = [K̇1 … K̇m K̈1 … K̈n]ᵀ with
K̇i = Σj (Ḃijᵀ Wij εij) + Ẇi ε̇i
K̈j = Σi (B̈ijᵀ Wij εij) + Ẅj ε̈j
Notes:
• The contributions Ẇi to the N matrix and Ẇi ε̇i to the K matrix are made only when observations for the exterior orientation parameters exist.
• The contributions Ẅj to the N matrix and Ẅj ε̈j to the K matrix are made only for ground control point observations.
• If point j does not appear on photo i, the corresponding submatrix will be a zero matrix.
Ch. 17
Now that we have the equations ready, we can solve them starting from the initial approximations and iterate until the solution values no longer change.
In aerial photography, if GPS is used to determine the coordinates for exposure stations, we can include those in the bundle adjustment and reduce the amount of ground control that is required…
Bundle Adjustment with GPS control
Chapter 17
Using GPS in aircraft to estimate coordinates of the exposure stations in the adjustment can greatly reduce the number of ground control points required.
Considerations while using GPS control:
1. Object space coordinates obtained by GPS pertain to the phase center of the antenna but the exposure station is defined as the incident nodal point of the camera lens.
2. The GPS recorder records data at uniform time intervals called epochs (which may be on the order of 1s each), but the camera shutter operates asynchronously wrt the GPS fixes.
3. If a GPS receiver operating in the kinematic mode loses lock on too many satellites, the integer ambiguities must be redetermined.
Additional Precautions regarding Airborne GPS
Chapter 17
First, it is recommended that a bundle adjustment with analytical self-calibration be employed when airborne GPS control is used.
Often, due to inadequate modeling of atmospheric refraction distortion, strict enforcement of the calibrated principal distance (focal length) of the camera will cause distortions and excessive residuals in photo coordinates. Use of analytical self-calibration will essentially eliminate that effect.
Second, it is essential that appropriate object space coordinate systems be employed in data reduction.
GPS coordinates in a geocentric coordinate system should be converted to local vertical coordinates for the adjustment. After aerotriangulation is completed, the local vertical coordinates can be converted to whatever system is desired.
Though all our discussion so far has been for aerial photography, satellite images can also be used for mapping…
In fact, since the launch of the IKONOS, QuickBird, and OrbView-3 satellites, rigorous photogrammetric processing methods similar to those used for aerial imagery, such as the block adjustment used to solve aerial blocks totaling hundreds or even thousands of images, have routinely been applied to high-resolution satellite image blocks.
Aerotriangulation with Satellite Images
Chapter 17 & Gene, Grodecki (2002)
• Satellite imaging systems use linear sensor arrays that scan an image strip as the satellite orbits.
• Each scan line of the scene has its own set of exterior orientation parameters, with the principal point at the center of the line.
• The start position is the projection onto the ground of the center of row 0 (of an image with m columns and n rows).
• Since the satellite is highly stable during acquisition of the image, the exterior orientation parameters can be assumed to vary in a systematic fashion.
• Satellite image data providers supply Rational Polynomial Camera (RPC) coefficients, so it is possible to block adjust imagery described by an RPC model.
Aerotriangulation with Satellite Images
Chapter 17
The exterior orientation parameters vary systematically as functions of the x coordinate (the row number):
ωx = ω0 + a1·x    Фx = Ф0 + a2·x    қx = қ0 + a3·x
XLx = XL0 + a4·x    YLx = YL0 + a5·x    ZLx = ZL0 + a6·x + a7·x²
Here, x is the row number of some image position; ωx, Фx, қx, XLx, YLx, ZLx are the exterior orientation parameters of the sensor when row x was acquired; ω0, Ф0, қ0, XL0, YL0, ZL0 are the exterior orientation parameters of the sensor at the start position; and a1 through a7 are coefficients which describe the systematic variations of the exterior orientation parameters as the image is acquired.
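The row-dependent exterior orientation model can be sketched directly (all numeric values below are made up for illustration):

```python
def eo_at_row(x, eo0, a):
    """Exterior orientation of the line sensor at image row x, given
    the start-position parameters eo0 = (omega0, phi0, kappa0, XL0,
    YL0, ZL0) and the drift coefficients a = (a1, ..., a7)."""
    omega0, phi0, kappa0, XL0, YL0, ZL0 = eo0
    a1, a2, a3, a4, a5, a6, a7 = a
    return (omega0 + a1 * x, phi0 + a2 * x, kappa0 + a3 * x,
            XL0 + a4 * x, YL0 + a5 * x, ZL0 + a6 * x + a7 * x * x)

# at row 0 the parameters equal the start-position values
eo0 = (0.0, 0.0, 0.0, 1000.0, 2000.0, 680000.0)
print(eo_at_row(0, eo0, (0,) * 7))  # (0.0, 0.0, 0.0, 1000.0, 2000.0, 680000.0)
```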
This procedure of aerotriangulation, however, can only be performed at the ground station by the image providers who have access to the physical camera model.
For users wishing to block adjust imagery with their own proprietary ground control, or for other reasons, the image providers supply the images with RPCs…
Introduction to RPCs
• The RPC camera model is the ratio of two cubic functions of latitude, longitude, and height.
• RPC models transform 3D object-space coordinates into 2D image-space coordinates.
• RPC models have traditionally been used for rectification and feature extraction, and have recently been extended to block adjustment.
Let's look at the formal RPC mathematical model.
We start with defining the domain of the functional model and its normalization, and then go on to define the actual functions…
RPC Mathematical ModelSeparate rational functions are used to express the object-space to line and the object-space to sample coordinates relationship.
Assume that (φ,λ,h) are geodetic latitude, longitude and height above WGS84 ellipsoid in degrees, degrees and meters, respectively of a ground point and
(Line, Sample) are denormalized image space coordinates of the corresponding image point
To improve numerical precision, image-space and object-space coordinates are normalized to the range [-1, +1].
Given the object-space coordinates (φ,λ,h) and the latitude, longitude and height offsets and scale factors, we can normalize latitude, longitude and height:
P = (φ – LAT_OFF) / LAT_SCALE
L = (λ – LONG_OFF) / LONG_SCALE
H = (h – HEIGHT_OFF) / HEIGHT_SCALE
The normalized line and sample image-space coordinates (Y and X, respectively) are then calculated from their respective rational polynomial functions f(.) and g(.)
Definition of RPC Coefficients
Y = f(φ,λ,h) = NumL(P,L,H) / DenL(P,L,H) = cTu / dTu
X = g(φ,λ,h) = NumS(P,L,H) / DenS(P,L,H) = eTu / fTu
where,
NumL(P,L,H) = c1 + c2.L + c3.P + c4.H + c5.L.P + c6.L.H + c7.P.H + c8.L2 + c9.P2 + c10.H2 + c11.P.L.H + c12.L3 + c13.L.P2 + c14.L.H2 + c15.L2.P + c16.P3 + c17.P.H2 + c18.L2.H + c19.P2.H + c20.H3
DenL(P,L,H) = 1 + d2.L + d3.P + d4.H + d5.L.P + d6.L.H + d7.P.H + d8.L2 + d9.P2 + d10.H2 + d11.P.L.H + d12.L3 + d13.L.P2 + d14.L.H2 + d15.L2.P + d16.P3 + d17.P.H2 + d18.L2.H + d19.P2.H + d20.H3
NumS(P,L,H) = e1 + e2.L + e3.P + e4.H + e5.L.P + e6.L.H + e7.P.H + e8.L2 + e9.P2 + e10.H2 + e11.P.L.H + e12.L3 + e13.L.P2 + e14.L.H2 + e15.L2.P + e16.P3 + e17.P.H2 + e18.L2.H + e19.P2.H + e20.H3
DenS(P,L,H) = 1 + f2.L + f3.P + f4.H + f5.L.P + f6.L.H + f7.P.H + f8.L2 + f9.P2 + f10.H2 + f11.P.L.H + f12.L3 + f13.L.P2 + f14.L.H2 + f15.L2.P + f16.P3 + f17.P.H2 + f18.L2.H + f19.P2.H + f20.H3
There are 78 rational polynomial coefficients in all (the first denominator coefficients, d1 and f1, are fixed at 1).
u = [1 L P H LP LH PH L2 P2 H2 PLH L3 LP2 LH2 L2P P3 PH2 L2H P2H H3]
c = [c1 c2 … c20]T; d = [1 d2 … d20]T; e = [e1 e2 … e20]T; f = [1 f2 … f20]T
The denormalized RPC models for image j are given by:
Line = p(φ,λ,h) = f(φ,λ,h) . LINE_SCALE + LINE_OFF
Sample = r(φ,λ,h) = g(φ,λ,h) . SAMPLE_SCALE + SAMPLE_OFF
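The ground-to-image chain just defined (normalize, evaluate the two ratios of cubic polynomials, denormalize) can be sketched in Python. The function names, the toy coefficient vectors and the offset/scale tuple below are hypothetical; real values come from an image's RPC metadata:

```python
import numpy as np

def monomials(P, L, H):
    """The 20-term vector u in the ordering given above (1, L, P, H, LP, ...)."""
    return np.array([1, L, P, H, L*P, L*H, P*H, L**2, P**2, H**2,
                     P*L*H, L**3, L*P**2, L*H**2, L**2*P, P**3, P*H**2,
                     L**2*H, P**2*H, H**3])

def rpc_ground_to_image(lat, lon, h, c, d, e, f, off_scale):
    """Map (lat, lon, h) to (Line, Sample) through an RPC model.
    c, d, e, f are the 20-element coefficient vectors (d[0] = f[0] = 1);
    off_scale holds the ten offset/scale values from the image metadata."""
    (LAT_OFF, LAT_SCALE, LONG_OFF, LONG_SCALE, HEIGHT_OFF, HEIGHT_SCALE,
     LINE_OFF, LINE_SCALE, SAMP_OFF, SAMP_SCALE) = off_scale
    # Normalize object-space coordinates to roughly [-1, +1]
    P = (lat - LAT_OFF) / LAT_SCALE
    L = (lon - LONG_OFF) / LONG_SCALE
    H = (h - HEIGHT_OFF) / HEIGHT_SCALE
    u = monomials(P, L, H)
    Y = (c @ u) / (d @ u)        # normalized line
    X = (e @ u) / (f @ u)        # normalized sample
    # Denormalize back to pixel coordinates
    return Y * LINE_SCALE + LINE_OFF, X * SAMP_SCALE + SAMP_OFF

# Toy check: with NumL = P and NumS = L (all other coefficients zero),
# the model reduces to a plain normalize/denormalize mapping.
c = np.zeros(20); c[2] = 1.0
d = np.zeros(20); d[0] = 1.0
e = np.zeros(20); e[1] = 1.0
f = np.zeros(20); f[0] = 1.0
offs = (32.0, 1.0, -90.0, 1.0, 100.0, 500.0, 5000.0, 5000.0, 5000.0, 5000.0)
line, samp = rpc_ground_to_image(32.5, -89.5, 100.0, c, d, e, f, offs)
```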
RPC Block Adjustment Model
The proposed RPC block adjustment math model is defined in image space.
It uses the denormalized RPC models, p and r, to express the object-space to image-space relationship, together with adjustable functions, Δp and Δr, which are added to the rational functions to capture the discrepancies between the nominal and the measured image-space coordinates.
For each image point ‘i’ on image ‘j’, the RPC block adjustment math model is thus defined as follows:
Linei(j) = Δp(j) + p(j)(φk,λk,hk) + εLi
Samplei(j) = Δr(j) + r(j)(φk,λk,hk) + εSi
where
Linei(j) and Samplei(j) are the measured (on image j) line and sample coordinates of the ith image point, corresponding to the kth ground control or tie point with object-space coordinates (φk,λk,hk),
Δp(j) and Δr(j) are the adjustable functions expressing the differences between the measured and the nominal line and sample coordinates of ground control and/or tie points for image j,
εLi and εSi are random unobservable errors, and
p(j) and r(j) are the given line and sample denormalized RPC models for image j.
RPC Block Adjustment Model
The following is a general polynomial model defined in the domain of image coordinates to represent the adjustable functions, Δp and Δr:
Δp = a0 + aS.Sample + aL.Line + aSL.Sample.Line + aL2.Line2 +aS2.Sample2+…
Δr = b0 + bS.Sample + bL.Line + bSL.Sample.Line + bL2.Line2 + bS2.Sample2+…
The following truncated polynomial model, defined in the domain of image coordinates, is proposed to represent the adjustable functions:
Δp = a0 + aS.Sample + aL.Line
Δr = b0 + bS.Sample + bL.Line
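The action of the truncated adjustable functions on a nominal RPC projection can be sketched as follows; the function name and the numeric offsets are hypothetical:

```python
# With the truncated model, the adjusted image coordinates are just the
# nominal RPC projection plus an affine correction in image space; the
# offset-only special case keeps a0 and b0 alone.
def adjusted_line_sample(line_nom, samp_nom, a, b):
    """a = (a0, aS, aL) and b = (b0, bS, bL) for one image."""
    a0, aS, aL = a
    b0, bS, bL = b
    dp = a0 + aS * samp_nom + aL * line_nom   # Δp, line correction
    dr = b0 + bS * samp_nom + bL * line_nom   # Δr, sample correction
    return line_nom + dp, samp_nom + dr

# Offset-only model (aS = aL = bS = bL = 0): a pure image-space bias.
line, samp = adjusted_line_sample(7500.0, 7500.0,
                                  (1.5, 0.0, 0.0), (-0.8, 0.0, 0.0))
```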
Multiple overlapping images can be block adjusted using the RPC adjustment.
The overlapping images, with RPC models expressing the object-space to image-space relationship for each image, are tied together by tie points
Optionally, the block may also have ground control points with known or approximately known object-space coordinates and measured image positions.
Because there is only one set of observation equations per image point, index “i” uniquely identifies that set.
RPC Block Adjustment Algorithm
Thus, observation equations are formed for each image point i.
Measured image-space coordinates for each image point i (Linei(j) and Samplei(j)) constitute the adjustment model observables, while the image adjustment parameters (a0(j), aS(j), aL(j), b0(j), bS(j), bL(j)) and the object-space coordinates (φk, λk, hk) comprise the unknown adjustment model parameters.
The image coordinates appearing in Δp(j) and Δr(j) are approximate fixed values for the true image coordinates. Since the true image coordinates are not known, values of the measured image coordinates are used instead. The effect of using approximate values is negligible because measurements of image coordinates are performed with sub-pixel accuracy.
RPC Block Adjustment Algorithm
For the kth ground control or tie point, measured as the ith image point on the jth image, the RPC block adjustment equations are:

FLi = Linei(j) – Δp(j) – p(j)(φk,λk,hk) – εLi = 0
FSi = Samplei(j) – Δr(j) – r(j)(φk,λk,hk) – εSi = 0

(Observation equations)

with:

Δp(j) = a0(j) + aS(j).Samplei(j) + aL(j).Linei(j)
Δr(j) = b0(j) + bS(j).Samplei(j) + bL(j).Linei(j)
RPC Block Adjustment Algorithm
The observation equations can be written compactly as Fi = [FLi  FSi]T.
Applying a Taylor series expansion to the RPC block adjustment observation equations about the approximate values of the model parameters results in the linearized model:

AAi.dxA + AGi.dxG = wPi + εi

where εi = [εLi  εSi]T and wPi is the vector of misclosures for the ith image point, obtained by evaluating the observation equations at the approximate parameter values:

wPi = [ Linei(j) – a0(j) – aS(j).Samplei(j) – aL(j).Linei(j) – p(j)(φk,λk,hk) ]
      [ Samplei(j) – b0(j) – bS(j).Samplei(j) – bL(j).Linei(j) – r(j)(φk,λk,hk) ]
dx = x – x0 is the vector of unknown corrections to the approximate model parameters, x0,
dxA is the sub-vector of the corrections to the approximate image adjustment parameters for n images
dxG is the sub-vector of the corrections to the approximate object space coordinates for m ground control and p tie points
x0 is the vector of the approximate model parameters
ε is a vector of unobservable random errors
RPC Block Adjustment Algorithm
The vectors of corrections are built up as:

dxA = [da0(1) daS(1) daL(1) db0(1) dbS(1) dbL(1) … da0(n) daS(n) daL(n) db0(n) dbS(n) dbL(n)]T

dxG = [dφ1 dλ1 dh1 … dφm+p dλm+p dhm+p]T

dx = [dxAT  dxGT]T = x – x0

The design sub-matrix AAi contains the partial derivatives of the adjustable functions Δp(j) and Δr(j) with respect to the image adjustment parameters, and AGi the partial derivatives of p(j) and r(j) with respect to the object-space coordinates of point k, all evaluated at x0.
Cw: The a priori covariance matrix of the vector of misclosures, w,
AA: The first-order design matrix for the image adjustment parameters
AG: The first-order design matrix for the object space coordinates
RPC Block Adjustment Algorithm
As a consequence of the previous reductions, the RPC block adjustment model in matrix form reads:

A.dx = w + ε

or:

[ AA  AG ]   [ dxA ]   [ wP ]
[ I   0  ] . [ dxG ] = [ wA ] + ε
[ 0   I  ]             [ wG ]

with the block-diagonal a priori covariance matrix of the misclosure vector:

Cw = diag(CP, CA, CG)

For the ith image point (lying on image j and corresponding to ground point k), the only non-zero entries of the design sub-matrices fall in the columns of image j and of point k:

AAi = [ 0 … 0  1  Samplei(j)  Linei(j)  0  0  0  0 … 0 ]
      [ 0 … 0  0  0  0  1  Samplei(j)  Linei(j)  0 … 0 ]

AGi = [ 0 … 0  ∂p(j)/∂φk  ∂p(j)/∂λk  ∂p(j)/∂hk  0 … 0 ]
      [ 0 … 0  ∂r(j)/∂φk  ∂r(j)/∂λk  ∂r(j)/∂hk  0 … 0 ]

with the partial derivatives evaluated at the approximate parameter values x0.
RPC Block Adjustment Algorithm
wP is the vector of misclosures for the image-space coordinates,
wPi is the sub-vector of misclosures for the image-space coordinates of the ith image point on the jth image
wP = [wP1T  wP2T  …]T

where the sub-vector for the ith image point on the jth image, evaluated at the approximate parameter values, is:

wPi = [ Linei(j) – a0(j) – aS(j).Samplei(j) – aL(j).Linei(j) – p(j)(φk,λk,hk) ]
      [ Samplei(j) – b0(j) – bS(j).Samplei(j) – bL(j).Linei(j) – r(j)(φk,λk,hk) ]
wA=0 is the vector of misclosures for the image adjustment parameters,
wG=0 is the vector of misclosures for the object space coordinates,
CP is the a priori covariance matrix of image-space coordinates,
CA is the a priori covariance matrix of the image adjustment parameters,
CG is the a priori covariance matrix of object-space coordinates
A Priori Constraints
This block adjustment model allows the introduction of a priori information using the Bayesian estimation approach, which blurs the distinction between observables and unknowns – both are treated as random quantities.
In the context of least squares, a priori information is introduced in the form of weighted constraints. A priori uncertainty is expressed by CA, CP, and CG.
CA: uncertainty of a priori knowledge of the image adjustment parameters.
In an offset only model, the diagonal elements of CA (the variances of a0 and b0), express the uncertainty of a priori satellite attitude and ephemeris.
CP: prior knowledge of image-space coordinates for ground control and tie points.
Line and sample variances in CP are set according to the accuracy of the image measurement process.
CG: prior knowledge of object-space coordinates for ground control and tie points.
In the absence of any prior knowledge of the object coordinates for tie points, the corresponding entries in CG can be made large (like 10,000m) to produce no significant bias.
One could also remove the weighted constraints for object coordinates of tie points from the observation equations. But being able to introduce prior information for the object coordinates of tie points adds flexibility.
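The assembly of these three covariance blocks into Cw = diag(CP, CA, CG) can be sketched for a tiny toy block; all sigma values below are illustrative assumptions (0.5 pixel measurement accuracy, 10 pixel bias uncertainty, and the 10,000 m "no knowledge" tie-point value mentioned above):

```python
import numpy as np

# Toy block: one image point (2 image coordinates), one image with an
# offset-only model (a0, b0), and one tie point with no prior object-space
# knowledge. In practice the blocks grow with the number of points/images.
sigma_px = 0.5        # image measurement accuracy, pixels
sigma_bias = 10.0     # a priori uncertainty of a0 and b0, pixels
sigma_tie = 10000.0   # large sigma for unknown tie-point coordinates, meters

CP = np.diag([sigma_px**2] * 2)     # line and sample variances
CA = np.diag([sigma_bias**2] * 2)   # a0 and b0 variances
CG = np.diag([sigma_tie**2] * 3)    # object-space variances (all in meters here)

n = CP.shape[0] + CA.shape[0] + CG.shape[0]
Cw = np.zeros((n, n))               # block-diagonal Cw = diag(CP, CA, CG)
Cw[:2, :2] = CP
Cw[2:4, 2:4] = CA
Cw[4:, 4:] = CG
```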
RPC Block Adjustment Algorithm
Since the math model is non-linear, the least squares solution needs to be iterated until convergence is achieved. At each iteration step, application of the least squares principle results in the following vector of estimated corrections to the approximate values of the model parameters:

dx̂ = (Aᵀ Cw⁻¹ A)⁻¹ Aᵀ Cw⁻¹ w

At the subsequent iteration step, the vector of approximate model parameters x0 is replaced by the estimated values:

x̂ = x0 + dx̂

The least squares estimation is repeated until convergence is reached.
The covariance matrix of the estimated model parameters is:

Cx̂ = (Aᵀ Cw⁻¹ A)⁻¹
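One weighted least squares step of this estimator can be sketched on a toy system; the helper name and the numbers are made up, and a real adjustment would rebuild A and w from the current approximations at every iteration:

```python
import numpy as np

# One step of dx_hat = (A^T Cw^-1 A)^-1 A^T Cw^-1 w. In the full block
# adjustment, A stacks [AA AG; I 0; 0 I] and w stacks [wP; wA; wG].
def ls_step(A, w, Cw):
    W = np.linalg.inv(Cw)                    # weight matrix Cw^-1
    N = A.T @ W @ A                          # normal matrix
    dx = np.linalg.solve(N, A.T @ W @ w)     # estimated corrections
    Cx = np.linalg.inv(N)                    # covariance of the estimates
    return dx, Cx

# Toy example: two "observations" of a single parameter, plus one weakly
# weighted a priori constraint pulling it toward zero.
A = np.array([[1.0], [1.0], [1.0]])
w = np.array([2.0, 4.0, 0.0])
Cw = np.diag([1.0, 1.0, 1e6])                # constraint barely weighted
dx, Cx = ls_step(A, w, Cw)
x0 = np.zeros(1)
x_hat = x0 + dx                              # update; iterate to convergence
```

The weak constraint leaves the estimate essentially at the weighted mean of the two observations, illustrating how large a priori variances avoid biasing the solution.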
Experimental Results
Project located in Mississippi, with 6 stereo strips and 40 well-distributed GCPs.
Each of the 12 source images was produced as a georectified image with RPCs.
The images were then loaded onto a Socet SET workstation running the RPC block adjustment model.
Multiple well-distributed tie-points were measured along the edges of the images.
Ground points were selectively changed between control and check points to quantify block adjustment accuracy as a function of the number and distribution of GCPs.
The block adjustment results were obtained using a simple two-parameter, offset-only model with a priori values for a0 and b0 of 0 pixels and a priori standard deviation of 10 pixels.
GCP configuration | Avg Error Long (m) | Avg Error Lat (m) | Avg Error Height (m) | Std Dev Long (m) | Std Dev Lat (m) | Std Dev Height (m)
None          | -5.0 | 6.2 |  1.6 | 0.97 | 1.08 | 2.02
1 in center   | -2.0 | 0.5 | -1.1 | 0.95 | 1.07 | 2.02
3 on edge     | -0.4 | 0.3 |  0.2 | 0.97 | 1.06 | 1.96
4 in corners  | -0.2 | 0.3 |  0.0 | 0.95 | 1.06 | 1.95
All 40 GCPs   |  0.0 | 0.0 |  0.0 | 0.55 | 0.75 | 0.50
When all 40 GCPs are used, the ground control overwhelms the tie points and the a priori constraints, thus, effectively adjusting each strip separately such that it minimizes control point errors on that individual strip.
RPC - Conclusion
• RPC camera model provides a simple, fast and accurate representation of the IKONOS physical camera model.
• If the a-priori knowledge of exposure station position and angles permits a small angle approximation, then adjustment of the exterior orientation reduces to a simple bias in image space.
• Due to the high accuracy of IKONOS, even without ground control, block adjustment can be accomplished in the image space.
• RPC models are equally applicable to a variety of imaging systems and so could become a standardized representation of their image geometry.
• From simulation and numerical examples, it is seen that this method is as accurate as the ground station block adjustment with the physical camera model.
Finally, let's review all the topics that we have covered…
Summary
The mathematical concepts covered today were:
1. Least squares adjustment (formulating observation equations and reducing to normal equations)
2. Collinearity condition equations (derivation and linearization)
3. Space Resection (finding exterior orientation parameters)
4. Space Intersection (finding object space coordinates of common point in stereopair)
5. Analytical Stereomodel (interior, relative and absolute orientation)
6. Ground control for Aerial photogrammetry
7. Aerotriangulation
8. Bundle adjustment (adjusting all photogrammetric measurements to ground control values in a single solution)- conventional and RPC based
Terms
A lot of the terminology can cause confusion. For instance, while pass points and tie points mean the same thing, (ground) control points refer to points whose coordinates in the object-space/ground control coordinate system are known, while check points are points whose actual ground coordinates are very accurately known but which are treated as tie points in the adjustment, so that they can be used to assess its accuracy.
Below are some more terms used in photogrammetry, along with their brief descriptions:
1. stereopair: two adjacent photographs that overlap by more than 50%
2. space resection: finding the 6 elements of exterior orientation
3. space intersection: finding object point coordinates for points in the stereo overlap
4. stereomodel: object points that appear in the overlap area of a stereopair
5. analytical stereomodel: 3D ground coordinates of points in the stereomodel, mathematically calculated using analytical photogrammetric techniques
6. interior orientation: photo coordinate refinement, including corrections for film distortions, lens distortion, atmospheric refraction, etc.
7. relative orientation: relative angular attitude and positional displacement of two photographs.
8. absolute orientation: exposure station orientations related to a ground based coordinate system.
9. aerotriangulation: determination of X, Y and Z ground coordinates of individual points based on photo measurements.
10. bundle adjustment: adjusting all photogrammetric measurements to ground control values in a single solution.
11. horizontal tie points: tie points whose X and Y coordinates are known.
12. vertical tie points: tie points whose Z coordinate is known.
Terms
Software Products Available
There is a variety of software solutions available in the market today to perform the functionalities that we have seen. The following is a list of a few of them:
1. ERDAS IMAGINE (http://gi.leica-geosystems.com): ERDAS Imagine photogrammetry suite has all of the basic photogrammetry tools like block adjustment, orthophoto creation, metric and non-metric camera support, and satellite image support for SPOT, Ikonos, and others. It is perhaps one of the most popular photogrammetric tools currently.
2. ESPA (http://www.espasystems.fi): ESPA is a desktop software aimed at digital aerial photogrammetry and airborne Lidar processing.
3. Geomatica (http://www.pcigeomatics.com/geomatica/demo.html): PCI Geomatics’ Geomatica offers a single integrated environment for remote sensing, GIS, photogrammetry, cartography, web and development tools. A demo version of the software is also available at their website.
4. Image Station (http://www.intergraph.com): Intergraph’s Z/I Imaging ImageStation comprises modules like Photogrammetric Manager, Model Setup, Digital Mensuration, Automatic Triangulation, Stereo Display, Feature Collection, DTM Collection, Automatic Elevations, ImageStation Base Rectifier, OrthoPro, PixelQue, Image Viewer, Image Analyst.
5. INPHO (http://www.inpho.de): INPHO is an end-to-end photogrammetric systems supplier. INPHO’s portfolio covers the entire workflow of photogrammetric projects, including aerial triangulation, stereo compilation, terrain modeling, orthophoto production and image capture.
6. iWitness (http://www.iwitnessphoto.com): iWitness from DeChant Consulting Services is a close-range photogrammetry software system that has been developed for accident reconstruction and forensic measurement.
Software Products Available
7. (Aerosys) OEM Pak (http://www.aerogeomatics.com/aerosys/products.html): This free package from Aerosys offers the exact same features as its Pro version, except that the bundle adjustment is limited to a maximum of 15 photos.
8. PHOTOMOD (http://www.racurs.ru/?page=94): PHOTOMOD, a software family from Racurs, Russia, comprises products for photogrammetric processing of remote sensing data that allow extraction of geometrically accurate spatial information from almost any commercially available type of imagery.
9. PhotoModeler (http://www.photomodeler.com/downloads/default.htm): PhotoModeler, the software program from Eos Systems, allows you to create 3D models and measurements from photographs with export capabilities to 3D Studio 3DS, Wavefront OBJ, OpenNURBS/Rhino, RAW, Maya Script format, and Google Earth’s KML and KMZ, etc.
10. SOCET SET (http://www.socetgxp.com): This is a digital photogrammetry software application from BAE Systems. SOCET SET works with the latest airborne digital sensors and includes innovative point-matching algorithms for multi-sensor triangulation. SOCET SET used to be the standard against which all other photogrammetry packages were measured.
11. SUMMIT EVOLUTION (http://www.datem.com/support/download.html):Summit Evolution is the digital photogrammetric workstation from DAT/EM, released in April 2001 at the ASPRS Conference. The features of the software include subpixel functionality, support for different orientation methods and various formats.
12. Vr Mapping Software (http://www.cardinalsystems.net): Vr Mapping Software Suite includes modules for 2D/3D collection and editing, stereo softcopy, orthophoto rectification, aerial triangulation, bundle adjustment, ortho mosaicing, volume computation, etc.
Open Source Software Solutions
There are three separate modules for relative orientation (relor.exe), space resection (resect.exe) and 3D conformal coordinate transformation (3DCONF.exe) available at: http://www.surv.ufl.edu/wolfdewitt/download.html
Another open source program is DGAP, a program for General Analytical Positioning that can be found at:
http://www.ifp.uni-stuttgart.de/publications/software/openbundle/index.en.html
References
1. Wolf, Dewitt: “Elements of Photogrammetry”, McGraw Hill, 2000
2. Dial, Grodecki: “Block Adjustment with Rational Polynomial Camera Models”, ACSM-ASPRS 2002 Annual Conference Proceedings, 2002
3. Grodecki, Dial: “Block Adjustment of High-Resolution Satellite Images described by Rational Polynomials”, PE&RS Jan 2003
4. Wikipedia
5. Other online resources
6. Software reviews from: http://www.gisdevelopment.net/downloads/photo/index.htm and http://www.gisvisionmag.com/vision.php?article=200202%2Freview.html