7/26/2019 Auto-Align Manuscript Supplement Final
Auto-align Image Co-registration Supplementary Information
1 Algorithm structures
Algorithm 1: rotational offset
This algorithm is designed to determine the difference in angle between the same feature in two different images, with respect to the vertical, due to the difference in orientation of the detection devices used for their acquisition.
If the data in a two-dimensional image, $f(x, y)$, is duplicated, rotated by an angle, $\theta$, and scaled by a factor, $s$, we get a second image, $f'(x, y)$. The Fourier transform of the original image is related to the Fourier transform of the scaled-rotated image by the Fourier scale and Fourier rotation theorems. These state: a rotation in the spatial domain will result in the same rotation in the frequency domain; a scale in the spatial domain will appear as a reciprocal scale in the frequency domain (1, 2). These properties form the basis of the design of this optimisation-type algorithm. Phase correlation between two images, $f_1$ and $f_2$, is defined by:

$$r(x, y) = \mathcal{F}^{-1}\left\{\frac{\mathcal{F}\{f_1\}\,\mathcal{F}\{f_2\}^{*}}{\left|\mathcal{F}\{f_1\}\,\mathcal{F}\{f_2\}^{*}\right|}\right\} \quad (1)$$

Where: $\mathcal{F}$ corresponds to the Fourier transform of an image, $\mathcal{F}^{-1}$ corresponds to the inverse Fourier transform, and $^{*}$ denotes the complex conjugate of the Fourier transform of an image. Fast Fourier Transform (FFT) algorithms are used to calculate the Fourier transform of discrete image data. As the sole objective of this algorithm is to calculate rotation, we are only interested in the value of the maximum element of $r(x, y)$ but not its position (phase correlation is commonly used to calculate the translation between two images). The similarity metric for two images used in this algorithm is defined by:

$$R = \max_{x, y}\, r(x, y) \quad (2)$$
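Equations 1 and 2 can be sketched in a few lines of NumPy. This is an illustrative version only (the actual implementation is a Java ImageJ plug-in), and the small `eps` regulariser is an added assumption to guard against division by zero:

```python
import numpy as np

def similarity(f1, f2, eps=1e-12):
    """Peak of the phase-correlation surface (Equations 1 and 2)."""
    F1 = np.fft.fft2(f1)
    F2 = np.fft.fft2(f2)
    cross = F1 * np.conj(F2)            # F{f1} * F{f2}^*
    r = np.fft.ifft2(cross / (np.abs(cross) + eps))
    # Only the maximum VALUE is used for the rotation/scale tests;
    # the peak's position would instead give the translation.
    return float(np.real(r).max())
```

For identical images the metric is 1 to within numerical precision, and it is unchanged by a pure translation, which is why it can score rotation and scale independently of any offset between the images.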
(a) Initialisation
To determine the rotation accurately we require precise knowledge of the image pixel scales. Calibrated microscopes provide a pixel scale (the physical length each pixel represents) with their images, so the ratio of these can be used as a corrective scale factor. However, the calibration values will each carry an error and therefore this scale factor can only be used as an approximation. If $p_C$ and $p_T$ are the pixel calibration values for confocal and TIRF microscope images, respectively, then their ratio gives a first estimate for the difference in scale of the two images. Each value will carry some measurement error, therefore

$$s_0 = \frac{p_C}{p_T} \quad (3)$$

carries an error, $\delta s$. Visual assessment of the images provides a rough estimate for the rotational offset, $\theta_0$, between the two images. Choosing two points present in both images, and calculating the difference in angle of the lines joining those two points with respect to the horizontal, will give $\theta_0$. A reasonable estimate for human error, $\delta\theta$, is also assigned.
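As a concrete sketch of this initialisation step (the calibration values and picked points are hypothetical, and the quotient-rule error propagation is an added assumption not spelled out in the text):

```python
import math

# Hypothetical calibration values (nm per pixel) and their errors.
p_confocal, dp_confocal = 120.0, 2.0
p_tirf, dp_tirf = 107.0, 2.0

# Equation 3: first estimate of the scale difference.
s0 = p_confocal / p_tirf
# Propagated relative error of a quotient (assumed error model).
ds = s0 * math.hypot(dp_confocal / p_confocal, dp_tirf / p_tirf)

def angle_to_horizontal(p1, p2):
    """Angle (degrees) of the line joining two picked points,
    measured with respect to the horizontal."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

# The same two features picked in each image give the rough offset.
theta0 = (angle_to_horizontal((10, 12), (90, 60))
          - angle_to_horizontal((8, 10), (88, 52)))
dtheta = 2.0  # assumed human error, in degrees
```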
(b) Scale and rotation testing
On every loop of the algorithm a scale test and a rotation test are performed in parallel (Figure S1). The structure of the two tests is identical apart from the actual transformation performed during the test (i.e. scale or rotation). One image is transformed by some amount, the other is unchanged, and the correlation between the images is computed. This is performed for a range of different values, using the same image transformation on the same image and leaving the other image unaltered. The following description of this subsection will describe the optimisation of the scale difference, $s$, between the images. The mathematics for the rotation test is obtained simply by interchanging $s$ with $\theta$ and changing the transformation from a scale to a rotation.
The image being tested is a TIRF image, $T$, that has been rotated by the current best estimate of the angle, $\theta_i$, prior to testing. Given an estimate for the scale, $s_i$, along with the error associated with its initial measurement, $\delta s$, the search range for $s$ is defined by the closed interval:

$$s \in [s_i - \delta s,\; s_i + \delta s] \quad (4)$$

Test values are chosen at small intervals:

$$s_k = s_i - \delta s + k\,\Delta s \quad (5)$$

Where: $k = 0, 1, \dots, n$ and $\Delta s = 2\delta s / n$. The correlation between the transformed TIRF image, $T_{s_k}$, and the unchanged confocal image, $C$, is calculated for each $s_k$ using Equation 2:

$$R_k = \max_{x, y}\, r_{T_{s_k},\, C}(x, y) \quad (6)$$

$$D = \{(s_k, R_k) : k = 0, 1, \dots, n\} \quad (7)$$

The correlation for each $s_k$ is used to build a dataset, $D$, to which cubic splines are fitted in order to predict a more precise value for $s$, should the optimal solution lie in the region between two of the test values. Taking the argument of the maximum of the interpolated data gives the optimal $s$ for this test:
$$s^{*} = \arg\max_{s}\, \tilde{D}(s) \quad (8)$$

Where $\tilde{D}(s)$ denotes the cubic-spline interpolant of $D$.
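Equations 4-8 amount to a one-dimensional sweep-and-refine. A minimal sketch, assuming SciPy is available and using a synthetic correlation curve in place of the real image test:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def refine_peak(test_values, scores, n_fine=10_001):
    """Fit a cubic spline to the (test value, correlation) dataset
    (Equations 6-7) and take the argmax of the interpolant (Equation 8)."""
    spline = CubicSpline(test_values, scores)
    fine = np.linspace(test_values[0], test_values[-1], n_fine)
    return float(fine[np.argmax(spline(fine))])

# Synthetic correlation curve peaked at s = 1.095 (hypothetical truth).
s_est, ds = 1.05, 0.15                         # estimate and its error
s_k = np.linspace(s_est - ds, s_est + ds, 13)  # Equation 5 test values
R_k = np.exp(-((s_k - 1.095) / 0.08) ** 2)     # stand-in for Equation 6
s_best = refine_peak(s_k, R_k)                 # falls between two samples
```

The spline refinement matters because the optimum generally lies between two of the discrete test values, as the text notes.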
Errors used to define the search range (in Equation 4) are intentionally increased by a fixed factor from the initial user-set values in the Java implementation. The reason for this is to check whether the global result tries to diverge from a result which is known to be within the true search interval. If the result from Equation 8 falls outside the initial errors defined by Equation 4, then the estimate is re-assigned with a random value within the initial closed interval. Alternatively, this could mean that the actual solution lies outside the given search range and that the initial errors were too small. The Java implementation of this algorithm (ImageJ plug-in) performs the scale and rotation tests on parallel threads to increase the speed of each loop of the algorithm.
(c) Correlation, inspection, optimisation
The values from the previous step, $s^{*}$ and $\theta^{*}$, are used to scale and rotate the TIRF image, and then the correlation between the transformed image and the confocal image is calculated:

$$R_i^{tr} = \max_{x, y}\, r_{T_{s^{*},\, \theta^{*}},\, C}(x, y) \quad (9)$$

Where: the subscript $i$ denotes the current iteration and the superscript $tr$ denotes the comparison of the transformed images. At every iteration, $R_i^{tr}$ is compared with the value from the previous iteration, $R_{i-1}^{tr}$ ($R_0^{tr} = 0$). If $R_i^{tr} > R_{i-1}^{tr}$, then the values $s^{*}$ and $\theta^{*}$ are considered to be closer to the global solution than the previous estimates, $s_i$ and $\theta_i$, and therefore the estimates are updated for the next iteration: $s_{i+1} = s^{*}$ and $\theta_{i+1} = \theta^{*}$. If $R_i^{tr} \le R_{i-1}^{tr}$, then the correlation of each transformed test image is calculated individually with the confocal image. The transformation that gave the best correlation with the confocal image, when applied to the appropriate test image, had its value saved and was used as the next estimate; the other was discarded and the estimate from the previous iteration was used.
iteration was used. Limits on the search range that each test
(d) Break conditions
The final solution is generated when both $s$ and $\theta$ have converged to a stable result for 3 consecutive iterations. Convergence here is defined as the 3 consecutive estimates, $s_i$ and $\theta_i$, all falling within $\pm\epsilon_s$ and $\pm\epsilon_\theta$ of one another, where $\epsilon_s$ and $\epsilon_\theta$ are the predefined limits of convergence. The value of the final result is equal to the average of the 3 consecutive values that met the convergence conditions.
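The accept/compare loop of subsection (c) together with this break condition can be sketched as follows; the correlation surface and the update steps are synthetic stand-ins for the real image tests, and the individual-test fallback branch is omitted for brevity:

```python
import math

def correlation(s, theta):
    """Synthetic stand-in for Equation 9, peaked at s=1.1, theta=4.0."""
    return math.exp(-((s - 1.1) ** 2 + 0.01 * (theta - 4.0) ** 2))

def converged(history, eps_s=1e-3, eps_t=1e-2, runs=3):
    """Break condition: the last `runs` estimates agree to within the
    predefined limits of convergence for both scale and rotation."""
    if len(history) < runs:
        return False
    ss = [h[0] for h in history[-runs:]]
    ts = [h[1] for h in history[-runs:]]
    return max(ss) - min(ss) <= eps_s and max(ts) - min(ts) <= eps_t

s, theta, R_prev = 1.0, 0.0, 0.0   # initial estimates s_0, theta_0
history = []
for _ in range(200):
    # Stand-ins for the parallel scale/rotation test results (s*, theta*).
    s_star = s + 0.25 * (1.1 - s)
    t_star = theta + 0.25 * (4.0 - theta)
    R = correlation(s_star, t_star)
    if R > R_prev:                             # transformed images correlate
        s, theta, R_prev = s_star, t_star, R   # better: accept both estimates
    history.append((s, theta))
    if converged(history):
        break
```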
Algorithm 2: translation and scale
To determine the difference in translation between the features in the TIRF and confocal images, an algorithm using a combination of normalised cross-correlation (NCC) and an iterative scale correlation test was implemented. The TIRF image is first rotated by the angle determined by Algorithm 1, $\theta$, then scaled using the rough scale estimate, $s_0$, from Equation 3. Performing normalised cross-correlation on this image and its corresponding confocal image, $C$, determines an approximate measure for the relative translation, $(\Delta x, \Delta y)$, between the images (this is only an approximation, as $s_0$ is only an educated estimate). A reduced search window, the effective area in $C$ covered by $T$ translated by $(\Delta x, \Delta y)$, is then extracted. A scale test identical to that in Algorithm 1 is performed on down-sampled copies of the two images over the range defined by Equation 4, then again at their original resolution over a reduced range, to find the scale difference, $s$. The final result is generated by computing NCC again on the complete confocal image and the scaled TIRF image, $T_s$.
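The coarse translation step can be sketched with a plain FFT cross-correlation, a simplified stand-in for the fast NCC of reference (3); the mean subtraction here is an added assumption in place of full normalisation:

```python
import numpy as np

def coarse_shift(confocal, tirf):
    """Approximate (dx, dy) moving `tirf` onto `confocal`, from the peak
    of a circular FFT cross-correlation (stand-in for full NCC)."""
    a = confocal - confocal.mean()
    b = tirf - tirf.mean()
    c = np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))
    dy, dx = np.unravel_index(np.argmax(c), c.shape)
    h, w = c.shape
    # Map wrap-around indices to signed shifts.
    return (dx if dx <= w // 2 else dx - w,
            dy if dy <= h // 2 else dy - h)
```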
Algorithm 3: stage drift correction
Motion due to stage drift is detected using normalised cross-correlation performed on adjacent frames in time from a time-series data set. The assumption used here is that the morphology of the sample does not change dramatically from frame to frame, i.e. $I_t(x, y) \approx I_{t+1}(x, y)$. Images are translated with respect to the previous frame in the time series. If the motion/change in morphology between frames is significant (significant being a net change in position or shape greater than the motion due to stage drift), the algorithm may register images according to the net motion of the objects in the image.
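A minimal sketch of the drift correction, registering each frame to its predecessor and accumulating the shifts; FFT cross-correlation stands in for the fast NCC, and circular `np.roll` stands in for a proper translation:

```python
import numpy as np

def frame_shift(a, b):
    """(dy, dx) that maps frame `b` onto frame `a`, from the peak of a
    circular FFT cross-correlation (stand-in for normalised NCC)."""
    a = a - a.mean()
    b = b - b.mean()
    c = np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))
    dy, dx = np.unravel_index(np.argmax(c), c.shape)
    h, w = c.shape
    return (dy if dy <= h // 2 else dy - h,
            dx if dx <= w // 2 else dx - w)

def correct_drift(frames):
    """Register each frame to its predecessor (assuming the morphology
    changes little frame to frame) and undo the accumulated drift."""
    out, total = [frames[0]], np.zeros(2, dtype=int)
    for prev, cur in zip(frames, frames[1:]):
        total += frame_shift(prev, cur)
        out.append(np.roll(cur, tuple(total), axis=(0, 1)))
    return out
```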
Algorithm 4: dual-wavelength, single-CCD image alignment
The regions of the images containing data corresponding to signals from different wavelengths are defined manually and the images are separated. A scale test identical to that used in Algorithm 1 is performed on the two images to determine whether there is a scale difference between them due to a non-linear chromatic response of the microscope's magnification optics. Normalised cross-correlation is used to register the images. The final registered images are cropped to the largest rectangular size where features in both images lie on the same coordinates and all pixels existed in the original recorded image, i.e. there are no border extensions to match the image sizes.
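A sketch of the channel split and the final crop-to-overlap step; the side-by-side layout, the shift convention and all values are illustrative assumptions:

```python
import numpy as np

def split_channels(frame, boundary):
    """Manually defined split of a single-CCD frame into the two
    wavelength regions (a side-by-side layout is assumed here)."""
    return frame[:, :boundary], frame[:, boundary:]

def crop_to_overlap(a, b, dy, dx):
    """Crop two equal-sized registered channels to the largest rectangle
    whose pixels exist in BOTH originals (no border extension), given
    that `b` must be shifted by (dy, dx) to align with `a`."""
    h = a.shape[0] - abs(dy)
    w = a.shape[1] - abs(dx)
    ay, ax = max(dy, 0), max(dx, 0)      # crop origin in `a`
    by, bx = max(-dy, 0), max(-dx, 0)    # crop origin in `b`
    return a[ay:ay + h, ax:ax + w], b[by:by + h, bx:bx + w]
```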
Image Padding
Images were padded with zeros prior to any image Fourier transforms/convolutions. When convolving two images, one of size $n_1 \times m_1$ and the second of size $n_2 \times m_2$, both are padded with zeros to size $N \times M$, where $N = n_1 + n_2 - 1$ and $M = m_1 + m_2 - 1$. Prior to padding, the image edges were profiled with a decaying exponential to prevent aliasing artefacts due to discontinuities at the edges of the images.
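The padding rule and edge profiling can be sketched as follows; the taper `rate` and its exact functional form are illustrative assumptions:

```python
import numpy as np

def pad_for_convolution(a, b):
    """Zero-pad both images to (n1 + n2 - 1) x (m1 + m2 - 1), the full
    linear-convolution size, so the circular FFT result does not wrap."""
    N = a.shape[0] + b.shape[0] - 1
    M = a.shape[1] + b.shape[1] - 1
    pad = lambda x: np.pad(x, ((0, N - x.shape[0]), (0, M - x.shape[1])))
    return pad(a), pad(b)

def taper_edges(img, rate=1.0):
    """Profile the image edges with a decaying exponential so the step
    down to the zero-padding border is softened."""
    h, w = img.shape
    dy = np.minimum(np.arange(h), np.arange(h)[::-1])[:, None]
    dx = np.minimum(np.arange(w), np.arange(w)[::-1])[None, :]
    d = np.minimum(dy, dx)               # distance to the nearest edge
    return img * (1.0 - np.exp(-rate * d))
```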
Fast normalised cross-correlation
Computing the correlation coefficient for two data sets using the traditional normalised cross-correlation method is computationally intensive. All algorithms here use the fast normalised cross-correlation algorithm defined by Lewis, J.P., which computes the cross-correlation in the Fourier domain and normalises it using pre-computed running-sum tables containing the integral of the image, and of the image squared, over the search window (3). This is far more efficient than computing the correlation coefficient directly.
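The core of Lewis' speed-up is that the running-sum tables turn every window sum into four look-ups. A minimal sketch of the tables and the O(1) window sum:

```python
import numpy as np

def running_tables(img):
    """Pre-computed tables of the fast-NCC method: running sums of the
    image and of the image squared, with a zero row/column prepended."""
    s = np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    s2 = np.pad(img ** 2, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    return s, s2

def window_sum(table, y, x, h, w):
    """Sum over img[y:y+h, x:x+w] in O(1) via four table look-ups."""
    return (table[y + h, x + w] - table[y, x + w]
            - table[y + h, x] + table[y, x])
```

With these tables the normalising denominator of the correlation coefficient costs a constant number of operations per search-window position, which is what makes the method fast.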
2 Robustness against noise
(a) Phantom images
The performance of the algorithms was validated using a set of 10 different phantom images generated using basic simulations of the image formation processes of TIRF and confocal microscopes imaging through the same objective. A virtual object was created in a 3D space and convolved with a 3D point spread function generated using a numerical integration of equation A10 in reference (4). The features in the TIRF phantom images were translated by exactly -20 pixels in both x and y directions, the TIRF images were rotated by 4°, and the confocal phantom was scaled up by a factor of 1.1; both image transformations were performed using bicubic interpolation. Each test image was affected by different levels of two different types of noise: intensity-dependent normally distributed noise to approximate Poisson noise, and Gaussian additive noise to simulate electrical read noise (Equation 10). The noise levels were increased from no noise, to that expected in experimental conditions, and then to much harsher conditions to find where the limits of the algorithm lie.
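The two noise processes described above can be simulated as follows; the noise scalings `sigma_p` and `sigma_g`, and the dB units of the SNR helper, are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(img, sigma_p, sigma_g):
    """Intensity-dependent normally distributed noise (Poisson
    approximation) plus additive Gaussian noise (read-noise model)."""
    poisson_like = sigma_p * np.sqrt(img) * rng.normal(0.0, 1.0, img.shape)
    read = rng.normal(0.0, sigma_g, img.shape) if sigma_g > 0 else 0.0
    return img + poisson_like + read

def snr_db(clean, noisy, mask):
    """RMS signal over RMS noise, restricted to the cell area
    (expressed here in dB)."""
    signal = np.sqrt(np.mean(clean[mask] ** 2))
    noise = np.sqrt(np.mean((noisy - clean)[mask] ** 2))
    return 20.0 * np.log10(signal / noise)
```

Note that on the zero-valued background the intensity-dependent term vanishes, which is why the SNR is computed only over the cell area.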
$$I_n(x, y) = I(x, y) + P(x, y) + G(x, y) \quad (10)$$

Where $P(x, y)$ is normally distributed noise with standard deviation proportional to $\sqrt{I(x, y)}$ (the intensity-dependent Poisson approximation) and $G(x, y)$ is the additive Gaussian read noise.

The signal-to-noise ratio (SNR) of each phantom image was calculated using the RMS signal and the RMS noise within the area of the image covered by the cell. The reason for this is that the surrounding area of the cell before noise distortion was zero-valued, therefore the effect of the intensity-dependent noise on this region will be zero.

(b) Algorithm 1
The estimate values for the rotation and scale were chosen at random within the ranges 4 ± 2° and 1.1 ± 0.1, respectively, for each test of the algorithm. The values were chosen at random to simulate variation in users' feature selection.
Figure S3 shows the average angle calculated using Algorithm 1 for the 10 different phantom images. The results show that the intensity-dependent Poisson noise negatively affects the accuracy of the angle calculation; the tests with low levels of additive noise (level-0 (red) and level-1 (green)) demonstrate this. The effects of the Gaussian additive noise do not become apparent until levels 4 and 5. The error bars show that the variance of the results generally increases with increasing Poisson noise but does not always affect the accuracy of the average result. Most of the results here are accurate to within 0.1° and no results fall outside 0.2° of the actual result.
The computational efficiency of this algorithm is poor, but it is only required to calculate the rotational offset once per microscope set-up. If implemented using parallel threads over a suitable search range, for images of approximately 512x512 pixels a result will be calculated in approximately 30 seconds. To maximise the accuracy of the final result, multiple high-contrast, high-resolution, and high-SNR images of fixed samples should be used.
(c) Algorithm 2
Before the final translation is calculated, the scale difference is calculated; the accuracy of this determines the accuracy of the final translation. Figure S4 shows the results for the scale calculated prior to the final translation (Figure S2). Again, the variance of the result generally increases with the noise levels, as might be expected. The intensity-dependent Poisson noise appears to be the main cause of inaccuracies. The results show that the algorithm is robust against additive Gaussian noise until it becomes the main contributor to the total noise.
Figure S5 shows the results for the final image translation once the images were scaled. The actual translation of the TIRF image relative to the confocal image is (20, 20); 80 percent of the results fall within a distance of 1 pixel of this result, most of which fall exactly on the correct result. Calculation of the translation is the most basic part of the algorithm, so any inaccuracies at this stage are almost certainly due to either inaccurate calculation of the scale or of the rotational offset between the images. In practice, the noise conditions of the images used would not be as harsh as those of the images used here to show where the algorithm fails.
1. Marks RJ. Handbook of Fourier Analysis & Its Applications. 1st ed. New York, N.Y.; Oxford: Oxford University Press; 2009.
2. Bracewell RN. Fourier Analysis and Imaging. New York: Springer US; 2003.
3. Lewis JP. Fast normalized cross-correlation. 1995. p. 120-123.
4. Webb RH. Confocal optical microscopy. Reports on Progress in Physics 1996;59:427.
Figure S1: The basic structure of the optimisation algorithm designed to calculate the rotational offset between TIRF and confocal microscopy images. $C$ and $T$ are images from the confocal and TIRF microscopes respectively; $s_i$ and $\theta_i$ are the current estimates for the scale difference and rotational offset between the images; $s^{*}$ and $\theta^{*}$ are the values giving the highest correlation from their respective tests. Scale is also optimised, as precise knowledge of the scale is required in order to calculate the rotation precisely.
Figure S2: The basic structure of the algorithm used todetermine the difference in translation between the TIRF and
Confocal images.
Figure S3: A graph showing the average calculated angle for the 10 different phantom images, rotated by 4° and with a scale difference of 1.1, containing different levels of noise. The Signal-to-Noise Ratio (SNR) on this graph is that of only the TIRF images; each corresponding confocal image has very similar noise levels. The error bars here represent the Standard Error of the Mean (SEM) of the calculated angles. Each colour series represents a different level of Gaussian additive noise and has 7 increasing levels of Poisson noise.
Figure S4: The average scale calculated for the 10 phantom image pairs. The confocal images had a scale factor 1.1 greater than the TIRF images. Error bars represent the Standard Error of the Mean (SEM). The Signal-to-Noise Ratio (SNR) on this graph is that of only the TIRF images; the corresponding confocal images have very similar noise levels. Each colour series represents a different level of Gaussian additive noise and has increasing levels of Poisson noise.
Figure S5: A plot to show the translation between TIRF-confocal phantom image pairs calculated for different levels of noise. Prior to testing, the TIRF feature was translated by -20 pixels in both x and y directions. The SNR of the images is indicated by the colour of the data ring. The size of each ring indicates the number of points with that value.
Video 1
Video 1 shows three movies: the first is the original data set, which moves due to stage drift; the second shows the same data set corrected for stage drift; the third shows an RGB merge of the first two. The RGB merge displays how much stage drift occurred over the 45 frames: the original data is in the red channel and the corrected data in the green channel. The images are of the same cells shown in Figure 3 in the main article.
Figure S1
Figure S2
Figure S3
Figure S4
Figure S5