

THE DIRECT USE OF CURVELETS IN MULTIFOCUS FUSION

H. Hariharan, A. Koschan and M. Abidi

Imaging Robotics and Intelligent Systems Laboratory, University of Tennessee, Knoxville, TN 37996.

ABSTRACT

In this effort, a data-driven and application-independent technique for combining focal information from different focal planes is presented. Input images, acquired by imaging systems with limited depth of field, are decomposed using a relatively new analysis tool called curvelets. The extracted curvelets represent polar 'wedges' of the frequency domain. Fusion is performed on medial and peripheral curvelets by relevant fusion rules, and the fused image combines information from different focal planes while extending the depth of field of the scene. The main contribution of this effort is the direct use of curvelets in combining multifocal images. Several illustrative examples and objective comparisons are provided.

Index Terms— Multifocus fusion, image fusion, curvelet transform, extending depth of field

1. INTRODUCTION

When imaging a 3-dimensional scene, it is often preferred to have all the objects comprising the scene in focus in the acquired image. Typically, lenses exhibit limited depth of field (DOF), and this prevents a conventional imaging system from obtaining such an all-in-focus image. This is a major problem in many imaging applications; two examples are the inspection of microscopic scenes and long-range person tracking. In multifocus fusion, the central idea is to acquire images from different focal volumes in the 3D scene and fuse them into one image in which the entire scene appears to be in focus, as shown in Figure 1. A focal volume in the scene is the 3D space that lies in the lens' field of view and also intersects its DOF. In other words, the aim is to create a scene as if imaged by a lens with an extremely narrow aperture, without the sensitivity issues such lenses possess.

Previous works in the literature explore various solutions to this problem, formulating techniques based on region selection [1,2], multiscale decomposition (MSD) [3,4], and learning methods [5]. A brief summary of the related work is given in Section 2. In this effort, we investigate the direct use of a relatively new decomposition tool, namely curvelet analysis, for multifocus fusion. Our method falls under the category of MSD fusion methods, as we segregate and fuse spectral information drawn from the input images to generate an all-in-focus image. Section 2 presents more information on the related work in this area of research. Our method is formulated in Section 3, after the necessary background on curvelet analysis. Illustrative examples and comparative results are presented in Section 4, before conclusions are drawn in Section 5.


Figure 1. An example of multifocus fusion: (a-c) input images acquired under a narrow DOF, denoted by red boundaries, and (d) a multifocus fused image, encompassed in a larger boundary, in which all focal planes are in focus.

2. RELATED WORK

In the literature, various solutions to the problem have been pursued based on region-based methods, multiscale decomposition (MSD), and learning methods. In region-based methods, the input images are initially divided into blocks, tiles, or image segments.



By using different sharpness criteria, one region per set of input image regions is selected, and a mosaic of such selected regions constitutes the fused image. This can be performed based on tiling [1,2] or on regions obtained by segmentation methods [6,7]. The most commonly reported issues in this family of methods are blocking effects [8]. In MSD-based methods, the input images are first decomposed into multiscale coefficients. Selected or processed coefficients are used in the synthesis to obtain the fused image. Fusion rules are employed in the selection or treatment of these coefficients [4,9]. The most widely reported issues in this family are ringing effects and related distortions [10]. In applications with acutely restricted depth of field, such as microscopic multifocus fusion, it is imperative to address focal overlap between input images. Here, we discuss a data-driven, general-purpose multifocus fusion method that is capable of fusing data from different applications, such as microscopic scene visualization and long-range feature tracking.
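To make the region-based family concrete, the following is a minimal, generic sketch of tile-based selection (our illustration only, with an arbitrary block size and a simple gradient-energy sharpness criterion; it is not the specific method of [1,2]):

```python
import numpy as np

def block_fusion(images, block=32):
    """Generic tile-based multifocus fusion: for each block location,
    copy the tile from whichever input image is sharpest there.
    Assumes co-registered images whose dimensions are multiples of
    the block size."""
    h, w = images[0].shape
    fused = np.zeros((h, w))
    for y in range(0, h, block):
        for x in range(0, w, block):
            tiles = [img[y:y+block, x:x+block].astype(float) for img in images]
            # Gradient energy as a simple per-tile sharpness criterion.
            scores = [sum(np.sum(g**2) for g in np.gradient(t)) for t in tiles]
            fused[y:y+block, x:x+block] = tiles[int(np.argmax(scores))]
    return fused
```

The blocking effects mentioned above arise exactly at the tile boundaries of such mosaics when adjacent tiles are taken from different source images.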

The key contribution of our method is the fusion of multifocus images by directly using features extracted by curvelet analysis. Curvelet analysis has been found useful in applications wherein curve and line characteristics are extracted from a stack of input images and used for fusing focal content into one all-in-focus image. In the work of Li et al., an elaborate method that cascades curvelet and wavelet analysis for multifocus fusion has been presented [11]. In this effort, motivated by promising reports on curvelet properties claiming that the curvelet transform is well suited for representing curves and lines [11,12], we investigate the direct use of curvelets in multifocus fusion.

3. METHODOLOGY

Curvelets are a relatively new signal analysis tool introduced by Candès and Donoho [13]. Curvelets differ from other MSD methods and claim very high directional sensitivity and anisotropic virtues. Studies claim that curvelets are more appropriate for the analysis of curve and line characteristics in an image than typical MSD methods [11]. Theoretically, the curvelet transform is a multiscale pyramid, with multiple angular directions and positions at each length and scale, and with needle-shaped components at finer scales. Curvelets have certain geometric virtues that differentiate them from other MSD methods. The most notable is a parabolic scaling relationship, which imposes that, at a given scale, each component is contained in an envelope aligned on a 'ridge' of width $2^{-j}$ and length $2^{-j/2}$.

3.1. Curvelets

Initially, the curvelet transform decomposes the image into a series of disjoint scales. Then, each scale is analyzed by means of a local ridgelet transform. In the mathematical treatment of curvelets, we
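As a worked restatement of this parabolic scaling (an illustration we add here, not an additional result of the paper), the envelope dimensions obey:

```latex
% Parabolic scaling of curvelet envelopes at scale 2^{-j}:
\[
  \text{width} \approx 2^{-j}, \qquad
  \text{length} \approx 2^{-j/2}
  \quad\Longrightarrow\quad
  \text{width} \approx \text{length}^{2}.
\]
% For example, at j = 4: width ~ 2^{-4} = 1/16 and length ~ 2^{-2} = 1/4,
% and indeed (1/4)^2 = 1/16.
```

This anisotropy is what makes the elements needle-shaped at fine scales and well matched to curved edges.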

work in the domain $\mathbb{R}^2$ with spatial variable $x$ and frequency variable $\omega$. Polar coordinates in the frequency domain are represented by $r$ and $\theta$. For each level $j \ge j_0$, a frequency window $U_j$ is created, supported by a pair of windows, namely the radial support $W(\cdot)$ and the angular support $V(\cdot)$. The frequency window is applied with scale-dependent window widths in each direction as

$$U_j(r,\theta) = 2^{-3j/4}\, W\!\left(2^{-j} r\right) V\!\left(\frac{2^{\lfloor j/2 \rfloor}\,\theta}{2\pi}\right), \qquad (1)$$

where $\lfloor j/2 \rfloor$ is the integer part of $j/2$. The support of $U_j$ is a polar 'wedge' in the frequency domain. The symmetrized form $U_j(r,\theta) + U_j(r,\theta+\pi)$ is used to generate real-valued curvelets. These are used in the fusion of focal information subsequently. The basis function, or 'mother curvelet', $\varphi_j(x)$, is defined by means of its Fourier transform $\hat{\varphi}_j(\omega) = U_j(\omega)$. Thus, for a given input image

from a stack of $N$ multifocal images, $f_i$, the curvelet coefficient at scale $2^{-j}$, orientation $\theta_l$, and position

$$x_k^{(j,l)} = R_{\theta_l}^{-1}\left(k_1 \cdot 2^{-j},\; k_2 \cdot 2^{-j/2}\right)$$

is defined by

$$c_i(j,l,k) = \langle f_i, \varphi_{j,l,k} \rangle = \int_{\mathbb{R}^2} f_i(x)\, \varphi_{j,l,k}(x)\, dx, \qquad (2)$$

where

$$\varphi_{j,l,k}(x) = \varphi_j\!\left(R_{\theta_l}\!\left(x - x_k^{(j,l)}\right)\right), \qquad (3)$$

$$R_\theta = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}, \qquad (4)$$

and

$$R_\theta^{-1} = R_\theta^{T} = R_{-\theta}. \qquad (5)$$

Subsequently, curvelets at scales $2^{-j}$ are extracted by rotating and displacing the mother curvelet $\varphi_j(x)$. The rotation angles $\theta_l$ and the translation parameter sequence $k$ are defined by $\theta_l = 2\pi \cdot 2^{-\lfloor j/2 \rfloor} \cdot l$, with $l = 1, 2, 3, \ldots$ such that $0 \le \theta_l < 2\pi$, and $k = (k_1, k_2) \in \mathbb{Z}^2$. For each scale $j$ and angle $l$, the product of the support $U_{j,l}$ and the Fourier coefficients is wrapped around the origin, and an inverse 2D FFT is performed to synthesize the coefficients $c_i^D(j,l,k)$. More details on curvelets, the admissibility criteria for the support windows, and CurveLab can be found in [13].

3.2. Multifocus fusion using curvelets

Input images are acquired from different focal volumes in a given 3-D scene. We abuse notation slightly and refer to the curvelet coefficients $c_i^D(j,l,k)$ as $c_i(j,l,k)$ for easy reference. The indices $i$, $j$, $l$, $k$ refer to the image number in the stack, the scale (an integer increasing from coarsest to finest scale), the orientation of the polar wedge (traversing the frequency plane in a clockwise sweep), and the position, respectively.
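As a quick numerical sanity check of the rotation geometry in Eqs. (4) and (5) (an illustration we add here, not part of the original text), the following sketch verifies that $R_\theta^{-1} = R_\theta^{T} = R_{-\theta}$:

```python
import numpy as np

def rotation(theta):
    """Planar rotation matrix R_theta as in Eq. (4)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s],
                     [-s, c]])

theta = 0.7  # an arbitrary angle in radians
R = rotation(theta)

# Eq. (5): the inverse equals the transpose, which equals R_{-theta}.
assert np.allclose(np.linalg.inv(R), R.T)
assert np.allclose(R.T, rotation(-theta))
```

Orthogonality of $R_\theta$ is what allows positions $x_k^{(j,l)}$ to be laid out on a sheared grid aligned with each wedge.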



The central idea of our fusion scheme is to emphasize focal information by segregating and fusing it in the frequency domain. Fusion is performed as follows:

(1) A stack of N multifocal images is acquired from different focal volumes in a given 3-D scene.

(2) Registration is performed as necessary using a viable method. In our method, we assume co-registered input images.

(3) Each input image, $f_i$, is analyzed, and two sets of curvelet coefficients are generated:
    a. medial coefficients, $\mu_i(j,l,k) = c_i(j,l,k) \;\; \forall\, j \le j_0$, and
    b. peripheral coefficients, $\rho_i(j,l,k) = c_i(j,l,k) \;\; \forall\, j > j_0$.

(4) The peripheral coefficients hold the necessary information pertaining to higher-frequency content such as, but not limited to, curves and lines. The medial coefficients hold information on the trend of the image. Fusion of the curvelet coefficients is performed as follows (a code sketch of steps (3)-(5) appears after Eq. (8)):

$$\rho_F(j,l,k) = \rho_i(j,l,k) \quad \text{such that} \quad \rho_i(j,l,k) \ge \rho_m(j,l,k), \qquad (6)$$

$$\mu_F(j,l,k) = \frac{1}{N}\sum_{i=1}^{N} \mu_i(j,l,k), \qquad (7)$$

$$\forall\, i \in \{1, 2, \ldots, N\}, \quad \forall\, m \in \{1, 2, \ldots, N\}.$$

(5) The fused coefficients,

$$C_F(j,l,k) = \begin{cases} \rho_F(j,l,k), & j > j_0 \\ \mu_F(j,l,k), & j \le j_0 \end{cases} \qquad (8)$$

are subjected to the inverse curvelet transform and the fused image F is obtained.
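To make the procedure concrete, the following is a minimal sketch of steps (3)-(5). The functions fdct and ifdct are hypothetical placeholders for a forward/inverse curvelet transform pair (e.g., a wrapper around a library such as CurveLab); the nested coeffs[j][l] layout follows the (j, l, k) indexing above, and we read the maximum selection of Eq. (6) as a comparison of coefficient magnitudes, which is one common interpretation of such rules:

```python
import numpy as np

def fuse_multifocus(images, fdct, ifdct, j0=1):
    """Sketch of direct curvelet fusion per Eqs. (6)-(8).

    images : list of co-registered 2-D arrays (the multifocal stack)
    fdct   : hypothetical forward curvelet transform, image -> coeffs[j][l]
    ifdct  : hypothetical inverse transform taking the same nested layout
    j0     : scale threshold separating medial (j <= j0) from
             peripheral (j > j0) coefficients
    """
    stacks = [fdct(f) for f in images]          # step (3): analyze each image
    fused = []
    for j in range(len(stacks[0])):
        fused_scale = []
        for l in range(len(stacks[0][j])):
            coeffs = np.stack([s[j][l] for s in stacks])  # shape: N x (k1, k2)
            if j <= j0:
                # Eq. (7): medial coefficients are averaged over the stack.
                fused_scale.append(coeffs.mean(axis=0))
            else:
                # Eq. (6): at each position k, keep the coefficient with the
                # largest magnitude across the N input images.
                idx = np.abs(coeffs).argmax(axis=0)
                fused_scale.append(np.take_along_axis(
                    coeffs, idx[None, ...], axis=0)[0])
        fused.append(fused_scale)
    return ifdct(fused)                          # step (5): synthesize F
```

In the experiments of Section 4, j0 = 1 is used, so only the coarsest scales are averaged while all finer wedges are max-selected.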

4. EXPERIMENTAL RESULTS

The mechanics of the imaging system, the ambient illumination, and the complexity of the 3-D scene being imaged as a multifocal stack influence the degree of finesse required to perform multifocus fusion. If there are no focal overlaps in the stack, the task becomes relatively easy. In our experiments, we have tested our method on datasets from various applications with varying degrees of scene complexity. We have compared our method with the MSD fusion method that cascades the use of curvelets and wavelets in fusion [11]. In Figure 2, we present fusion results on the deceptively simple 'fence' dataset. In Figure 2(a), a fence imaged at an angle to the camera plane is in focus. In Figure 2(b), the background, made up of vehicles and vegetation, is in focus. There is a continuum of objects under varying degrees of blur. Tiling-based methods have difficulty fusing such datasets, as block selection is a difficult problem. In Figure 2(c), fusion by the cascaded MSD-based method [11] is presented. A good rendition of the fence and the vehicles in the background is seen, along with ringing and blurring effects. In Figure 2(d), an image fused using our method is shown, with j0 = 1. Upon close examination, a sharper fused scene is visible, which is validated by objective testing as well.

(a) Input image 1 (b) Input image 2

(c) MSD method due to [11] (d) Proposed method

Figure 2. Multifocus fusion on the 'fence' dataset: (a-b) input images with different focal planes in focus, (c) image fused using the cascaded MSD fusion method [11], and (d) image fused by our method. In (d), in addition to the background being visible, a sharper fence is seen in comparison to (c).

A macroscopic 'thumbscrew' dataset is shown in Figure 3 in a similar manner. In this example, a few images out of a stack of macroscopic images are shown in Figures 3(a-d). The macroscopic data are acquired under an extremely narrow DOF with heavy focal overlap between input images. In applications such as microscopy and nanoscopy, the geometric pixel correspondence between images in a stack is near optimal due to the mechanics of the imaging system. The image fused using cascaded MSD fusion [11] is shown in Figure 3(e). The full length of the macroscopic thumbscrew is visible in the scene, but the image fused by our method in Figure 3(f), with j0 = 1, has fewer blurring and ringing effects. Our method is completely data driven and application independent.

The merits of these fusion methods are difficult to evaluate by human inspection. To validate our experiments, objective evaluations of the fused images are performed. The fused outputs are evaluated for overall sharpness using various sharpness measures, namely the Tenengrad (TG), adaptive Tenengrad (ATG), Laplacian (LP), adaptive Laplacian (ALP), sum of modified differences (SMD), and sum of modified Laplacian (SML) [14]. These measures have been found well suited to sharpness evaluation, as indicated by Yao et al. [14] and Krotkov [15]. The objective results are consistent with visual inspection and concur that our method produces images with improved overall sharpness. The results of the objective testing are summarized in Table 1. The direct employment of curvelets in the fusion of multifocal content is also computationally less demanding than cascading curvelet analysis with wavelet-based fusion.
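As an illustration of this kind of objective evaluation, here is a minimal sketch of a Tenengrad-style sharpness score (our own summary of the commonly used Sobel gradient-energy definition; the exact variants and normalizations used in [14] may differ):

```python
import numpy as np
from scipy import ndimage

def tenengrad(image, threshold=0.0):
    """Tenengrad sharpness: mean squared Sobel gradient magnitude,
    counting only pixels whose gradient exceeds a threshold."""
    img = image.astype(float)
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    g2 = gx**2 + gy**2
    mask = g2 > threshold**2
    return g2[mask].mean() if mask.any() else 0.0

# A well-fused image should score higher than any single narrow-DOF
# input, e.g. tenengrad(fused) > tenengrad(input_1)  (names illustrative).
```

Higher values indicate stronger edge energy and hence, for this task, better-preserved focal detail.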



(a) Input image 1 (b) Input image 2

(c) Input image 3 (d) Input image 4

(e) MSD method due to [11] (f) Proposed method

Figure 3. Multifocus fusion on the 'thumbscrew' dataset: (a-d) input images with different parts of the thumbscrew in focus, (e) image fused using cascaded MSD fusion [11], and (f) image fused by our method. Note the increased sharpness (highlighted by boxes) achieved by the proposed method.

Table 1: Comparison of overall sharpness of images fused by different methods using various metrics

           Cascaded MSD Fusion [11]     Direct Curvelet Method
           Fence       Thumbscrew       Fence       Thumbscrew
    SMD    0.935       0.428            1.140       0.479
    SML    2.030       1.090            2.190       1.130
    TG     3.730       2.800            3.90        2.930
    ATG    4.860       3.900            5.170       4.000
    LP     1.320       0.998            1.60        1.010
    ALP    1.490       1.010            1.580       1.030

5. CONCLUSION

Here, a method to extend the depth of field of imaging systems with narrow DOF has been presented. Our method capitalizes on fusing information from the different polar 'wedges' of the frequency content in a stack of images. Fusion was performed directly on medial and peripheral curvelets to obtain a fused image that combines focal information from different focal volumes while retaining the visual verisimilitude of the scene. We demonstrated multifocus fusion on datasets from different applications. Illustrative examples were presented along with comparisons. Our direct curvelet fusion method exhibits improved global sharpness in all our experiments.

6. ACKNOWLEDGEMENTS

This research was supported by the DOE URPR under grant DOE-DE-FG02-86NE37968.

7. REFERENCES

[1] S. Li and B. Yang, "Multifocus image fusion using region segmentation and spatial frequency," Image and Vision Computing, vol. 26, no. 7, pp. 971-979, 2008.

[2] H. Zhao, Q. Li, and H. Feng, "Multifocus color image fusion in the HIS space using the sum-modified-Laplacian and a coarse edge map," Image and Vision Computing, vol. 26, pp. 1285-1295, 2008.

[3] I. De and B. Chanda, "A simple and efficient algorithm for multifocus image fusion using morphological wavelets," Signal Processing, vol. 86, pp. 924-936, 2006.

[4] P.-L. Lin and P.-Y. Huang, "Fusion methods based on dynamic-segmented morphological wavelet or cut and paste for multifocus images," Signal Processing, vol. 88, pp. 1511-1527, 2008.

[5] S. Li, J. T. Kwok, and Y. Wang, "Multifocus image fusion using artificial neural networks," in Proc. International Conference on Machine Learning and Cybernetics, 2005, pp. 985-997.

[6] D. Fedorov, B. Sumengen, and B. S. Manjunath, "Multi-focus imaging using local focus estimation and mosaicking," in Proc. IEEE ICIP, 2006, pp. 2093-2096.

[7] Z. W. Liao, S. X. Hu, and Y. Y. Tang, "Region-based multi-focus image fusion based on Hough transform and wavelet domain HMM," in Proc. International Conference on Machine Learning and Cybernetics, vol. 9, 2005, pp. 5490-5495.

[8] A. Goshtasby, "Fusion of multi-focus images to maximize image information," in Proc. SPIE Defense and Security Symposium, Orlando, FL, 2006, pp. 17-21.

[9] J. J. Lewis, R. J. O'Callaghan, S. G. Nikolov, D. R. Bull, and C. N. Canagarajah, "Region-based image fusion using complex wavelets," in Proc. International Conference on Information Fusion, 2004, pp. 555-562.

[10] Z. Zhang and R. S. Blum, "A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application," Proceedings of the IEEE, vol. 87, pp. 1315-1326, 1999.

[11] S. Li and B. Yang, "Multifocus image fusion by combining curvelet and wavelet transform," Pattern Recognition Letters, vol. 29, pp. 1295-1301, 2008.

[12] J.-L. Starck, D. L. Donoho, and E. J. Candès, "Very high quality image restoration by combining wavelets and curvelets," in Proc. SPIE, vol. 4478, pp. 9-19, 2001.

[13] E. J. Candès, L. Demanet, D. L. Donoho, and L. Ying, "Fast discrete curvelet transforms," Multiscale Modeling & Simulation, vol. 5, no. 3, pp. 861-899, 2006.

[14] Y. Yao, B. Abidi, N. Doggaz, and M. Abidi, "Evaluation of sharpness measures and search algorithms for the auto-focusing of high-magnification images," in Proc. SPIE Defense and Security Symposium, Orlando, FL, Apr. 2006.

[15] E. P. Krotkov, Active Computer Vision by Cooperative Focusing, Springer-Verlag, 1989.
