
Automatic Image Haze Removal Based on Luminance Component

Fan Guo, Zixing Cai, Bin Xie, Jin Tang
School of Information Science and Engineering, Central South University, Changsha, China

Abstract—In this paper, we propose a simple but effective method for visibility restoration from a single image. The main advantage of the proposed algorithm is that no user interaction is needed, which allows it to be applied in practical applications such as surveillance and intelligent vehicles. Another advantage over other methods is its speed, since the cost of obtaining the transmission map is greatly reduced by applying the Retinex algorithm to the luminance component. A comparative study and quantitative evaluation against the main present-day methods demonstrates that similar or better quality results are obtained.

Keywords—haze removal; YCbCr; luminance component; Retinex; transmission map

I. INTRODUCTION

Haze removal is highly desired in computer vision applications, since removing haze can significantly increase the visibility of the scene. Under hazy conditions, scene visibility is decreased and features such as the contrast and color of image objects are attenuated, so the haze effect on the scene objects needs to be eliminated.

Existing image haze removal methods can be classified into two major types: haze image enhancement based on image processing, and image restoration based on a physical model. Image enhancement methods can effectively increase contrast and improve the visual effect, but may cause some loss of the emphasized information. Classic methods of this kind are histogram equalization, the homomorphic filter [1], the wavelet transform [2], the Retinex algorithm [3], luminance and contrast transforms, and so on. Image restoration, in contrast, studies the degradation process of the haze image and establishes a degradation model to obtain the undisturbed original image or its best estimate, so that the quality of the haze image can be improved. Compared with image enhancement, this kind of method gives a more natural result and preserves the image information, so the ideal restoration effect can be obtained.

The degradation model widely used in image restoration is the haze image model proposed by McCartney [4], and a number of approaches based on it have been proposed. Very recently and for the first time, three approaches [5, 6, 7] were proposed that work from a single image with a stronger prior or assumption. Fattal [5] estimates the albedo of the scene and then infers the medium transmission. This approach cannot handle heavy haze well and may fail when its assumption is broken. Tan [6] removes the haze by maximizing the local contrast of the restored image, but his approach may not always achieve equally good results on very saturated scenes. In comparison, He's approach [7] is the most attractive one. It is based on the dark channel prior; using this prior, the thickness of the haze can be directly estimated and a high-quality haze-free image can be obtained. However, the disadvantage of these three algorithms is their processing time: 35 seconds, 5 to 7 minutes, and 10 to 20 seconds on a 600*400 image, respectively.

Here, we propose an automated dehazing approach that is much faster than [5, 6, 7]. It is based on the observation that when the haze image is transformed from RGB to the YCbCr color space, the Multi-Scale Retinex (MSR) algorithm is applied to the luminance component, and a subtraction operation and a median filter are then applied to the resulting map, a transmission map whose function is similar to that of He's approach is obtained. Our method is much faster because the cost of obtaining the transmission map is cut down, and it is able to achieve equally good and sometimes even better results.

II. BACKGROUND

A. Haze Image Model and Dark Channel Prior

The haze image model (also called the image degradation model) was proposed by McCartney [4] in 1976 and consists of a direct attenuation term and an airlight term. Direct attenuation describes the scene radiance and its decay in the medium, while airlight results from previously scattered light and leads to a shift of the scene color. The formation of a haze image is modeled as follows:

$I(x) = J(x)\,t(x) + A\,(1 - t(x))$    (1)

where I is the observed intensity, i.e., the input haze image; J is the scene radiance, i.e., the restored haze-free image; A is the global atmospheric light, which can be obtained using the dark channel prior; and t is the medium transmission, which describes the portion of the light that is not scattered and reaches the camera. Theoretically, the goal of haze removal is to recover J, A, and t from I.
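As a minimal illustration of Eq. (1), the Python sketch below synthesizes a hazy observation from an assumed scene radiance J, transmission map t, and atmospheric light A; the array shapes, the constant transmission, and the value of A are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def apply_haze_model(J, t, A):
    """Eq. (1): I = J * t + A * (1 - t), applied per pixel.

    J : HxWx3 scene radiance in [0, 1]
    t : HxW transmission map in [0, 1]
    A : length-3 atmospheric light (one value per color channel)
    """
    t3 = t[..., np.newaxis]                      # broadcast t over the color channels
    return J * t3 + np.asarray(A) * (1.0 - t3)

# Illustrative usage with synthetic data (assumed, not from the paper).
J = np.random.rand(200, 200, 3)                  # a hypothetical haze-free scene
t = np.full((200, 200), 0.6)                     # constant transmission, for illustration only
A = [0.9, 0.9, 0.9]                              # bright, slightly gray atmospheric light
I = apply_haze_model(J, t, A)                    # hazy observation; dehazing inverts this step
```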

The dark channel prior proposed in He's approach [7] is a kind of statistic of haze-free outdoor images. It is based on the key observation that most local patches in haze-free outdoor images contain some pixels whose intensity is very low in at least one color channel.


Formally, for an image J, we define:

$J^{dark}(x) = \min_{c \in \{r, g, b\}} \Big( \min_{y \in \Omega(x)} J^{c}(y) \Big)$    (2)

where $J^{c}$ is a color channel of J and $\Omega(x)$ is a local patch centered at x. We call $J^{dark}$ the dark channel of J, and we call the above statistical observation the dark channel prior. Using this prior together with the haze imaging model, the value of the atmospheric light A can be directly estimated and a high-quality haze-free image can be recovered. In our approach, we also use the haze image model and the dark channel prior to remove the haze effect.
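A minimal sketch of the dark channel in Eq. (2), assuming an RGB image given as an HxWx3 float array in [0, 1] and a square patch Ω(x); the default patch size of 15 pixels is a common choice in the dark-channel literature and is an assumption, not a value fixed by this paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch_size=15):
    """Eq. (2): a per-pixel minimum over the color channels, followed by a
    local minimum over the square patch Omega(x) centered at each pixel."""
    min_over_channels = image.min(axis=2)                      # min over c in {r, g, b}
    return minimum_filter(min_over_channels, size=patch_size)  # min over y in Omega(x)
```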

B. Retinex Used on the Luminance Component

Retinex theory deals with compensating for illumination effects in images. The primary goal is to decompose a given image S into two images, the reflectance image R and the illumination image L, such that, at each point (x, y) in the image domain, S(x, y) = R(x, y) * L(x, y). The Retinex methodology was motivated by Land's landmark research on the human visual system [8]. The benefits of such a decomposition include the possibility of removing illumination effects, enhancing image edges, and correcting the colors in images by removing illumination-induced color shifts.

The original idea of our approach lies in the process of obtaining the transmission map. According to He's approach, the transmission map is in fact an alpha map with clear edge outlines and the depth layering of the scene objects, and it is used to estimate the thickness of the haze. The MSR algorithm, meanwhile, covers features at multiple scales by adjusting the scale parameters: it combines the dynamic range compression and edge detail enhancement of the small scale with the color balance of the medium and large scales, so that image sharpness, dynamic range compression of the gray levels, contrast enhancement, and color balance can be realized at the same time. Applying MSR to the luminance component of the haze image in the YCbCr color space, followed by linear transformations and a median filter operation, yields the transmission map of our approach, so the transmission map is acquired automatically and quickly.

III. IMAGE HAZE REMOVAL METHOD BASED ON LUMINANCE COMPONENT

A. Estimating the Transmission Based on the Luminance Component

Note that the haze imaging equation (1) has a form similar to the image matting equation; a transmission map is exactly an alpha map. Therefore, He's approach applies a soft matting algorithm [9] to refine the transmission. The specific steps of our approach to estimate the transmission map based on the luminance component are as follows.

First, we choose the transformed color space. For an RGB image, in order to separate its luminance component from the chroma components, the color space has to be transformed to HSI, YUV, etc., so that the following steps can be carried out on the luminance component only. The most widely used space is HSI. However, the transformation between HSI and RGB requires trigonometric function calculations, which cost much time and computation. We therefore choose YCbCr: the transformation between YCbCr and RGB involves only simple algebraic operations, so it needs little computation and is relatively fast.
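Only the Y (luminance) plane of YCbCr is needed for the following steps, and it is a linear combination of R, G and B. The sketch below uses the BT.601 luma weights; the paper does not state which YCbCr variant it adopts, so the exact coefficients are an assumption.

```python
import numpy as np

def luminance_bt601(rgb):
    """Y plane of YCbCr (BT.601 weights) for an HxWx3 float image in [0, 1].
    Only simple algebraic operations are involved, which is the reason the
    paper prefers YCbCr over HSI for speed."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b
```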

Then, we apply the MSR algorithm to the luminance component; the process can be expressed as follows:

$R_{m}(x, y) = \sum_{n=1}^{N} w_{n} \big( \log Y(x, y) - \log [ F_{n}(x, y) * Y(x, y) ] \big)$    (3)

where $R_{m}(x, y)$ is the output of applying the MSR algorithm to the luminance component image, N is the number of scales (usually three), $w_{n}$ is the weight corresponding to each scale, Y(x, y) is the luminance image, and $F_{n}(x, y)$ is the n-th surround function, whose form is Gaussian; together the surround functions should cover all the scales. Experiments show that for most images the scales should be chosen with a small, a medium, and a large value. The criterion for choosing the scales is to keep the color of the luminance component image balanced while enhancing the contrast of the component image. The weights $w_{n}$ should sum to one. The new luminance image obtained in this step is the basis for obtaining a transmission map whose function is similar to the one obtained by the soft matting algorithm in He's approach.
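A sketch of Eq. (3) on the luminance plane, with the Gaussian surround functions F_n implemented by scipy's gaussian_filter. The three scales and the equal weights shown here are common MSR defaults and are assumptions: the paper only requires one small, one medium and one large scale, with weights summing to one.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(Y, sigmas=(15, 80, 250), weights=None, eps=1e-6):
    """Eq. (3): R_m = sum_n w_n * (log Y - log(F_n * Y)), where F_n is a
    Gaussian surround of scale sigma_n and '*' denotes convolution."""
    if weights is None:
        weights = [1.0 / len(sigmas)] * len(sigmas)   # equal weights that sum to one
    log_Y = np.log(Y + eps)                           # eps avoids log(0) on dark pixels
    R_m = np.zeros_like(Y, dtype=float)
    for w, sigma in zip(weights, sigmas):
        surround = gaussian_filter(Y, sigma=sigma)    # F_n(x, y) * Y(x, y)
        R_m += w * (log_Y - np.log(surround + eps))
    return R_m
```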

Note that there are differences between this new luminance image and the alpha map obtained by the soft matting algorithm; we want both images to have the same effect and function. Our solution is to subtract the value of each pixel from a transmission adjusting parameter C. Specifically, there are two steps to adjust the transmission map. The first is determining the value of the atmospheric light A: we pick the top 0.1% brightest pixels in the dark channel, and among these pixels the one with the highest luminance in the input image I is selected as the atmospheric light. The second step is determining the value of C once the value of A is fixed. The value of C is application-based; we fix it to 1.08 for all results reported in this paper.
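A sketch of the atmospheric light step described above: the top 0.1% brightest pixels of the dark channel are collected, and among them the input pixel with the highest luminance is returned as A. The helper takes a precomputed dark channel (for example from the dark_channel sketch earlier); the BT.601 luma weights used to rank the candidates are an assumption.

```python
import numpy as np

def estimate_atmospheric_light(I, dark, top_fraction=0.001):
    """Pick the top 0.1% brightest dark-channel pixels, then return the color
    of the input pixel with the highest luminance among those candidates.

    I    : HxWx3 haze image in [0, 1]
    dark : HxW dark channel of I
    """
    num = max(1, int(dark.size * top_fraction))
    brightest = np.argsort(dark.ravel())[-num:]          # indices of the brightest dark-channel pixels
    candidates = I.reshape(-1, 3)[brightest]
    luma = candidates @ np.array([0.299, 0.587, 0.114])  # BT.601 luminance of each candidate
    return candidates[np.argmax(luma)]                   # atmospheric light A, one value per channel
```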

B. Median Filter and Recovering the Scene Radiance

According to the haze image model, we can recover the scene radiance with the transmission map. Because the direct attenuation term J(x)t(x) can be very close to zero, the transmission t(x) is restricted to a lower bound $t_{0}$; a typical value of $t_{0}$ is 0.1. The final scene radiance J(x) is recovered by:

$J(x) = \dfrac{I(x) - A}{\max(t(x),\, t_{0})} + A$    (4)
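A direct sketch of Eq. (4), with the transmission clamped from below by t0 = 0.1 as stated above; the final clipping to [0, 1] is an assumption for display purposes.

```python
import numpy as np

def recover_scene_radiance(I, t, A, t0=0.1):
    """Eq. (4): J = (I - A) / max(t, t0) + A, applied per pixel.

    I : HxWx3 haze image, t : HxW transmission map, A : length-3 atmospheric light
    """
    t_clamped = np.maximum(t, t0)[..., np.newaxis]   # lower bound keeps the division stable
    J = (I - np.asarray(A)) / t_clamped + np.asarray(A)
    return np.clip(J, 0.0, 1.0)                      # keep the result in displayable range
```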


From the above expression we can see that the closer the outline of the scene objects in the new luminance image is to the input haze image, the easier it is to lose details in the restored image. As Fig. 1 shows, if the transmission map (the green continuous curve) is similar to the input image (the black continuous curve), the result after the division operation is a flat curve (the red line). On the contrary, if the green curve is flatter, the red curve is more variable, which means the dehazed result contains more details. Therefore, in order to flatten the outline of the scene objects in the luminance image and make the haze removal result contain more details, we apply a median filter to the luminance image. Fig. 2 shows an example of the transmission map obtained without and with the median filter, together with the respective restored haze-free results. Experimental results also show that if the local patch is chosen too small, the scene objects in the haze removal image seem to lack three-dimensional appearance, while if the local patch is too large, the color of the scene objects in the restored image seems too dark. The haze removal effect is best when the patch size is set to 5*5 for a 200*200 image. In this way, the new luminance image after median filtering can estimate the thickness of the haze.
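The paper describes the transmission map as the result of subtracting from the adjustment parameter C (fixed to 1.08) and median filtering, but it does not spell out how the MSR output is normalized beforehand. The sketch below is therefore only one possible reading, under the assumption that R_m is first rescaled to [0, 1]; the 5x5 window is the size reported as best for a 200x200 image.

```python
import numpy as np
from scipy.ndimage import median_filter

def transmission_from_msr(R_m, C=1.08, patch_size=5):
    """One possible reading of the paper's transmission step (an assumption,
    not a verbatim reproduction): t = C - normalized MSR output, followed by
    a median filter that flattens object outlines so the dehazed image keeps detail."""
    R_norm = (R_m - R_m.min()) / (R_m.max() - R_m.min() + 1e-6)  # rescale the MSR output to [0, 1]
    t = C - R_norm                                               # subtraction with the adjusting parameter C
    t = median_filter(t, size=patch_size)                        # 5x5 window, as reported for 200x200 images
    return np.clip(t, 0.0, 1.0)                                  # keep t a valid transmission
```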

Since we now know the input haze image I(x), the transmission map t(x), and the atmospheric light A, we can substitute these values into (4) to obtain the final restored haze-free image J(x). Fig. 3(b) is the transmission map estimated from the color image in Fig. 3(a) using our approach, and Fig. 3(c) is the final recovered scene radiance for Fig. 3(a).

Figure 1. The input haze image is shown as the black continuous curve, the transmission map as the green continuous curve, and the restored haze-free result as the red line.

Figure 2. Median filter result. (a) Restored haze-free result without the filter. (b) Transmission map obtained without the filter. (c) Restored haze-free result with the filter. (d) Transmission map obtained with the filter.

Figure 3. Haze removal result. (a) Input color image. (b) Transmission map obtained with the luminance-component-based method. (c) Final dehazed image using our method.

IV. EXPERIMENTAL RESULTS AND ANALYSIS

A. Qualitative Comparison

In our experiments, we run the proposed method in MATLAB on a PC with a 3.00 GHz Intel Pentium Dual-Core processor. Fig. 4 shows a comparison between the results obtained by Fattal [5] and by our algorithm. It can be seen that our approach outperforms Fattal's in situations of dense haze. Fattal's method is based on statistics and requires sufficient color information and variance; if the haze is dense, the color is faint and the variance is not high enough for his method to reliably estimate the transmission. Next, we compare our approach with Tan's work [6] in Fig. 5. The colors of his results are often over-saturated, since his algorithm is not physically based and may underestimate the transmission. Our method generates comparable results without any halo artifacts. We also compare our method with He's very recent work [7] in Fig. 6. The overall result of our approach is approximately the same as He's. Compared with the input haze image, the contrast and clarity of the images are enhanced significantly. Fig. 7 allows a comparison of our results with three state-of-the-art visibility restoration algorithms: Fattal [5], Tan [6] and He [7]. Notice that the results obtained with our algorithm appear visually close to those obtained by He, with fewer halo artifacts than Tan.

Figure 4. Comparison with Fattal's work [5]. Left: input image. Middle: Fattal's result. Right: our result.

Figure 5. Comparison with Tan's work [6]. Left: input image. Middle: Tan's result. Right: our result.

Figure 6. Comparison with He's work [7]. Left: input image. Middle: He's result. Right: our result.

Figure 7. From left to right: the input image and the results obtained by Fattal [5], Tan [6], He [7] and our algorithm.


B. Quantitative Evaluation

To quantitatively assess and rate these four methods, we first consider the processing time. Our method takes about 10-12 seconds to process a 600*400 pixel image in the MATLAB programming environment. From Tab. I we can see that our method is about two times faster than He's method, about three times faster than Fattal's, and far faster than Tan's.

Then, we use the visible edges segmentation method proposed in [10]. Here we convert the color images to gray level and use three indicators, e, r and σ, to compare two gray-level images: the input image and the restored image. The visible edges in the image before and after restoration are selected by a 5% contrast threshold, which allows computing the rate e of edges newly visible after restoration. Then the ratio r of the average gradient after and before restoration is computed; this indicator estimates the average visibility enhancement obtained by the restoration algorithm. Finally, the percentage σ of pixels that become completely black after restoration is computed.
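A simplified sketch of two of the indicators: the gradient ratio r, computed here as the ratio of mean gradient magnitudes of the restored and original gray-level images (the full definition in [10] restricts the average to visible edges), and the percentage σ of pixels that become completely black after restoration. Both simplifications are assumptions made for illustration.

```python
import numpy as np

def gradient_ratio(gray_before, gray_after):
    """Simplified r: mean gradient magnitude after restoration divided by the
    mean gradient magnitude before (not restricted to visible edges as in [10])."""
    def mean_gradient(img):
        gy, gx = np.gradient(img.astype(float))
        return np.hypot(gx, gy).mean()
    return mean_gradient(gray_after) / (mean_gradient(gray_before) + 1e-6)

def black_pixel_percentage(gray_before, gray_after):
    """Sigma: percentage of pixels that are completely black after restoration
    but were not black in the input (gray levels assumed in [0, 1])."""
    became_black = (gray_after <= 0.0) & (gray_before > 0.0)
    return 100.0 * became_black.mean()
```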

The indicators e, r and σ are evaluated for Fattal [5], Tan [6], He [7] and our approach on four images; see Tab. II, Tab. III and Tab. IV. For each approach, the aim is to increase the contrast without losing visual information, so good results are described by high values of e and r and low values of σ. From Tab. II we deduce that, depending on the image, Tan's algorithm produces more visible edges than He's, Fattal's and ours. From Tab. III we can order the four algorithms in decreasing order of average gradient ratio: Tan, ours, He and Fattal. This confirms our observations on Fig. 4, Fig. 5, Fig. 6 and Fig. 7, and one can notice that the contrast of Tan's results has probably been increased too strongly. Tab. IV gives the percentage of pixels that become completely black after restoration; compared to the others, our algorithm and He's give the smallest percentages.

TABLE I. COMPARISON OF ALGORITHMS' RUNNING TIME (s)

Figure               Fattal     Tan        He         Our
Fig. 4 (441*450)     29.8756    324.4260   15.8185    8.4680
Fig. 5 (835*557)     67.9213    746.7254   42.3780    29.7660
Fig. 6 (1000*327)    47.2354    249.2972   23.1539    12.6090
Fig. 7 (576*768)     64.9538    527.5627   32.1535    25.5160

TABLE II. RATE e OF NEW VISIBLE EDGES

e        Fattal   Tan    He     Our
Fig. 4   1.01     1.54   1.42   1.47
Fig. 5   7.21     7.62   7.36   7.40
Fig. 6   0.94     2.74   1.22   0.92
Fig. 7   0.23     1.71   0.80   1.78

TABLE III. RATIO r OF THE AVERAGE GRADIENT

r        Fattal   Tan    He     Our
Fig. 4   1.29     1.59   1.34   1.40
Fig. 5   2.18     2.32   2.24   2.30
Fig. 6   1.26     1.48   1.28   1.38
Fig. 7   1.09     1.40   1.27   1.37

TABLE IV. PERCENTAGE OF PIXELS WHICH BECOME COMPLETELY BLACK AFTER RESTORATION (%)

Figure   Fattal   Tan    He     Our
Fig. 4   1.11     1.29   0.82   0.92
Fig. 5   1.02     0.49   0.34   0.27
Fig. 6   0.88     0.36   0.17   0.25
Fig. 7   1.05     0.76   0.30   0.29

V. CONCLUSIONS

In this paper, we have proposed an automatic haze removal method based on the luminance component for single-image haze removal. The approach applies the MSR algorithm to the luminance component after a color space transform, and then performs a subtraction operation and a median filter on the component. Its main advantages are its speed and the fact that no user interaction is needed. The results show that it achieves as good or even better results compared to the main present-day algorithms, as illustrated in the experiments. The proposed method may be used to advantage as pre-processing in many systems, such as surveillance, topographical survey, intelligent vehicles, etc.

    REFERENCES

[1] Ming-Jung Seow, Vijayan K. Asari. Ratio rule and homomorphic filter for enhancement of digital colour image. Neurocomputing, 2006, 69(7):954-958.

[2] Fabrizio Russo. An Image Enhancement Technique Combining Sharpening and Noise Reduction. IEEE Transactions on Instrumentation and Measurement, 2002, 51(4):824-828.

[3] Dongbin Xu, Chuangbai Xiao. Color-preserving defog method for foggy or haze scenes. Proceedings of the 4th International Conference on Computer Vision Theory and Applications. IEEE Press, 2009: 69-73.

[4] McCartney E J. Optics of the Atmosphere: Scattering by Molecules and Particles. New York: John Wiley and Sons, 1976: 23-32.

[5] R. Fattal. Single image dehazing. ACM SIGGRAPH, 2008: 1-9.

[6] R. Tan. Visibility in bad weather from a single image. CVPR, 2008: 1-8.

[7] Kaiming He, Jian Sun, and Xiaoou Tang. Single Image Haze Removal Using Dark Channel Prior. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2009.

[8] E. H. Land. The Retinex Theory of Color Vision. Scientific American, Vol. 237, pages 108-128, 1977.

[9] A. Levin, D. Lischinski, and Y. Weiss. A closed form solution to natural image matting. CVPR, 1:61-68, 2006.

[10] N. Hautiere, J.-P. Tarel, D. Aubert, and E. Dumont. Blind contrast enhancement assessment by gradient ratioing at visible edges. Image Analysis & Stereology Journal, 27(2):87-95, June 2008.