
IET Image Processing

Research Article

Effective image enhancement techniques for fog-affected indoor and outdoor images

ISSN 1751-9659
Received on 5th October 2016
Revised 15th October 2017
Accepted on 29th October 2017
E-First on 8th January 2018
doi: 10.1049/iet-ipr.2016.0819
www.ietdl.org

Kyungil Kim¹, Soohyun Kim¹, Kyung-Soo Kim¹

¹Department of Mechanical Engineering, KAIST, Daejeon 305-701, Korea. E-mail: [email protected]

Abstract: Over the past decade, much research has been done to improve single fog images. However, most of this work has concentrated on outdoor environments, and little has been done for indoor environments. In this study, an effective method of removing fog from images both indoors and outdoors is presented. The new single-image enhancement approach is based on a mixture of the dark channel prior (DCP) and contrast limited adaptive histogram equalisation with discrete wavelet transform (CLAHE-DWT) algorithms. With the DCP algorithm using a modified transmission map, the authors obtained fast processing and a clean dehazed image without a refining process. The CLAHE and DWT methods improved the contrast and sharpness of images. Finally, an enhanced image was produced by fusing the CLAHE and DWT images. To demonstrate the effectiveness of the proposed method, the authors performed objective image quality assessments, image-matching tests, and computation-time comparisons. Through a variety of experiments on various indoor and outdoor images with fog, the proposed method was proven to be highly effective.

1 Introduction

Recently, the number of cameras installed in both indoor and outdoor settings has been increasing rapidly due to the development of digital technology. Traffic information collection devices, black boxes in cars, vehicle cameras, cameras on drones and unmanned vehicles, process-monitoring cameras, and CCTV inside and outside of buildings are just a few examples. It is important to be able to guarantee good-quality images from each of the cameras in these various environments; thus, image enhancement plays a very important role in many image-processing and vision applications. Many studies have been conducted on the basis of these quality demands, and many excellent results have been produced in the field of computer vision. However, studies on environmental factors such as fog, rain, snow, and dust remain few.

Fog and dust in particular reduce visible distance exponentially. The scattering and attenuation of light make the colours of surrounding objects appear similar, with very low saturation. It is difficult to distinguish between objects under such conditions because the boundaries between the background and the objects become obscured. In recognition of this difficulty, many studies on fog removal have been pursued. These can be broadly divided into methods that use contrast enhancement and those that use fog modelling. As represented by multi-scale retinex and contrast limited adaptive histogram equalisation (CLAHE), contrast enhancement techniques involve a contrast expansion process according to the contrast distribution of respective areas after dividing an image into one or more sections [1–4]. These methods, which avoid a complicated modelling process, perform well on some images but cause contrast overstretching or noise amplification problems in others due to their dependence on either global or local contrast distribution information. Lidong et al. [5] attempted to lighten a dark image using both the discrete wavelet transform (DWT) and CLAHE. They tried to solve the contrast overstretching and noise enhancement problems by applying CLAHE only to the low-frequency components of the image.

Another method of improving quality uses fog modelling to optically reconstruct the image by calculating parameters based on the fog itself. This approach involves estimating image transmission as associated with scene depth. Transmission can be expressed by an exponential function of distance and the amount of haze. Fog can then be removed by calculating the appropriate transmission for each object by estimating the distance between the camera and the object. Schechner et al., Narasimhan et al., and Kopf et al. [6–8] proposed fog removal using different polarising filters at the same location; multiple images acquired under a variety of weather conditions; or additional information, such as global positioning system data. However, these methods for obtaining the depth of fog incur difficulties such as the need for additional equipment or for repeatedly obtaining images from the same place under a variety of weather conditions. In practice, these methods are therefore of limited use.

Many studies on single-image fog removal techniques have been conducted in recent years. Fog removal from a single image is based on the relative attenuation estimated by an optimisation algorithm and on distance information estimated via statistical characteristics. Tan [9] proposed a method for removing fog by identifying affected areas and maximising local contrast. This approach is based on the assumptions that adjacent pixels will have similar depth values and that the image without fog will have a higher contrast than the fog image. Fattal [10] proposed a method for removing fog by estimating reflectance in the fog using a signal correlation, an independent component analysis technique, and a Markov random field. Tarel and Hautiere [11] used a method that estimates the effect of fog on each pixel instead of calculating the amount of attenuation for each pixel, in contrast to conventional methods.

He et al. [12] proposed the dark channel prior (DCP) approach, which has become the preferred method in recent years. They calculated airlight and transmission using a statistical analysis showing that the minimum of the RGB colour values (the dark channel) in fog-free areas of an image is close to zero over all local areas or windows not containing sky. This method is known to perform well on various types of fog-affected images. However, halo artefacts occur easily if there is a small contrast range, or fog may not be removed from the border areas of an image. A soft matting algorithm can be used to refine the transmission from the initially generated block form. However, this has the disadvantage of requiring a long calculation time and a large amount of memory due to the extremely large matrix it employs [13]. Therefore, studies to replace the existing soft matting algorithm are ongoing.

Yang et al. [14] proposed applying a histogram specification method instead of a transmission refinement algorithm after the DCP process. This has the advantage of improving the clarity of images with fog while requiring a relatively small amount of

IET Image Process., 2018, Vol. 12 Iss. 4, pp. 465-471 © The Institution of Engineering and Technology 2017


computation to reduce the halo phenomenon, but saturation distortion tends to occur during histogram processing in black and white areas. Tomasi and Manduchi [15] obtained transmission-preserving edge information using a cross-bilateral filter instead of a matting algorithm. With this approach, it was possible to obtain approximately the same resulting image without using the excessive memory required for matting. However, the benefit in terms of speed was less pronounced due to the need to apply the filter to each pixel. He et al. proposed a guided filter to reduce calculation time and allow for effective transmission refinement [16, 17]. Nonetheless, halo and block phenomena partially appeared in dehazed images, and the overall processing time of the filtering used to remove haze was still long.

In this study, we used a conventional DCP algorithm but modified the transmission component. We calculated the transmission for each pixel from the inverse of the dark channel value. As a result, we obtained a finer transmission than with the original method and skipped refining processes such as soft matting or other filtering algorithms. The algorithm's processing time is about three to four times faster than the conventional algorithm's. After DCP, we generated a contrast-improved image using CLAHE and a sharpened image using the DWT. Finally, we produced an enhanced image by fusing these two. To assess the enhanced image, an objective image quality assessment was performed. We also evaluated the proposed method using image matching and computation time.

The rest of this paper is structured as follows. Section 2 provides a brief review of fog image modelling. Section 3 describes the proposed fog image enhancement method in detail. Section 4 presents experimental results, and the final conclusions are given in Section 5.

2 Fog image modelling

2.1 Fog model

The behaviour of light entering a camera is defined by the basic laws of physics. First, light reflected off an object is attenuated by the atmosphere, including dust particles and water. Likewise, light coming from a light source is scattered towards the camera and causes a shift in colour. These phenomena increase with the distance between an object and the camera. Fog removal therefore requires detecting foggy areas in an original image and extracting them according to the degree of fog, which can be described as the process of removing the fog from fog-affected areas. An input image I(x) containing fog can be expressed by the following fog model equation:

I(x) = J(x)t(x) + A(1 − t(x)) (1)

where I(x) is the image obtained with the camera, J(x) represents the final image with the fog removed, A is a global fog value that describes the degree of fog influence over all pixels, t(x) is a transmission value indicating the degree of successful image capture by the camera without colour scattering, and x indicates a pixel. To estimate A, He et al. [12] selected the top 0.1% brightest pixels in the dark channel. Among these, the pixels with the highest intensity in the image I(x) are selected as the global fog value. The transmission t(x) is represented by an exponential decay function related to pixel depth, as shown in the following equation:

t(x) = e^{−βd(x)} (2)

where β is a scattering coefficient and d(x) represents the distance from the object to the camera. The more distant the object, the less information is transferred to the camera, because the degree of scattering increases and the colour of the fog itself appears more strongly as the fog value A is reflected more greatly. Therefore, fog removal aims to restore J(x) from I(x) using A and t(x). In (1), the first term on the right side represents direct attenuation, showing how directly the light reaches the camera lens, and the second term represents airlight, the light scattered by suspended particles in the air before reaching the camera lens.
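As a quick illustration of (1) and (2), the fog model can be simulated by synthesising a foggy image from a clear scene and a depth map. The sketch below is illustrative only; the random scene, the depth values, A = 0.9, and β = 1.0 are arbitrary choices, not values taken from the paper.

```python
import numpy as np

def synthesize_fog(J, depth, A=0.9, beta=1.0):
    """Apply the fog model I = J*t + A*(1 - t), with t = exp(-beta * d)."""
    t = np.exp(-beta * depth)[..., np.newaxis]  # per-pixel transmission, Eq. (2)
    return J * t + A * (1.0 - t)                # Eq. (1)

# toy example: a random "scene" whose depth increases linearly
J = np.random.rand(4, 4, 3)
depth = np.linspace(0.0, 5.0, 16).reshape(4, 4)
I = synthesize_fog(J, depth)
```

As expected from (2), pixels at depth zero are untouched (t = 1), while distant pixels tend towards the airlight value A.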

2.2 Dark channel prior

In (2), the estimated distance between the camera and the object is the most important factor in the transmission calculation. In this study, we used the DCP algorithm proposed by He et al. to estimate the distance between the camera and the object [12]. DCP relies on a statistical characteristic of normal images: at least one of the RGB colour values of pixels in a haze-free region converges to zero. J^dark(x), the dark channel of pixel x over a local area Ω(x), is given in the following equation:

J^dark(x) = min_{c ∈ {r, g, b}} min_{y ∈ Ω(x)} J_c(y) (3)

where Ω(x) represents a local area with centre x, and y is a pixel included in Ω(x). J_c(y) represents each channel value for pixel y. J^dark(x), the dark channel in a non-fog zone, converges to zero. Equation (4) can be obtained by substituting (3) into (1), since the fog value A is not 0 in the fog-affected image I(x):

t(x) = 1 − ω × min_{c ∈ {r, g, b}} min_{y ∈ Ω(x)} (I_c(y) / A_c) (4)

where ω is the fog weight, set to 0.8 in this experiment. However, the transmission obtained via (4) will not match the edges of the input image because it was calculated as a minimum value within a local area. Thus, a halo effect occurs, and a refining process for the transmission is necessary to improve the result. A dehazed image can be derived from (1) as follows:

J(x) = (I(x) − A) / max(t(x), t_0) + A (5)

where t_0 refers to the lower bound of the transmission and has a value in the range [0.03, 0.1]. Noise in the restored image can be suppressed by setting a threshold value t_0, since J(x) develops a very large degree of noise if the value of t(x) is too small.
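Equations (3)–(5) can be combined into a compact dehazing sketch. This is not the authors' implementation: the 15-pixel window for Ω(x), the use of scipy's `minimum_filter`, and the details of selecting the airlight among the top 0.1% dark-channel pixels are common choices assumed here for illustration.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(I, size=15):
    # min over RGB, then min over a local window Omega(x) -- Eq. (3)
    return minimum_filter(I.min(axis=2), size=size)

def estimate_airlight(I, dark):
    # take the top 0.1% brightest dark-channel pixels, then the
    # brightest of those in I as the global fog value A
    n = max(1, int(dark.size * 0.001))
    idx = np.argsort(dark.ravel())[-n:]
    flat = I.reshape(-1, 3)
    return flat[idx][np.argmax(flat[idx].sum(axis=1))]

def dehaze(I, omega=0.8, t0=0.1, size=15):
    A = estimate_airlight(I, dark_channel(I, size))
    # block-wise transmission -- Eq. (4)
    t = 1.0 - omega * minimum_filter((I / A).min(axis=2), size=size)
    t = np.maximum(t, t0)[..., np.newaxis]   # lower bound t0, Eq. (5)
    return (I - A) / t + A
```

The window-based minimum in `dark_channel` is exactly what produces the block-form transmission that the refining step (or the modified transmission of Section 3.1) is meant to fix.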

3 Method for enhancing fog images

3.1 Modified transmission

In this study, a DCP algorithm was first used to dehaze the image. With the conventional DCP algorithm, a restored image is liable to show a halo phenomenon caused by fog that was not removed from the boundary areas of the image and from areas with abrupt contrast. Therefore, a filtering process such as a soft matting algorithm is needed to refine the estimated transmission from its block form. This refining process adds to processing time and memory allocation, however. Therefore, the transmission calculation was modified from the conventional algorithm, as in the following equation:

t(x) = 1 − μ × min_{c ∈ {r, g, b}} I_c(x) (6)

where μ is a histogram range adjustment variable with a value between [0.6, 0.8]. In (4), transmission was calculated over a predetermined block. In (6), however, the minimum of the RGB values of each pixel in the fog-affected image I(x) is obtained, multiplied by μ, and subtracted from 1 to give the transmission. Here, the histogram distribution of the dark channel representing the minimum value for each pixel was analysed, and the histogram width [Lo, Hi] was adjusted to within 0.6. This method produces a new refined transmission directly, without an additional refinement process. This transmission map is more detailed than the original DCP transmission map and also produces results very similar to those obtained with conventional soft matting or other refining algorithms. Therefore, an additional refining process is not needed.
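A per-pixel version of (6) is easy to sketch. The histogram-width adjustment is only loosely specified in the text, so the percentile-based rescaling below is an assumption; μ = 0.7 is simply a value inside the stated [0.6, 0.8] range.

```python
import numpy as np

def modified_transmission(I, mu=0.7):
    """Per-pixel transmission t(x) = 1 - mu * min_c I_c(x), Eq. (6).
    No local window is used, so no block artefacts arise and no
    refinement step (soft matting, guided filter) is needed."""
    dark = I.min(axis=2)                       # per-pixel dark channel
    # assumed histogram-width adjustment: stretch [Lo, Hi] to [0, 1]
    lo, hi = np.percentile(dark, (1, 99))
    dark = np.clip((dark - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return 1.0 - mu * dark
```

Because the minimum is taken per pixel rather than per block, the resulting map follows image edges directly, which is why no separate refinement pass is required.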

Fig. 1 shows transmission maps and resulting images obtained using the proposed method and other conventional methods. Fig. 1a is the input image, and Fig. 1b is the transmission map and



resulting image obtained from the conventional DCP process. As shown in the figure, the transmission map appears in block form, and the halo phenomenon occurs partially at the edges of the trees. Fig. 1c is the refined transmission map and resulting image obtained using a guided filter. The refined transmission map is more precise, and the resulting image shows almost no halo phenomenon. Fig. 1d is the modified transmission map and resulting image obtained using the proposed method. As shown in the figure, the transmission map (Fig. 1d) is similar to the refined transmission map (Fig. 1c). The resulting image is almost the same, and the halo phenomenon barely appears. Thus, the proposed method can decrease the halo phenomenon with less calculation and improve the effects of fog removal; it is simple and effective.

3.2 CLAHE-DWT method for enhancing images

CLAHE is a classic local contrast enhancement technique that can enhance the local details of an image. However, it can also introduce over-enhancement and noise problems in some portions of an image. To overcome these issues, the CLAHE-DWT method, which combines CLAHE with the DWT, is proposed in this section.

The DWT has been widely used in image compression and has been applied broadly in image processing in recent years. The DWT decomposes an image into a multi-resolution sub-band structure through a two-channel filter bank. The outputs of this two-dimensional (2D) decomposition are an approximation of the input image and the vertical, horizontal, and diagonal detail sub-bands. In a DWT decomposition, the image is thus separated into low-frequency and high-frequency components: the approximation of the image serves as the low-frequency component, while the horizontal, vertical, and diagonal detail components are the high-frequency components. A sharpening filter is applied to the low-frequency image, and a noise-removing filter is applied to the high-frequency components. An improved image is then reconstructed using an inverse DWT.

The CLAHE-DWT method is applied to enhance the DCP result image. The procedure of the CLAHE-DWT algorithm is as follows. First, the RGB image is converted into an HSV image. Second, CLAHE is applied to the V channel of the HSV image, after which the image is converted back to RGB. Third, a 2D DWT is applied to the same V channel of the HSV image, after which the image is converted back to RGB. In this study, the DWT was carried out with a Daubechies wavelet. The noise level and threshold value are obtained by a global threshold method, and noise is removed by soft thresholding. Finally, through the CLAHE-DWT procedure, we obtain two enhanced images: a contrast-improved image and a sharpened image.

3.3 Proposed method

The method proposed in this study is summarised in Fig. 2. First, a fog-removed image is obtained by applying the DCP algorithm to an input image; at this stage, the dehazed image is produced using the modified transmission map. Second, a contrast-improved image and a sharpened image are obtained by applying the CLAHE-DWT method. Finally, an enhanced image is produced by fusing the two improved images: weights are imposed on the two images obtained via the CLAHE-DWT process, and they are fused to obtain one enhanced image. The CLAHE and DWT images are each assigned a weight between 0 and 1, and the two weights sum to 1.
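The fusion step amounts to a convex combination of the two CLAHE-DWT outputs. In the sketch below, w is the weight given to the CLAHE image (the paper leaves the per-image choice open; equal weighting, w = 0.5, corresponds to Fig. 3f):

```python
import numpy as np

def fuse(img_clahe, img_dwt, w=0.5):
    """Enhanced image as a weighted sum; the weights (w, 1 - w) sum to 1."""
    out = w * img_clahe.astype(np.float32) + (1.0 - w) * img_dwt.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Varying w trades contrast (CLAHE branch) against sharpness (DWT branch) in the final image.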

Fig. 3 shows the image processing steps of the proposed method. Fig. 3a is the input image, and Fig. 3b is the dehazed image produced using the DCP algorithm with the modified transmission. Figs. 3c and d show the images obtained from the CLAHE-DWT process. Figs. 3e and f are enhanced images obtained by fusing the CLAHE-DWT images. Fig. 3e shows a mixture in which the CLAHE image has a smaller weight than the DWT image, and Fig. 3f shows an image obtained by weighting each image with 0.5. Colour and visibility are improved compared with Figs. 3a and b.

3.4 Image quality assessment

Image quality assessment criteria can be divided into three categories: full-reference [18], reduced-reference [19], and no-reference [20, 26]. Full-reference and reduced-reference image quality assessments need a clear image corresponding to the foggy image to act as the reference image. This is hard to satisfy in real applications unless a synthetic foggy image is used. Thus, in the field of image defogging, no-reference assessment is widely used. To verify the performance of the proposed fog removal algorithm, its fog removal performance is compared with that of conventional fog removal algorithms through objective and subjective evaluation of various images.

First, a good image quality assessment method needs to compare the effects of different defogging algorithms on visibility, colour restoration, and image structure similarity. Various indexes can be used to compare the visibility of images. Among the available evaluation indicators, we chose the blind indicators e, r̄, and σ. The first indicator, e, denotes the rate of increase of visible edges after image defogging, as shown in the following equation:

e = (n_r − n_o) / n_o (7)

where n_o and n_r denote, respectively, the cardinal numbers of the sets of visible edges in the original image I_o and in the contrast-restored image I_r. The value of e evaluates the ability of the

Fig. 1  Transmission maps and resulting images obtained using various methods
(a) Input image, (b) Transmission map and resulting image using DCP, (c) Refined transmission map and resulting image using a guided filter, (d) Modified transmission map and resulting image using the proposed method



method to restore edges that were not visible in I_o but are visible in I_r. The higher the value of e, the larger the degree of visibility improvement.

The second indicator, r̄, uses the enhancement of the image gradients to represent the degree of restoration of the image's edge and texture information. It indicates the average visibility improvement, as shown in the following equation:

r̄ = exp((1/n_r) Σ_{i=1}^{n_r} log r_i),  r = VL_r / VL_o,  VL = ΔL_actual / ΔL_threshold (8)

where VL_r denotes the visibility level of the considered object in the restored image and VL_o the visibility level of the considered object in the original image. ΔL denotes the difference in luminance between target and background. ΔL_threshold denotes the threshold luminance difference between target and background and can be estimated using Adrian's empirical target visibility model [21]. The computation of r gives the gain in visibility level produced by a contrast restoration method. The higher the value of r̄, the larger the degree of visibility improvement.

The third indicator, σ, denotes the rate of saturated pixels after image defogging, as shown in the following equation:

σ = n_s / (dim_x × dim_y) (9)

where dim_x and dim_y denote, respectively, the width and height of the image, and n_s is the number of pixels that are saturated (black or white) after applying the contrast restoration but were not before. The smaller the value of σ, the better the result of the defogging algorithm.
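Indicator (9) counts newly saturated pixels, which can be sketched directly for 8-bit images (pure black = 0, pure white = 255):

```python
import numpy as np

def saturation_rate(orig, restored):
    """sigma = n_s / (dim_x * dim_y): fraction of pixels that become
    saturated (pure black or white) only after restoration."""
    sat = lambda im: (im == 0) | (im == 255)
    newly = sat(restored) & ~sat(orig)
    if newly.ndim == 3:                  # colour input: any channel saturated
        newly = newly.any(axis=2)
    return newly.sum() / (orig.shape[0] * orig.shape[1])
```

Pixels that were already saturated in the original are excluded, so σ isolates the clipping introduced by the defogging algorithm itself.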

Second, we used image feature points and processing time to assess the quality of both fog images and dehazed images. Image feature points belong to the image itself and contain its characteristics, expressing the whole image's information with a minimal set of variables. Using feature points to represent the whole image reduces the influence of the image's grey gradation and of frequency-shift noise, and it

can effectively keep the image's characteristics free from external interference. When tracking an object in an image or matching images, the most common methods extract feature points from the image. In this paper, we used the scale-invariant feature transform (SIFT) feature extraction operator. The SIFT algorithm utilises a robust local feature descriptor that is invariant to image scale and rotation and has been applied in areas such as object recognition, motion tracking, and 3D image reconstruction [22–24].

4 Results and analysis

In this section, some classical single-image defogging algorithms are compared with the proposed algorithm, as shown in Figs. 4–7. Figs. 4–6 are outdoor fog images; Fig. 7 is an indoor fog image. Fig. 4 is a foggy image of New York. Figs. 5 and 6 show foggy road images from the Foggy Road Image Database (FRIDA) [25]. FRIDA comprises 90 synthetic images of 18 urban road scenes, each 640 × 480 pixels. Each scene is associated with four foggy images, to which different types of fog are added: uniform fog (U080), heterogeneous fog (K080), cloudy fog (L080), and cloudy heterogeneous fog (M080). Figs. 5 and 6 show the No. 10 images from the FRIDA 1 sets. Fig. 7 is an image obtained in a small indoor experimental setup: an artificial fog generating device was used with projected artificial light, the distance between the camera and the object was about 2 m, and the image was obtained in very dense fog conditions with visibility of only about 2 m.

Fig. 4 shows a fog image and various dehazed images. Fig. 4a shows the original fog image. Figs. 4b–e show images dehazed by conventional single-image defogging methods, and Fig. 4f shows the result of the proposed method. Fig. 4c used NBPC (no-black-pixel constraint) rather than NBPCPA (no-black-pixel constraint combined with planar assumption) among Tarel's algorithms; in the case of NBPCPA, the results are good, but parameter adjustment is required for each image. Fig. 4d shows the result of applying the fast guided filter to the DCP of He's algorithm. As shown in Fig. 4f, the resulting image from the proposed method has clear borders and good colour, even for the buildings, compared with the other results. The red arrows point to areas with better enhancement.

Figs. 5 and 6 show the results of the conventional methods and the proposed method for the uniform fog image (U10) and the heterogeneous fog image (K10), respectively. Figs. 5a and 6a show the original fog images.

Fig. 2  Process of proposed method

Fig. 3  Image processing steps of proposed method: (a) Original, (b) DCP, (c) CLAHE, (d) 2D DWT, (e) Fusion 1, (f) Fusion 2

468 IET Image Process., 2018, Vol. 12 Iss. 4, pp. 465–471 © The Institution of Engineering and Technology 2017

The resulting images from the proposed method are shown in Figs. 5d and 6d, and the red arrows point to areas with notable enhancement. In Figs. 5d and 6d, the improvement in visibility and colour is obvious, including the buildings and trees in the background. Figs. 5e and f show the feature points for Figs. 5a and d, and Figs. 6e and f show the feature points for Figs. 6a and d. Figs. 5f and 6f show more feature points than Figs. 5e and 6e.

Fig. 7 shows the fog removal results for the indoor image. In Fig. 7d, the result of the proposed method shows better enhancement, including at the top of the machine and at the boundary of the inner wall; red arrows point to those areas. Figs. 7e and f show the feature points for Figs. 7a and d. Fig. 7f shows many feature points, whereas Fig. 7e shows very few. In Fig. 7, the indoor results were not as good as the outdoor results: fog removal and contrast improvement were weaker indoors than outdoors. Although there may be several reasons for this, the density of the artificial fog and the artificial lighting are assumed to be the main factors. Thus, both the existing methods and the proposed method can remove fog in a dark indoor environment, but the effect is limited, and further analysis and algorithmic refinement will be needed in future work.

Table 1 shows the quality assessment results for the dehazed images from Figs. 4–7. In Table 1, all images show quality improvement after fog removal. The first blind assessment indicator, e, shows that the Tarel method is highest for many images, with the proposed method next highest. The second blind assessment indicator, r̄, demonstrates that the proposed method and the Tarel method give high and similar values in almost all images. The third blind assessment indicator, σ, is low for all methods. The average and standard deviation of e, r̄, and σ were calculated; by these measures, Tarel's method shows the best results. Although the Tarel method scores highest for most images, the proposed method also scores highly, and in subjective evaluation the proposed method was rated above Tarel's in terms of contrast and visibility. We additionally tested a number of other images and obtained results similar to those shown in Table 1.

Fig. 4  Fog-affected and dehazed images: (a) Original, (b) CLAHE, (c) Tarel, (d) He, (e) Meng, (f) Proposed

Fig. 5  Dehaze results for outdoor foggy image (U10): (a) Original, (b) Tarel, (c) He, (d) Proposed, (e) Original + features, (f) Proposed + features

Fig. 6  Dehaze results for outdoor foggy image (K10): (a) Original, (b) Tarel, (c) He, (d) Proposed, (e) Original + features, (f) Proposed + features

Fig. 7  Dehaze results for indoor foggy image: (a) Original, (b) Tarel, (c) He, (d) Proposed, (e) Original + features, (f) Proposed + features
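The blind indicators of Table 1 follow Hautiere et al. [26]. As a rough illustration (not the authors' implementation), e can be taken as the rate of edges newly visible after restoration and σ as the percentage of pixels that saturate to pure black or white; the gradient threshold and toy images below are assumptions for this sketch.

```python
import numpy as np

def visible_edges(img, thresh=0.05):
    # Count 'visible' edge pixels via gradient-magnitude thresholding;
    # a simplified stand-in for the Sobel-based visibility criterion
    # of Hautiere et al. [26].
    gy, gx = np.gradient(img.astype(float))
    return int(np.count_nonzero(np.hypot(gx, gy) > thresh))

def blind_indicators(original, restored):
    # e: rate of edges newly visible after restoration.
    # sigma: percentage of pixels saturated to pure black or white.
    n0 = visible_edges(original)
    nr = visible_edges(restored)
    e = (nr - n0) / n0
    sigma = 100.0 * np.count_nonzero(
        (restored <= 0.0) | (restored >= 1.0)) / restored.size
    return e, sigma

# Toy example: a faint step in the foggy image becomes a strong step
# after contrast restoration, so additional edges become visible.
foggy = np.tile([0.4, 0.4, 0.42, 0.42, 0.6, 0.6, 0.6, 0.6], (8, 1))
restored = np.tile([0.05, 0.05, 0.2, 0.2, 0.95, 0.95, 0.95, 0.95], (8, 1))
e, sigma = blind_indicators(foggy, restored)
print(e, sigma)  # e > 0 (new edges visible), sigma = 0 (no saturation)
```

A good restoration therefore shows high e and r̄ with low σ, which matches the pattern reported in Table 1.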

Table 1 also shows the image feature points and a comparison of the computational efficiency of the defogging algorithms. It can be seen that the proposed method yields the most image feature points and is faster than the other algorithms. Here, the method of He et al. shows the time when the refinement process is performed using a fast guided filter; the time stated in brackets indicates the processing time of the DCP algorithm alone, excluding the refinement process. The DCP calculation time of the proposed method is approximately three to four times shorter than that of the conventional DCP method, so the proposed modified transmission algorithm is both very fast and efficient. The proposed method is generally effective for fog removal; however, this is not the case for some images, such as dark fog images and dense fog images. From the tables and figures above, it can be seen that the quality assessment indexes are not absolutely consistent with subjective evaluation; however, these indexes can be used as a reference for comparing different defogging algorithms.
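For context, the conventional DCP transmission estimate of He et al. [12], against which the proposed modified map is timed, can be sketched as follows. This is a simplified, unoptimised illustration (naive min-filter loop; the patch size, airlight, and omega values are assumptions), and the authors' modified transmission map itself is not reproduced here.

```python
import numpy as np

def dark_channel(img, patch=15):
    # Per-pixel minimum over colour channels, then a patch x patch
    # min-filter: the dark channel of He et al. [12].
    h, w, _ = img.shape
    mins = img.min(axis=2)
    r = patch // 2
    padded = np.pad(mins, r, mode='edge')
    dark = np.empty_like(mins)
    for y in range(h):
        for x in range(w):
            dark[y, x] = padded[y:y + patch, x:x + patch].min()
    return dark

def transmission(img, airlight, omega=0.95, patch=15):
    # Coarse transmission estimate: t(x) = 1 - omega * dark(I / A).
    return 1.0 - omega * dark_channel(img / airlight, patch)

# Pixels matching the airlight (pure fog) get t near 0; a dark,
# haze-free patch gets t near 1.
A = np.array([0.9, 0.9, 0.9])            # assumed atmospheric light
img = np.full((8, 8, 3), 0.9)            # fog-coloured background
img[:4, :4] = [0.05, 0.4, 0.1]           # dark, haze-free region
t = transmission(img, A, patch=3)
print(t[0, 0], t[7, 7])                  # high t vs. low t
```

The per-pixel min-filter dominates the cost of this baseline, which is why avoiding the subsequent refinement step yields the speed-ups reported in Table 1.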

The experiments were carried out using Matlab for both the conventional methods and the proposed method. The source codes of the conventional algorithms were provided by their respective authors. A notebook with a 2.4 GHz Intel Core i5 CPU was used.

5 Conclusion

In this study, we improved visibility and contrast in a variety of indoor and outdoor fog images using the DCP and CLAHE-DWT algorithms. First, we obtained fast processing speed and a clean dehazed image through the DCP algorithm with a modified transmission map. Second, we obtained improved contrast and sharper images through the CLAHE-DWT method. Finally, an enhanced image was produced by fusing the two improved images. To demonstrate the effectiveness of the proposed method, we adopted image quality assessment, feature points, and computation time as evaluation criteria, and good results were obtained for various indoor and outdoor fog images. It is expected that real-time applications will become possible through software and hardware optimisation, and we plan to pursue further research on dark fog images in the future.

6 References

[1] Land, E.H., McCann, J.J.: 'Lightness and retinex theory', J. Opt. Soc. Am., 1971, 61, (1), pp. 1–11
[2] Rahman, Z.-U., Woodell, G.A., Jobson, D.J.: 'A comparison of the multiscale retinex with other image enhancement techniques'. Proc. IS&T 50th Anniversary Conf., May 1997, pp. 1–6
[3] Pizer, S.M., Amburn, E.P., Austin, J.D., et al.: 'Adaptive histogram equalization and its variations', Comput. Vis. Graph. Image Process., 1987, 39, (3), pp. 355–368
[4] Stark, J.A.: 'Adaptive image contrast enhancement using generalizations of histogram equalization', IEEE Trans. Image Process., 2000, 9, (5), pp. 889–896
[5] Lidong, H., Wei, Z., Jun, W., et al.: 'Combination of contrast limited adaptive histogram equalization and discrete wavelet transform for image enhancement', IET Image Process., 2015, 9, (10), pp. 908–915
[6] Schechner, Y.Y., Narasimhan, S.G., Nayar, S.K.: 'Instant dehazing of images using polarization'. Proc. Computer Vision and Pattern Recognition, 2001, vol. 1, pp. 325–332
[7] Narasimhan, S.G., Nayar, S.K.: 'Chromatic framework for vision in bad weather'. IEEE Conf. on Computer Vision and Pattern Recognition, 2000, vol. 1, pp. 598–605
[8] Kopf, J., Neubert, B., Chen, B., et al.: 'Deep photo: model-based photograph enhancement and viewing', ACM Trans. Graph., 2008, 27, (5), pp. 116:1–116:10
[9] Tan, R.: 'Visibility in bad weather from a single image'. Proc. Computer Vision and Pattern Recognition, June 2008, pp. 1–8
[10] Fattal, R.: 'Single image dehazing', ACM Trans. Graph., 2008, 27, (3), pp. 1–9
[11] Tarel, J.P., Hautiere, N.: 'Fast visibility restoration from a single color or gray level image'. Proc. IEEE Int. Conf. on Computer Vision, Kyoto, Japan, 2009, pp. 2201–2208
[12] He, K., Sun, J., Tang, X.: 'Single image haze removal using dark channel prior', IEEE Trans. Pattern Anal. Mach. Intell., 2011, 33, (12), pp. 2341–2353
[13] Levin, A., Lischinski, D., Weiss, Y.: 'A closed form solution to natural image matting'. Conf. on Computer Vision and Pattern Recognition (CVPR), June 2007
[14] Yang, S., Zhu, Q., Wang, J., et al.: 'An improved single image haze removal algorithm based on dark channel prior and histogram specification'. Proc. 3rd Int. Conf. on Multimedia Technology, 2013, pp. 279–292
[15] Tomasi, C., Manduchi, R.: 'Bilateral filtering for gray and color images'. Int. Conf. Computer Vision, January 1998, pp. 839–846
[16] He, K., Sun, J., Tang, X.: 'Guided image filtering', IEEE Trans. Pattern Anal. Mach. Intell., 2013, 35, (6), pp. 1397–1409
[17] Xu, Y., Wen, J., Fei, L., et al.: 'Review of video and image defogging algorithms and related studies on image restoration and enhancement', IEEE Access, 2016, 4, pp. 165–188, doi: 10.1109/ACCESS.2015.2511558
[18] Wang, Z., Bovik, A.C., Sheikh, H.R., et al.: 'Image quality assessment: from error visibility to structural similarity', IEEE Trans. Image Process., 2004, 13, (4), pp. 600–612
[19] Carnec, M., Le Callet, P., Barba, D.: 'Objective quality assessment of color images based on a generic perceptual reduced reference', Signal Process. Image Commun., 2008, 23, (4), pp. 239–256
[20] Sheikh, H.R., Bovik, A.C., Cormack, L.: 'No-reference quality assessment using natural scene statistics: JPEG2000', IEEE Trans. Image Process., 2005, 14, (11), pp. 1918–1927
[21] Adrian, W.: 'Visibility of targets: model for calculation', Light. Res. Technol., 1989, 21, (4), pp. 181–188
[22] Tuytelaars, T., Mikolajczyk, K.: 'Local invariant feature detectors: a survey', Found. Trends Comput. Graph. Vis., 2007, 3, (3), pp. 177–280
[23] Lowe, D.: 'Object recognition from local scale-invariant features'. Int. Conf. on Computer Vision, 1999, pp. 1150–1157

Table 1 Comparison of quality assessment for dehazed images

Figure (size)       Method    e              r̄            σ, %         Feature points  Processing time, s
Fig. 4 (767 × 576)  Tarel     12.07          1.95         0.00         9788            6.01
                    He        5.79           1.41         0.11         7732            4.39 (4.20)
                    Proposed  5.82           1.74         0.11         10,273          1.90 (1.09)
Fig. 5 (640 × 480)  Tarel     20.81          2.72         0.00         878             3.09
                    He        11.99          1.75         0.31         695             4.11 (3.67)
                    Proposed  21.08          2.77         0.31         1566            1.52 (0.72)
Fig. 6 (640 × 480)  Tarel     19.97          2.47         0.00         1427            3.09
                    He        10.51          1.51         0.29         1023            4.11 (3.67)
                    Proposed  17.00          2.28         0.29         1691            1.52 (0.72)
Fig. 7 (780 × 580)  Tarel     40.64          4.59         0.00         278             6.47
                    He        6.30           1.16         0.00         20              4.99 (4.47)
                    Proposed  38.99          4.85         0.00         430             2.17 (1.11)
Ave. (std. dev.)    Tarel     23.37 (12.17)  2.93 (1.15)  0 (0)        3093            4.67
                    He        8.65 (3.07)    1.46 (0.24)  0.18 (0.15)  2368            4.4
                    Proposed  20.72 (13.78)  2.91 (1.36)  0.18 (0.15)  3276            1.78



[24] Lowe, D.: 'Distinctive image features from scale-invariant keypoints', Int. J. Comput. Vis., 2004, 60, (2), pp. 91–110
[25] Foggy Road Image Database (FRIDA)
[26] Hautiere, N., Tarel, J., Aubert, D., et al.: 'Blind contrast enhancement assessment by gradient ratioing at visible edges', Image Anal. Stereol. J., 2008, 27, (2), pp. 1–7
