
Haze Removal via Edge Weighted Pixel-to-Patch Fusion


Abstract

One of the key issues in image dehazing is how to accurately estimate the transmission map using strong priors or assumptions. By far the most common prior adopted by existing haze removal approaches is the dark channel prior (DCP). Despite remarkable progress, existing DCP-based methods tend to misestimate the transmission while trying to prevent halo artifacts, and may distort the recovered haze-free images in both chromaticity and contrast. This is because they overlook the fact that edges in a color image do not necessarily correspond to changes in depth; instead, they try to preserve all types of edges indiscriminately in the estimated transmission map, which is inconsistent with the properties of a realistic transmission map. To address this issue, we propose an efficient haze removal method guided by depth edges. The main contribution of our study is a depth edge prior that extracts the depth edges from the hazy image; we then employ a pixel-to-patch fusion scheme weighted by these depth edges to estimate the transmission directly, which preserves the sharp discontinuities at depth edges while smoothing the surface texture in the remaining regions of the transmission map. Experimental results show that our approach obtains a more accurate estimate of the transmission and consequently restores higher-quality haze-free images with less color distortion and higher contrast.
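As a rough illustration of the kind of pipeline the abstract describes, the Python sketch below fuses a pixel-wise transmission estimate (sharp but noisy) with a patch-wise dark-channel estimate (smooth but prone to halos), blended by an edge-derived weight. The abstract does not specify the depth edge prior or the exact fusion weights, so the Canny edge map, the Gaussian-smoothed weight, and all parameter values here are illustrative placeholders, not the authors' method.

# Minimal sketch of edge-weighted pixel-to-patch transmission fusion for dehazing.
# Assumption: the paper's depth edge prior is not given in the abstract, so a
# Canny edge map and a simple convex combination stand in for it here.
import cv2
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over the color channels, then a min-filter (erosion) over a patch."""
    min_rgb = img.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def estimate_atmospheric_light(img, dark, top_frac=0.001):
    """Average the hazy-image colors at the brightest fraction of dark-channel pixels."""
    n = max(1, int(dark.size * top_frac))
    idx = np.argsort(dark.ravel())[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

def fused_transmission(img, A, omega=0.95, patch=15):
    norm = (img / np.maximum(A, 1e-6)).astype(np.float32)   # normalize by atmospheric light
    t_pixel = 1.0 - omega * norm.min(axis=2)                 # pixel-wise estimate: sharp but noisy
    t_patch = 1.0 - omega * dark_channel(norm, patch)        # patch-wise estimate: smooth, halo-prone
    gray = cv2.cvtColor((img * 255).astype(np.uint8), cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150).astype(np.float32) / 255.0
    w = np.clip(cv2.GaussianBlur(edges, (0, 0), sigmaX=3), 0.0, 1.0)   # soft edge weight in [0, 1]
    return w * t_pixel + (1.0 - w) * t_patch                 # pixel estimate near edges, patch elsewhere

def dehaze(img, t, A, t0=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t)."""
    t = np.clip(t, t0, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)

if __name__ == "__main__":
    hazy = cv2.imread("hazy.png").astype(np.float32) / 255.0   # placeholder input path
    A = estimate_atmospheric_light(hazy, dark_channel(hazy))
    t = fused_transmission(hazy, A)
    cv2.imwrite("dehazed.png", (dehaze(hazy, t, A) * 255).astype(np.uint8))

In the paper's formulation the weight would come from the proposed depth edge prior rather than a photometric edge detector, so that texture edges lying on flat surfaces do not force the noisier pixel-wise estimate; the sketch only shows where such a weight enters the fusion.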





Acknowledgements

This work was jointly supported by the National Natural Science Foundation of China (No. 61301291), the 111 Project (B08038), and the Shaanxi Province Science and Technology Innovation Team Project (2013KCT-02). The authors would also like to thank Prof. Xiaolin Wu for his helpful comments regarding the improvement of this paper.

Author information

Corresponding author

Correspondence to Yunsong Li.


About this article


Cite this article

Wang, K., Zhang, S. & Li, Y. Haze Removal via Edge Weighted Pixel-to-Patch Fusion. Mobile Netw Appl 22, 464–477 (2017). https://doi.org/10.1007/s11036-017-0869-y

