Adaptive feature fusion network based on boosted attention mechanism for single image dehazing

Published in Multimedia Tools and Applications

Abstract

Recently, convolutional neural network-based methods have achieved significant improvements in image dehazing. However, these algorithms still face the challenge of producing haze-free images while preserving credible contrast and color fidelity. In this paper, we propose an adaptive feature fusion network to remove haze and restore realistic details from both global and local perspectives. On the global scale, we learn compact feature representations by progressive downsampling, which provides overall information from the encoded high-level semantic context. In addition, dilated convolution is adopted to expand the receptive field, which effectively captures contextual information and alleviates the detail loss caused by resolution reduction. Correspondingly, the proposed method employs a local branch to enrich the feature representations and further emphasize the information that matters for image detail recovery. To this end, we design a residual dense attention block (RDAB), which encourages mid-level feature aggregation and persistent memory through dense connections. Within the RDAB, a boosted attention mechanism (BAM) explicitly models the interdependencies between feature channels at different scales, and a weighting operation then balances the information flow received from these scales. Moreover, an adaptive weighted network is developed to achieve a good trade-off between the contributions of global and local information for semantic image dehazing. To account for perceptual quality as well as pixel-wise accuracy, we use a smooth L1 loss and a perceptual loss to reconstruct the dehazed images. Extensive evaluation demonstrates that our method outperforms related work by about 4 dB in PSNR and 0.1 in SSIM while preserving credible contrast and color fidelity.
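
To make the architecture described above more concrete, the following is a minimal PyTorch-style sketch of the main ideas, written by analogy with the abstract. The class names (BoostedAttention, RDAB, AdaptiveFusion), the layer counts, channel sizes, and the perceptual-loss weight are illustrative assumptions and are not taken from the authors' implementation.

# Hypothetical PyTorch sketch of the modules described in the abstract;
# all names and hyper-parameters below are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BoostedAttention(nn.Module):
    # Channel attention computed at two pooling scales and then re-weighted.
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )
        self.scale_weights = nn.Parameter(torch.ones(2))  # balances the two scales

    def forward(self, x):
        g1 = F.adaptive_avg_pool2d(x, 1)                  # global channel descriptor
        g2 = F.adaptive_avg_pool2d(x, 2)                  # coarser 2x2 descriptor
        a1 = self.mlp(g1)
        a2 = F.adaptive_avg_pool2d(self.mlp(g2), 1)
        w = torch.softmax(self.scale_weights, dim=0)
        attn = torch.sigmoid(w[0] * a1 + w[1] * a2)       # weighted multi-scale attention
        return x * attn


class RDAB(nn.Module):
    # Residual dense block with the boosted attention appended.
    def __init__(self, channels, growth=32, num_layers=4):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv2d(channels + i * growth, growth, 3, padding=1) for i in range(num_layers)]
        )
        self.fuse = nn.Conv2d(channels + num_layers * growth, channels, kernel_size=1)
        self.attn = BoostedAttention(channels)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(F.relu(conv(torch.cat(feats, dim=1))))   # dense connections
        out = self.fuse(torch.cat(feats, dim=1))                  # aggregate mid-level features
        return x + self.attn(out)                                 # residual path keeps earlier features


class AdaptiveFusion(nn.Module):
    # Learns per-pixel weights that trade off the global and local branches.
    def __init__(self, channels):
        super().__init__()
        self.weight = nn.Conv2d(2 * channels, 2, kernel_size=3, padding=1)

    def forward(self, f_global, f_local):
        w = torch.softmax(self.weight(torch.cat([f_global, f_local], dim=1)), dim=1)
        return w[:, 0:1] * f_global + w[:, 1:2] * f_local


def dehazing_loss(pred, target, perceptual_net, lam=0.04):
    # Smooth L1 reconstruction term plus a feature-space (perceptual) term.
    # perceptual_net is assumed to be a frozen VGG-style feature extractor;
    # the weight lam is an assumption, not a value reported in the paper.
    rec = F.smooth_l1_loss(pred, target)
    perc = F.l1_loss(perceptual_net(pred), perceptual_net(target))
    return rec + lam * perc

In this sketch, the dense connections inside RDAB stand in for mid-level feature aggregation and persistent memory, the two pooling scales in BoostedAttention correspond to modeling channel interdependencies at different scales with a learned weighting between them, and the softmax-normalized map in AdaptiveFusion plays the role of the adaptive trade-off between the global and local branches.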




Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (No. 61972023) and the Fundamental Research Funds for the Central Universities (2019YJS031, 2019JBZ102).

Author information


Corresponding author

Correspondence to Huihui Bai.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Wang, Z., Li, F., Cong, R. et al. Adaptive feature fusion network based on boosted attention mechanism for single image dehazing. Multimed Tools Appl 81, 11325–11339 (2022). https://doi.org/10.1007/s11042-022-12151-4

