Two-layer pyramid-based blending method for exposure fusion

  • Original Paper
  • Published in Machine Vision and Applications

Abstract

Multi-exposure fusion is a technique for generating a high-dynamic-range-like image without estimating the camera response function and without compressing the dynamic range through tone mapping. Many schemes exist for fusing multi-exposure images. One well-known scheme is pyramid-based blending, which fuses the exposures using the concept of multi-resolution blending. This method is computationally fast and produces satisfactory results. In some cases, however, pyramid-based blending faces a trade-off between preserving local detail and keeping region boundaries smooth. To resolve this trade-off, we propose a new pyramid-based blending scheme, called the two-layer pyramid-based blending method for multi-exposure fusion. We found that some sets of input images require generating a virtual photograph and adding it to the set, so we first propose a criterion for adding the virtual photograph. We then construct a concatenation of two pyramid-based blending stages, in which the images produced at different levels of the first-layer pyramid-based blend serve as inputs to the second-layer blend. The test results showed that the resulting images were satisfactory, and their objective scores on perceptual image quality and detail-preservation assessments were high, though not the highest, compared with those of the methods used in our experiments. The proposed method also has one advantage over the others: it preserved details in some areas where the other methods could not.
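
The single-layer pyramid-based blend that the proposed method builds on follows the multi-resolution blending idea of Mertens et al.: each exposure receives a per-pixel quality weight, the Laplacian pyramid of each exposure is mixed level by level using the Gaussian pyramid of its normalized weight map, and the fused pyramid is collapsed back into a single image. The sketch below illustrates that baseline in Python with OpenCV and NumPy; it is not the authors' two-layer implementation, and the weight measures, level count, and function names are illustrative assumptions.

```python
import cv2
import numpy as np

def quality_weights(img, sigma=0.2):
    """Per-pixel weight from contrast, saturation, and well-exposedness,
    the three measures commonly used in Mertens-style exposure fusion."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    contrast = np.abs(cv2.Laplacian(gray, cv2.CV_32F))
    saturation = img.std(axis=2)
    well_exposed = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2)).prod(axis=2)
    return contrast * saturation * well_exposed + 1e-12  # avoid all-zero weights

def pyramid_blend(images, weights, levels=6):
    """Blend Laplacian pyramids of the exposures, weighted level by level
    by Gaussian pyramids of the weight maps, then collapse the result."""
    fused = None
    for img, w in zip(images, weights):
        # Gaussian pyramids of the image and of its weight map
        gp_i, gp_w = [img], [w]
        for _ in range(levels - 1):
            gp_i.append(cv2.pyrDown(gp_i[-1]))
            gp_w.append(cv2.pyrDown(gp_w[-1]))
        # Laplacian pyramid of the image
        lp = []
        for k in range(levels - 1):
            size = (gp_i[k].shape[1], gp_i[k].shape[0])
            lp.append(gp_i[k] - cv2.pyrUp(gp_i[k + 1], dstsize=size))
        lp.append(gp_i[-1])
        # Accumulate weighted pyramid coefficients across exposures
        contrib = [lp[k] * gp_w[k][..., None] for k in range(levels)]
        fused = contrib if fused is None else [f + c for f, c in zip(fused, contrib)]
    # Collapse the fused pyramid back into one image
    out = fused[-1]
    for k in range(levels - 2, -1, -1):
        size = (fused[k].shape[1], fused[k].shape[0])
        out = cv2.pyrUp(out, dstsize=size) + fused[k]
    return np.clip(out, 0.0, 1.0)

def fuse_exposures(paths, levels=6):
    """Load a bracketed exposure sequence and fuse it into one image."""
    imgs = [cv2.imread(p).astype(np.float32) / 255.0 for p in paths]
    w = np.stack([quality_weights(im) for im in imgs])
    w /= w.sum(axis=0, keepdims=True)  # weights sum to 1 at every pixel
    return pyramid_blend(imgs, list(w), levels)
```

The two-layer scheme described in the abstract would, conceptually, run a blend of this kind once, reconstruct intermediate images from different levels of the first blend, and feed those reconstructions into a second blend of the same form; the paper's virtual-photograph criterion decides when a synthetic exposure must first be added to the input set.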

Acknowledgements

This work was supported by a grant under the SIIT-JAIST-NECTEC Dual Doctoral Program. The authors would like to thank Mr. Veerachart Srisamosorn for providing some of the datasets.

Author information

Corresponding author

Correspondence to Suthum Keerativittayanun.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Keerativittayanun, S., Kondo, T., Kotani, K. et al. Two-layer pyramid-based blending method for exposure fusion. Machine Vision and Applications 32, 48 (2021). https://doi.org/10.1007/s00138-021-01175-9
