
Very deep fully convolutional encoder–decoder network based on wavelet transform for art image fusion in cloud computing environment

  • Original Paper
  • Published in: Evolving Systems

Abstract

In the cloud computing environment, big-data video imagery carries a large amount of information: the same scene is typically captured in many images, yet no single image describes it sufficiently. Traditional image fusion algorithms suffer from defects such as poor quality, low resolution, and information loss in the fused image. We therefore propose a very deep fully convolutional encoder–decoder network based on the wavelet transform for art image fusion in the cloud computing environment. The network builds on VGG-Net and comprises an encoder sub-network and a decoder sub-network. The images to be fused are decomposed by the wavelet transform into low-frequency and high-frequency sub-images at different scales, and separate fusion schemes are given for the low-frequency and high-frequency sub-band coefficients. Taking the structural similarity between the images before and after fusion as the objective, and introducing a weight factor for local image information, a loss function tailored to the final fusion is defined, so that the fused image retains the effective information of the different input images. Compared with other state-of-the-art image fusion methods, the proposed method achieves significant improvement in both subjective visual experience and objective quantitative indexes.
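The abstract describes a wavelet-domain pipeline in which each input image is split into low-frequency and high-frequency sub-bands that are fused by separate rules. The sketch below (Python, using NumPy and PyWavelets) is a minimal illustration of that split-and-fuse structure only: the function name `wavelet_fuse`, the `db2` wavelet, and the averaging / max-absolute fusion rules are illustrative assumptions, not the paper's schemes, and the learned VGG-style encoder–decoder and the SSIM-weighted loss are omitted.

```python
# Minimal sketch of wavelet-domain image fusion (not the authors' implementation).
# Shows only the sub-band split/fuse/reconstruct structure described in the abstract.
import numpy as np
import pywt


def wavelet_fuse(img_a: np.ndarray, img_b: np.ndarray, wavelet: str = "db2") -> np.ndarray:
    """Fuse two grayscale images of identical shape in the wavelet domain."""
    # Single-level 2-D DWT: returns the low-frequency approximation cA and the
    # high-frequency detail sub-bands (horizontal, vertical, diagonal).
    cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(img_a, wavelet)
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(img_b, wavelet)

    # Low-frequency rule (placeholder): average the approximation coefficients.
    cA_f = 0.5 * (cA_a + cA_b)

    # High-frequency rule (placeholder): keep the coefficient with the larger
    # magnitude, which tends to preserve edges and texture detail.
    def pick_max_abs(x: np.ndarray, y: np.ndarray) -> np.ndarray:
        return np.where(np.abs(x) >= np.abs(y), x, y)

    details_f = tuple(
        pick_max_abs(x, y)
        for x, y in [(cH_a, cH_b), (cV_a, cV_b), (cD_a, cD_b)]
    )

    # Inverse DWT reconstructs the fused image from the fused sub-bands.
    return pywt.idwt2((cA_f, details_f), wavelet)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.random((128, 128))
    b = rng.random((128, 128))
    fused = wavelet_fuse(a, b)
    print(fused.shape)  # (128, 128)
```

In the paper's method, the hand-crafted rules above would be replaced by the encoder–decoder network's learned fusion of the sub-band coefficients, trained with the SSIM-oriented, locally weighted loss mentioned in the abstract.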



Availability of data and materials

The data can be obtained from the corresponding author.


Funding

None.

Author information


Corresponding author

Correspondence to Juan Yang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Chen, T., Yang, J. Very deep fully convolutional encoder–decoder network based on wavelet transform for art image fusion in cloud computing environment. Evolving Systems 14, 281–293 (2023). https://doi.org/10.1007/s12530-022-09457-x

