
Inter-frame video image generation based on spatial continuity generative adversarial networks

  • Original Paper
  • Published in: Signal, Image and Video Processing

Abstract

This paper proposes a method for generating inter-frame video images based on spatial continuity generative adversarial networks (SC-GANs). The goal is to smooth the playback of low-frame-rate video and to avoid the blurred image edges that traditional frame-rate up-conversion methods introduce. First, an auto-encoder is used as the discriminator, and the Wasserstein distance is applied to measure the difference between the reconstruction-loss distributions of real and generated samples, rather than matching the data distributions directly as typical GANs do. Second, a hyperparameter balancing the generator and the discriminator is used to stabilize training, which effectively prevents model collapse. Finally, exploiting the spatial continuity of image features across consecutive video frames, an optimal latent value between two consecutive frames is found with Adam and then mapped to image space to generate the inter-frame image. To assess the authenticity of the generated inter-frame images, PSNR and SSIM are adopted as evaluation measures; the results show that the generated images have a high degree of authenticity, verifying the feasibility and validity of the proposed SC-GAN-based method.
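The first two steps of the abstract (auto-encoder discriminator, loss-distribution matching, and a balancing hyperparameter) follow the boundary-equilibrium formulation of Berthelot et al.'s BEGAN. A minimal numpy sketch of that objective is given below, assuming an L1 reconstruction loss for the auto-encoder discriminator; the function names and the `gamma`/`lam` values are illustrative defaults, not taken from the paper.

```python
import numpy as np

def recon_loss(x, x_rec):
    """L1 reconstruction loss of the auto-encoder discriminator."""
    return float(np.mean(np.abs(x - x_rec)))

def equilibrium_step(L_real, L_fake, k_t, gamma=0.5, lam=1e-3):
    """One BEGAN-style update: the discriminator lowers the real-sample
    reconstruction loss while (scaled by k_t) raising the fake-sample loss,
    the generator lowers the fake-sample loss, and k_t is adjusted so the
    ratio of the two losses tracks the balance term gamma."""
    loss_D = L_real - k_t * L_fake                      # discriminator objective
    loss_G = L_fake                                     # generator objective
    k_next = float(np.clip(k_t + lam * (gamma * L_real - L_fake), 0.0, 1.0))
    m_global = L_real + abs(gamma * L_real - L_fake)    # convergence measure
    return loss_D, loss_G, k_next, m_global
```

For example, with `L_real = 0.8`, `L_fake = 0.2`, and `k_t = 0`, the balance term is nudged to `k_next = 1e-3 * (0.5 * 0.8 - 0.2) = 0.0002`, slowly increasing the weight on fooling the discriminator as training proceeds.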
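The final step, searching latent space between two consecutive frames with Adam, can be sketched with a toy linear "generator" standing in for the trained SC-GAN generator and a hand-rolled Adam update. Everything here (`W`, `z1`, `z2`, and all hyperparameters) is illustrative: minimizing the summed image-space distance to the two known frames drives the latent code toward their midpoint, which is then mapped back to image space as the inter-frame image.

```python
import numpy as np

def adam_step(z, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update on the latent code z (bias-corrected moments)."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return z - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy linear "generator" mapping a 2-D latent code to a 3-pixel "image".
W = np.array([[1.0, 0.5], [0.0, 1.0], [0.5, -1.0]])
G = lambda z: W @ z

z1, z2 = np.array([1.0, -1.0]), np.array([-1.0, 1.0])  # codes of frames t and t+1
f1, f2 = G(z1), G(z2)

# Find z minimizing ||G(z) - f1||^2 + ||G(z) - f2||^2; G(z) is the inter-frame image.
z, m, v = z1.copy(), np.zeros(2), np.zeros(2)
for t in range(1, 2001):
    grad = 2.0 * W.T @ (2.0 * G(z) - f1 - f2)   # gradient of the summed distance
    z, m, v = adam_step(z, grad, m, v, t)

inter_frame = G(z)   # z ends up near the latent midpoint (z1 + z2) / 2
```

With a real generator the objective is non-quadratic and the distances would typically be measured against encodings of the two frames rather than raw pixels, but the search loop has the same shape.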
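The two evaluation measures named in the abstract can be computed as follows. This is a minimal sketch: the PSNR is standard, while the SSIM shown is the simplified single-window (global) form; the commonly reported SSIM averages this statistic over local 11x11 windows.

```python
import numpy as np

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(a, b, max_val=255.0, k1=0.01, k2=0.03):
    """Structural similarity computed over the whole image as one window."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    c1, c2 = (k1 * max_val) ** 2, (k2 * max_val) ** 2   # stabilizing constants
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
```

A generated inter-frame image is then scored against the ground-truth frame it replaces; identical images give infinite PSNR and SSIM of 1.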



Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Tao Zhang.


About this article


Cite this article

Zhang, T., Jiang, P. & Zhang, M. Inter-frame video image generation based on spatial continuity generative adversarial networks. SIViP 13, 1487–1494 (2019). https://doi.org/10.1007/s11760-019-01499-0
