Abstract
This paper proposes an inter-frame video image generation method based on spatial continuity generative adversarial networks (SC-GANs), which smooths the playback of low-frame-rate videos and sharpens the blurred image edges produced by traditional frame-rate up-conversion methods. First, an auto-encoder is used as the discriminator, and the Wasserstein distance is applied to measure the difference between the loss distributions of real and generated samples, rather than directly matching the data distributions as in typical generative adversarial networks. Second, a hyperparameter balancing the generator and discriminator is used to stabilize training, which effectively prevents model collapse. Finally, exploiting the spatial continuity of image features across consecutive video frames, an optimal point between the two frames is found with Adam and then mapped to image space to generate the inter-frame image. To assess the authenticity of the generated inter-frame images, PSNR and SSIM are adopted as evaluation metrics; the results show that the generated images have a high degree of authenticity, verifying the feasibility and validity of the proposed SC-GAN-based method.
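The latent-space search described in the abstract — finding an intermediate point between two consecutive frames with Adam and mapping it back to image space — can be illustrated with a toy sketch. Everything here is an assumption for illustration: a fixed linear map `W @ z` stands in for the trained generator, `z1` and `z2` stand in for the auto-encoder's latent codes of two consecutive frames, and the objective simply pulls the decoded image toward both frames; the paper's actual networks and loss are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)
latent_dim, image_dim = 8, 64
W = rng.normal(size=(image_dim, latent_dim))  # stand-in generator weights

def generate(z):
    """Toy 'generator': map a latent code to a flat 'image'."""
    return W @ z

z1 = rng.normal(size=latent_dim)  # assumed latent code of frame t
z2 = rng.normal(size=latent_dim)  # assumed latent code of frame t+1
f1, f2 = generate(z1), generate(z2)

def loss_and_grad(z):
    """Pull the decoded image toward both neighbouring frames."""
    r1, r2 = generate(z) - f1, generate(z) - f2
    loss = (r1 ** 2).sum() + (r2 ** 2).sum()
    grad = 2.0 * W.T @ (r1 + r2)  # analytic gradient of the quadratic loss
    return loss, grad

# Plain Adam on the latent code, starting from frame t's code.
z = z1.copy()
m, v = np.zeros_like(z), np.zeros_like(z)
lr, b1, b2, eps = 0.05, 0.9, 0.999, 1e-8
losses = []
for t in range(1, 301):
    loss, g = loss_and_grad(z)
    losses.append(loss)
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g ** 2
    z -= lr * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)

inter_frame = generate(z)  # the synthesised in-between 'image'
```

For this symmetric quadratic objective the minimiser is the midpoint `(z1 + z2) / 2`, so the sketch converges toward a latent code halfway between the two frames; in the paper's setting the generator is nonlinear and the optimum need not be the midpoint, which is exactly why an optimizer such as Adam is used rather than plain interpolation.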
Zhang, T., Jiang, P. & Zhang, M. Inter-frame video image generation based on spatial continuity generative adversarial networks. SIViP 13, 1487–1494 (2019). https://doi.org/10.1007/s11760-019-01499-0