vi-MoCoGAN: A Variant of MoCoGAN for Video Generation of Human Hand Gestures Under Different Viewpoints

  • Conference paper
  • First Online:
Pattern Recognition (ACPR 2019)

Abstract

This paper presents a method for video generation under different viewpoints. It is inspired by MoCoGAN, which models a video clip in two latent sub-spaces (content and motion) and has recently achieved impressive results. However, MoCoGAN and most existing video-generation methods do not take viewpoint into account, so they cannot generate videos from a specified viewpoint, a capability needed for data augmentation and advertising applications. To this end, we follow the conditional-GAN idea and introduce a new variable that controls the viewpoint of the generated video. In addition, to keep the subject consistent while the action unfolds, we use an additional sub-network to generate the content control vector instead of sampling it at random. We also modify the objective function for training the network to measure the similarity of the generated video's content, action, and view to those of the ground-truth video. Preliminary experiments on generating video clips of dynamic human hand gestures show the method's potential for generating videos under different viewpoints.
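
To make the decomposition concrete, here is a minimal sketch of how such a generator could be wired: a per-clip content code produced by a dedicated sub-network, per-frame motion codes unrolled by a recurrent network, and a one-hot view label conditioning every frame. All class names, layer choices, and dimensions below are illustrative assumptions (PyTorch, with a toy fully connected frame decoder standing in for a convolutional one), not the paper's actual implementation.

import torch
import torch.nn as nn

class ViMoCoGANGeneratorSketch(nn.Module):
    """Illustrative generator: one content code per clip, one motion code
    per frame, and a one-hot view label conditioning every frame."""

    def __init__(self, dim_content=50, dim_motion=10, num_views=5,
                 dim_noise=100, frame_size=64):
        super().__init__()
        self.dim_noise, self.dim_motion = dim_noise, dim_motion
        self.num_views, self.frame_size = num_views, frame_size
        # Sub-network mapping noise to the content code, so the subject's
        # appearance stays fixed across frames (the abstract's replacement
        # for sampling the content vector directly).
        self.content_net = nn.Sequential(
            nn.Linear(dim_noise, dim_content),
            nn.ReLU(),
            nn.Linear(dim_content, dim_content),
        )
        # Recurrent network unrolling a motion code per frame, as in MoCoGAN.
        self.motion_rnn = nn.GRU(dim_motion, dim_motion, batch_first=True)
        # Toy frame decoder: [content | motion_t | view] -> flattened RGB frame.
        self.frame_gen = nn.Sequential(
            nn.Linear(dim_content + dim_motion + num_views,
                      3 * frame_size * frame_size),
            nn.Tanh(),
        )

    def forward(self, batch, num_frames, view_ids):
        z_c = self.content_net(torch.randn(batch, self.dim_noise))  # (B, C), fixed per clip
        eps = torch.randn(batch, num_frames, self.dim_motion)       # per-frame motion noise
        z_m, _ = self.motion_rnn(eps)                               # (B, T, M)
        z_v = nn.functional.one_hot(view_ids, self.num_views).float()  # (B, V) view condition
        frames = []
        for t in range(num_frames):
            h = torch.cat([z_c, z_m[:, t], z_v], dim=1)
            frames.append(self.frame_gen(h).view(batch, 3, self.frame_size, self.frame_size))
        return torch.stack(frames, dim=1)  # (B, T, 3, H, W)

# Example: four 16-frame clips, each rendered under a different viewpoint label.
G = ViMoCoGANGeneratorSketch()
clips = G(batch=4, num_frames=16, view_ids=torch.tensor([0, 1, 2, 3]))
print(clips.shape)  # torch.Size([4, 16, 3, 64, 64])

Conditioning every frame on the same view code follows the conditional-GAN recipe: a discriminator that also sees the view label can penalize clips whose rendered viewpoint disagrees with it, in the spirit of the modified objective the abstract describes.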

This material is based upon work supported by the Air Force Office of Scientific Research under award number FA2386-17-1-4056.

Author information

Corresponding author

Correspondence to Thanh-Hai Tran.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 805 KB)

Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Tran, TH., Bach, VD., Doan, HG. (2020). vi-MoCoGAN: A Variant of MoCoGAN for Video Generation of Human Hand Gestures Under Different Viewpoints. In: Cree, M., Huang, F., Yuan, J., Yan, W. (eds) Pattern Recognition. ACPR 2019. Communications in Computer and Information Science, vol 1180. Springer, Singapore. https://doi.org/10.1007/978-981-15-3651-9_11

  • DOI: https://doi.org/10.1007/978-981-15-3651-9_11

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-3650-2

  • Online ISBN: 978-981-15-3651-9

  • eBook Packages: Computer Science (R0)
