
IdlePose: A Dataset of Spontaneous Idle Motions

Published: 17 December 2021
DOI: 10.1145/3461615.3485400

ABSTRACT

When animating and giving life to a virtual character, it is important to consider the character's idling behaviours as well. Like any other animation, these can be recorded and handcrafted, or they can be generated by a motion model. Such models are in principle capable of producing varied motions automatically, alleviating the work of animators, who can then focus on more expressive behaviours. While there is growing interest in data-driven motion models, recording enough spontaneous human motion to train them is challenging: the setting in which the recording takes place is usually unnatural for the participants. In this paper, we present a data collection whose protocol was designed to elicit and capture natural, spontaneous human idle motions. The protocol hides the true intent of the data collection from the participants so that they genuinely have to wait. The dataset collected with this protocol is described and made available to the research community.

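For readers who want a concrete picture of what a "motion model built on data" can look like, below is a minimal, purely illustrative sketch of a next-pose predictor trained on idle-motion recordings. The file pattern idlepose/*.npy, the 25-joint 3D pose layout, the GRU architecture, and all hyperparameters are assumptions made for this example; they are not part of the IdlePose dataset specification or of the authors' method.

```python
# Minimal, illustrative sketch only: a next-pose predictor for idle motion.
# The data layout (each recording saved as a T x 25 x 3 array of joint
# positions in a .npy file) and all hyperparameters are assumptions for
# this example, not properties of the IdlePose dataset.
import glob

import numpy as np
import torch
import torch.nn as nn

JOINTS = 25          # assumed number of skeleton joints per frame
FEATS = JOINTS * 3   # x, y, z coordinates per joint


def load_sequences(pattern="idlepose/*.npy"):
    """Load each recording as a (T, FEATS) float32 tensor (hypothetical layout)."""
    sequences = []
    for path in glob.glob(pattern):
        poses = np.load(path).reshape(-1, FEATS).astype(np.float32)
        sequences.append(torch.from_numpy(poses))
    return sequences


class IdleMotionModel(nn.Module):
    """GRU that maps the pose at frame t to a prediction of the pose at t+1."""

    def __init__(self, feats=FEATS, hidden=256):
        super().__init__()
        self.gru = nn.GRU(feats, hidden, batch_first=True)
        self.head = nn.Linear(hidden, feats)

    def forward(self, poses, state=None):
        out, state = self.gru(poses, state)
        return self.head(out), state


def train(sequences, epochs=10):
    model = IdleMotionModel()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for seq in sequences:
            inputs = seq[None, :-1]    # frames 0 .. T-2, with a batch dimension
            targets = seq[None, 1:]    # frames 1 .. T-1
            predictions, _ = model(inputs)
            loss = loss_fn(predictions, targets)
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
    return model


if __name__ == "__main__":
    model = train(load_sequences())
```

Once trained, such a model can be rolled out autoregressively from a seed pose to produce a continuous, slightly varying idle animation, which is the kind of automatic generation the abstract describes.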

Published in

ICMI '21 Companion: Companion Publication of the 2021 International Conference on Multimodal Interaction
October 2021, 418 pages
ISBN: 9781450384711
DOI: 10.1145/3461615
Copyright © 2021 ACM. Publication rights licensed to ACM.

Publisher

Association for Computing Machinery, New York, NY, United States


Qualifiers

• Short paper
• Research
• Refereed limited

Acceptance Rates

Overall acceptance rate: 453 of 1,080 submissions, 42%
