DOI: 10.1145/3411764.3445358

Improving Viewing Experiences of First-Person Shooter Gameplays with Automatically-Generated Motion Effects

Published: 07 May 2021

Abstract

Today, millions of people enjoy watching video gameplay at eSports stadiums or at home. We seek a method that improves the experiences of gameplay spectators and viewers by presenting multisensory stimuli. Using a motion chair, we provide viewers watching first-person shooter (FPS) gameplay with motion effects generated automatically from the audiovisual stream. The motion effects express the game character’s movement and gunfire action. We describe the algorithms, developed with computer vision techniques and deep learning, that compute these motion effects. Through a user study, we demonstrate that our method of providing motion effects significantly improves the viewing experience of FPS gameplay. The contributions of this paper are the motion synthesis algorithms integrated for FPS games and the empirical evidence for the benefits of multisensory gameplay viewing.
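To make the vision side of the abstract concrete, here is a minimal, hypothetical sketch of how estimated camera motion could drive a motion chair. It is not the authors' implementation: it substitutes OpenCV's dense Farnebäck optical flow for the paper's optical-flow stage, treats the mean flow vector as a crude proxy for the first-person camera's yaw and pitch rates, and maps those rates to clipped roll and pitch commands. The gain, the ±10° tilt limit, and the file name are invented placeholders.

```python
# Hypothetical sketch (not the paper's pipeline): map camera motion
# estimated from consecutive FPS-gameplay frames to motion-chair commands.
import cv2
import numpy as np

def mean_image_motion(prev_gray, curr_gray):
    """Estimate average (dx, dy) image motion with dense Farneback optical flow."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,  # no initial flow estimate
        0.5, 3, 15, 3, 5, 1.2, 0)    # pyramid scale/levels, window, iters, poly params
    return float(np.mean(flow[..., 0])), float(np.mean(flow[..., 1]))

def chair_command(dx, dy, gain=0.05, max_deg=10.0):
    """Map mean image motion to clipped roll/pitch tilt angles (degrees)."""
    roll = float(np.clip(gain * dx, -max_deg, max_deg))   # horizontal pan -> roll
    pitch = float(np.clip(gain * dy, -max_deg, max_deg))  # vertical pan -> pitch
    return roll, pitch

# Usage sketch: grab two frames from a gameplay video and compute one command.
cap = cv2.VideoCapture("gameplay.mp4")  # placeholder file name
ok1, f1 = cap.read()
ok2, f2 = cap.read()
if ok1 and ok2:
    g1 = cv2.cvtColor(f1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(f2, cv2.COLOR_BGR2GRAY)
    print(chair_command(*mean_image_motion(g1, g2)))
cap.release()
```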
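For the gunfire action, the paper relies on a deep-learning sound event detector. As a hedged stand-in with a similar input/output contract, the sketch below flags impulsive audio onsets with a simple short-time energy ratio; the 20 ms frame length and the 4× jump threshold are illustrative values, not parameters from the paper.

```python
# Hypothetical stand-in for the paper's learned gunshot detector: flag
# frames whose short-time energy jumps sharply relative to the previous frame.
import numpy as np

def impulsive_onsets(samples, sr, frame_ms=20, ratio=4.0):
    """Return onset times (s) of gunfire-like impulsive sounds in a mono signal."""
    hop = int(sr * frame_ms / 1000)
    usable = (len(samples) // hop) * hop
    frames = np.asarray(samples[:usable], dtype=np.float64).reshape(-1, hop)
    energy = (frames ** 2).mean(axis=1)
    prev = np.concatenate(([energy[0]], energy[:-1]))  # previous-frame energy
    jumps = energy / np.maximum(prev, 1e-12)           # avoid divide-by-zero
    return [i * frame_ms / 1000.0 for i in np.where(jumps > ratio)[0]]

# Usage sketch: a quiet signal with one loud click at t = 0.5 s.
sr = 16000
x = 0.01 * np.random.randn(sr)
x[sr // 2 : sr // 2 + 80] += 0.9
print(impulsive_onsets(x, sr))  # expected: onset near 0.5
```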

Supplementary Material

MP4 File (3411764.3445358_videofigure.mp4)
Supplemental video
MP4 File (3411764.3445358_videopreview.mp4)
Preview video




          Published In

          CHI '21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems
          May 2021
          10862 pages
          ISBN:9781450380966
          DOI:10.1145/3411764


          Publisher

          Association for Computing Machinery

          New York, NY, United States



          Author Tags

          1. automatic generation
          2. game
          3. gameplay
          4. motion effect
          5. viewing experience

          Qualifiers

          • Research-article
          • Research
          • Refereed limited

          Funding Sources

          • Samsung Research Funding & Incubation Center of Samsung Electronics

          Conference

          CHI '21

          Acceptance Rates

Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%

          Article Metrics

• Downloads (last 12 months): 114
• Downloads (last 6 weeks): 8
          Reflects downloads up to 17 Jan 2025


          Cited By

• (2024) GAP FILLING ALGORITHM FOR MOTION CAPTURE DATA TO CREATE REALISTIC VEHICLE ANIMATION. Applied Computer Science 20, 3 (17–33). DOI: 10.35784/acs-2024-26. Online publication date: 30-Sep-2024
• (2024) SonoHaptics: An Audio-Haptic Cursor for Gaze-Based Object Selection in XR. Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology (1–19). DOI: 10.1145/3654777.3676384. Online publication date: 13-Oct-2024
• (2024) Video2Haptics: Converting Video Motion to Dynamic Haptic Feedback with Bio-Inspired Event Processing. IEEE Transactions on Visualization and Computer Graphics 30, 12 (7717–7735). DOI: 10.1109/TVCG.2024.3360468. Online publication date: Dec-2024
• (2024) Telemetry-Based Haptic Rendering for Racing Game Experience Improvement. IEEE Transactions on Haptics 17, 1 (72–79). DOI: 10.1109/TOH.2024.3357885. Online publication date: Jan-2024
• (2024) Effects of Contact Force on Vibrotactile Perceived Intensity Across the Upper Body. IEEE Transactions on Haptics 17, 1 (14–19). DOI: 10.1109/TOH.2024.3353761. Online publication date: Jan-2024
• (2024) Sound-to-Touch Crossmodal Pitch Matching for Short Sounds. IEEE Transactions on Haptics 17, 1 (2–7). DOI: 10.1109/TOH.2023.3338224. Online publication date: Jan-2024
• (2024) Method for Audio-to-Tactile Cross-Modality Generation Based on Residual U-Net. IEEE Transactions on Instrumentation and Measurement 73 (1–14). DOI: 10.1109/TIM.2023.3336453. Online publication date: 2024
• (2024) HapMotion: motion-to-tactile framework with wearable haptic devices for immersive VR performance experience. Virtual Reality 28, 1. DOI: 10.1007/s10055-023-00910-z. Online publication date: 9-Jan-2024
• (2024) Audiovisual-Haptic Simultaneity Perception Across the Body for Multisensory Applications. Haptics: Understanding Touch; Technology and Systems; Applications and Interaction (43–55). DOI: 10.1007/978-3-031-70058-3_4. Online publication date: 30-Jun-2024
• (2023) DrivingVibe: Enhancing VR Driving Experience using Inertia-based Vibrotactile Feedback around the Head. Proceedings of the ACM on Human-Computer Interaction 7, MHCI (1–22). DOI: 10.1145/3604253. Online publication date: 13-Sep-2023
