ABSTRACT
With the recent surge of interest in autonomous sensory meridian response (ASMR), a large volume of content that stimulates viewers' visual and auditory senses through various triggers is being consumed. However, because individuals respond differently to triggers, each user must manually search for content that suits their preferences. In this paper, we present an AI-based ASMR content generation service and conduct a preliminary user study to assess its feasibility and usability. Through a user study with 42 participants, we find that an AI-based customized ASMR content generation service can satisfy users. We also identify several issues that must be addressed for better user experience and satisfaction, including audio-video synchronization, content appropriateness, and noise level.
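The service described above can be pictured as a two-stage generation pipeline driven by user preferences. The sketch below is purely illustrative and is not the authors' implementation: the function names, the prompt format, and the two stand-in model callables (`text_to_video`, `video_to_audio`) are all assumptions, loosely modeled on pretrained text-to-video generators and visually guided sound-generation models.

```python
# Hypothetical sketch (not the paper's implementation): a user's trigger
# preferences drive a text-to-video stage, and the audio is generated
# conditioned on the video so the two streams stay roughly aligned.

def build_prompt(preferences):
    """Combine a user's preferred ASMR triggers into a generation prompt."""
    triggers = ", ".join(sorted(preferences))
    return f"close-up ASMR video with {triggers}"

def generate_content(preferences, text_to_video, video_to_audio):
    """Run both generation stages and return the paired streams.

    `text_to_video` and `video_to_audio` stand in for pretrained models;
    conditioning the audio model on the generated video is one plausible
    way to mitigate the audio-video synchronization issue noted above.
    """
    prompt = build_prompt(preferences)
    video = text_to_video(prompt)
    audio = video_to_audio(video)
    return {"prompt": prompt, "video": video, "audio": audio}

# Toy stand-ins so the sketch runs end to end.
demo = generate_content(
    {"tapping", "whispering"},
    text_to_video=lambda p: f"<video for: {p}>",
    video_to_audio=lambda v: f"<audio matched to {v}>",
)
print(demo["prompt"])  # close-up ASMR video with tapping, whispering
```

Keeping the model stages behind plain callables like this makes it easy to swap in different per-user generators without changing the pipeline logic.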
Tingle Just for You: A Preliminary Study of AI-based Customized ASMR Content Generation