DOI: 10.1145/3281505.3281534

Automatic transfer of musical mood into virtual environments

Published: 28 November 2018

Abstract

This paper presents a method that automatically transforms a virtual environment (VE) according to the mood of input music. We use machine learning to extract a mood from the music. We then select images exhibiting that mood and transfer their styles, photorealistically or artistically, to the textures of objects in the VE. Our user study results indicate that our method is effective in transferring valence-related aspects of mood, but not arousal-related ones. Our method can nevertheless provide novel experiences in virtual reality and speed up the production of VEs by automating the transformation process.
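
The abstract describes a three-stage pipeline: predict a mood from the input music, retrieve images annotated with a matching mood, and re-style VE textures using those images. Below is a minimal Python sketch of that pipeline under stated assumptions: mood is represented as a (valence, arousal) pair (the abstract refers to both dimensions), MFCC statistics stand in for the audio features, and a random forest stands in for the unspecified "machine learning" step. All function names and the mood-annotated image list are hypothetical; the page does not include the authors' implementation.

```python
# A minimal sketch of the pipeline in the abstract, NOT the authors' code.
# Assumptions: mood = (valence, arousal); MFCC audio features; a random
# forest regressor; nearest-neighbour retrieval from a mood-tagged image set.

import numpy as np
import librosa  # audio loading and MFCC extraction
from sklearn.ensemble import RandomForestRegressor

def audio_features(path):
    """Pool frame-level MFCCs into a single clip-level feature vector."""
    y, sr = librosa.load(path, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_mood_model(clip_paths, moods):
    """Regress (valence, arousal) pairs from clip-level audio features."""
    X = np.stack([audio_features(p) for p in clip_paths])
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X, np.asarray(moods))  # moods: array of shape (n_clips, 2)
    return model

def pick_style_image(mood, image_db):
    """image_db: list of ((valence, arousal), image_path) annotations.
    Return the image whose annotated mood is nearest to the music's."""
    dists = [np.linalg.norm(np.asarray(m) - mood) for m, _ in image_db]
    return image_db[int(np.argmin(dists))][1]

def apply_style_transfer(content, style):
    """Hypothetical placeholder for transferring the style image onto a
    VE texture (e.g. CNN-based style transfer); omitted for brevity."""
    raise NotImplementedError

def restyle_environment(model, music_path, image_db, texture_paths):
    """End-to-end pass: music -> mood -> style image -> re-styled textures."""
    mood = model.predict(audio_features(music_path)[None, :])[0]
    style = pick_style_image(mood, image_db)
    for tex in texture_paths:
        apply_style_transfer(content=tex, style=style)
```

Per the abstract, the final texture-restyling step can be either photorealistic or artistic; the stub above stands in for that step.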

Supplementary Material

MP4 File (a33-han.mp4)


Cited By

  • (2022) Mood-Driven Colorization of Virtual Indoor Scenes. IEEE Transactions on Visualization and Computer Graphics 28, 5 (2058–2068). DOI: 10.1109/TVCG.2022.3150513. Online publication date: May 2022.

Information

Published In

VRST '18: Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology
November 2018
570 pages
ISBN: 9781450360869
DOI: 10.1145/3281505
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. affect
  2. mood
  3. music
  4. transfer
  5. virtual environment

Qualifiers

  • Short-paper

Conference

VRST '18

Acceptance Rates

Overall Acceptance Rate 66 of 254 submissions, 26%
