Abstract
One of the most significant challenges with sequential data is identifying underlying patterns that are easily overlooked during visual inspection. To address this problem, transforming data into sound (sonification) has shown great potential for alerting humans to hidden patterns. Sonification is the process of mapping data into a non-speech audio format. With data analysis now the backbone of most infrastructures, sonification is gaining interest in fields such as data mining, human-computer interaction, exploratory data analysis, and musical interfaces. It offers a novel way of analyzing and interacting with data, and it provides visually impaired people with an accessible alternative. A considerable amount of work has been done on sonification and music generation; however, producing music from data using machine learning and deep learning techniques remains inadequately explored. Conventional sonification methods require human involvement and musical knowledge to produce a tune that is appealing to the ear, making them time-consuming and dependent on specialized expertise. In this paper, we develop a system that molds any time-dependent data into music while retaining the original characteristics of the data, using deep learning techniques such as LSTM (Long Short-Term Memory) networks. The goals of this research are (1) to generate music that is melodious and resembles music composed by humans, and (2) to help people not only auralize but also understand the associated data trend through the generated music. Quantitative and qualitative evaluations were used to validate our approach.
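To make the approach concrete, the sketch below shows one minimal way an LSTM can map a time series to notes: the series is windowed, the next value is quantized into a pitch index, and the network learns to predict that pitch from the preceding window, so the generated melody tracks the underlying data trend. This is an illustrative sketch in Keras, not the authors' published architecture; the window size, the 88-pitch vocabulary, and the quantization mapping are all assumptions.

# Hypothetical sketch: an LSTM that maps a windowed time series to
# pitch classes, illustrating the kind of model the abstract describes.
# Architecture, window size, and vocabulary are assumptions, not the
# paper's published configuration.
import numpy as np
from tensorflow.keras import layers, models

WINDOW = 16      # length of each input sequence (assumed)
N_PITCHES = 88   # piano-style pitch vocabulary (assumed)

# Toy stand-in for any time-dependent data, scaled to [0, 1].
series = np.random.rand(1000).astype("float32")

# Frame the series into (window, 1) inputs; map each next value to a
# pitch index by simple quantization so trend direction is preserved.
X = np.stack([series[i:i + WINDOW]
              for i in range(len(series) - WINDOW)])[..., None]
y = (series[WINDOW:] * (N_PITCHES - 1)).astype("int64")

model = models.Sequential([
    layers.Input(shape=(WINDOW, 1)),
    layers.LSTM(128),                                # learns temporal structure
    layers.Dense(N_PITCHES, activation="softmax"),   # next-note distribution
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)

# Predicted pitch indices: higher data values tend toward higher pitches,
# so the resulting melody rises and falls with the data.
notes = model.predict(X[:32], verbose=0).argmax(axis=1)

In practice the predicted indices would be rendered to audio, for example by writing them out as MIDI notes; the quantization step is the hedged design choice here, chosen so the pitch contour preserves the original data's shape.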