Unleashing the Potential of Data Analytics Through Music

Conference paper · Smart Multimedia (ICSM 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13497)

Abstract

One of the most significant challenges with sequential data is identifying underlying patterns that are easily overlooked during visual inspection. To overcome this problem, transforming data into sound (sonification) has shown great potential for alerting humans to hidden patterns. Sonification is the process of mapping data into a non-speech audio format. In today’s era, with data analysis being the backbone of most infrastructures, sonification is gaining interest in fields such as data mining, human-computer interaction, exploratory data analysis, and musical interfaces. It offers a novel way of analyzing and interacting with data and, in addition, provides visually impaired people with an accessible alternative. A considerable amount of work has been done on sonification and music generation; however, producing music from data using machine learning and deep learning techniques remains inadequately explored. Conventional sonification methods require human involvement and musical knowledge to produce a tune that is appealing to the ear, which is time-consuming and demands specialized expertise. In this paper, we develop a system that molds any time-dependent data into music while retaining the original characteristics of the data, using deep learning techniques such as LSTM (Long Short-Term Memory) networks. The goals of this research are to 1) generate music that is melodious and resembles music composed by humans, and 2) help people not only auralize but also understand the associated data trend through the generated music. Quantitative and qualitative evaluations were used to validate our approach.
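The conventional sonification the abstract contrasts with, parameter mapping, can be illustrated with a minimal sketch. This is not the paper's actual pipeline: the `sonify` helper and the MIDI range 48–84 (roughly C3–C6) are hypothetical choices, shown only to make the data-to-pitch mapping concrete.

```python
def sonify(values, low=48, high=84):
    """Parameter-mapping sonification sketch: linearly rescale a numeric
    time series onto MIDI note numbers, so higher data values become
    higher pitches and the melody's contour follows the data trend."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero for a constant series
    return [round(low + (v - lo) / span * (high - low)) for v in values]

# A rising trend maps to an ascending melodic contour.
notes = sonify([1.0, 2.5, 4.0, 3.0, 5.0])
```

The resulting note numbers could then be written to a MIDI file with a library such as mido or pretty_midi; the paper's contribution is to replace this hand-tuned mapping with a learned LSTM model so the output sounds melodious without manual musical design.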



Author information

Correspondence to Jatin Dawar.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Dawar, J., Raheja, P., Vashisth, U., Hajari, N., Cheng, I. (2022). Unleashing the Potential of Data Analytics Through Music. In: Berretti, S., Su, GM. (eds) Smart Multimedia. ICSM 2022. Lecture Notes in Computer Science, vol 13497. Springer, Cham. https://doi.org/10.1007/978-3-031-22061-6_25

  • DOI: https://doi.org/10.1007/978-3-031-22061-6_25

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-22060-9

  • Online ISBN: 978-3-031-22061-6

  • eBook Packages: Computer Science, Computer Science (R0)
