Music Visualization by Means of Comb Filter and Relaxation Time According to Human Perception

Conference paper
Soft Computing in Industrial Applications

Part of the book series: Advances in Intelligent and Soft Computing (AINSC, volume 75)

Abstract

Visualizing the time structure of an audio signal is the first step in any rhythm-synchronization system. To achieve this, the principal components of the musical beat, namely its strength, tempo, and onset time, must be detected. A filter bank of 180 comb filters spanning 60 bpm to 240 bpm is convolved with musical signals of different styles to produce a visualization that matches human perception. In each analysis interval, the tempo is extracted in the frequency domain and the corresponding beat onset time in the time domain. The key question is then how to bridge the gap between two adjacent beats, from the end of one beat to the onset of the next. A relaxation time of 100 milliseconds, a constant corresponding to human perception, is used and divided into two equal halves: during the first 50 ms the signal is stretched down to the relaxation point at zero, and during the second 50 ms it is grown back up to the next beat onset. The result is a continuous sinusoidal signal that visualizes the rhythm of the music. Simulation plots of various music styles, and of their overlaps, illustrate the proposed method.
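
To make the pipeline concrete, here is a minimal NumPy sketch of the two stages the abstract outlines: a bank of feedback comb filters swept over the 60-240 bpm range to pick the tempo, and a 100 ms relaxation envelope that bridges adjacent beats. The feedback-comb form, the constant-half-energy gain scaling, the 1 bpm filter spacing, the linear ramps, and all function names here are illustrative assumptions, not the authors' exact design.

```python
# Sketch of comb-filter tempo estimation plus the 100 ms relaxation
# bridge, under the assumptions stated above. NumPy only.
import numpy as np

def comb_energy(x, fs, bpm, half_time=1.5):
    """Output energy of a feedback comb filter tuned to one tempo.

    y[n] = x[n] + alpha * y[n - d], where d is one beat period in
    samples. alpha is scaled so every filter in the bank has the same
    half-energy time (an assumed, Scheirer-style choice); the filter
    whose delay matches the signal's beat period resonates most.
    """
    d = int(round(fs * 60.0 / bpm))
    alpha = 0.5 ** (d / (fs * half_time))
    y = x.astype(float)
    for n in range(d, len(y)):
        y[n] += alpha * y[n - d]
    return float(np.dot(y, y))

def estimate_tempo(x, fs, bpms=np.arange(60, 240)):
    """Sweep the 180-filter bank (assumed here as 60..239 bpm at 1 bpm
    steps) and return the tempo with the most output energy."""
    energies = [comb_energy(x, fs, b) for b in bpms]
    return int(bpms[int(np.argmax(energies))])

def bridge_beats(gap_s, fs, relax_s=0.100):
    """Amplitude envelope spanning the gap from one beat's end to the
    next onset. The 100 ms relaxation window is halved: the first
    50 ms stretches the signal down to zero (the relaxation point),
    the last 50 ms before the next onset grows it back up, keeping
    the rendered sinusoid continuous. Linear ramps are an assumption."""
    n = int(round(gap_s * fs))
    half = min(int(round(relax_s * fs)) // 2, n // 2)
    env = np.zeros(n)
    env[:half] = np.linspace(1.0, 0.0, half)       # relax to zero
    env[n - half:] = np.linspace(0.0, 1.0, half)   # grow to next onset
    return env

if __name__ == "__main__":
    fs = 4000
    x = np.zeros(4 * fs)
    x[::fs // 2] = 1.0                   # synthetic click track, 120 bpm
    print(estimate_tempo(x, fs), "bpm")  # expected: 120
```

On the synthetic click track the script should report 120 bpm; the bridge_beats envelope can then modulate a sinusoid between detected onsets to render the continuous visualization curve the abstract describes.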

Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Poostchi, M., Kamkar, I., Mohebbi, J. (2010). Music Visualization by Means of Comb Filter and Relaxation Time According to Human Perception. In: Gao, XZ., Gaspar-Cunha, A., Köppen, M., Schaefer, G., Wang, J. (eds) Soft Computing in Industrial Applications. Advances in Intelligent and Soft Computing, vol 75. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-11282-9_18

  • DOI: https://doi.org/10.1007/978-3-642-11282-9_18

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-11281-2

  • Online ISBN: 978-3-642-11282-9

  • eBook Packages: Engineering (R0)
