
Singing function

Exploring auditory graphs with a vowel based sonification

  • Original Paper
  • Journal on Multimodal User Interfaces

Abstract

In this paper we present SingingFunction, a vowel-based sonification strategy for mathematical functions. Within the research field of auditory graphs as representations of scalar functions, SingingFunction focuses on aspects of sound design that allow listeners to better distinguish function shapes as auditory gestalts. SingingFunction features the first vowel-based synthesis for function sonification and allows for a seamless integration of higher derivatives of the function into a single sound stream. We further present the results of a psychophysical experiment in which we compare the effectiveness of function sonifications that either map only f(x), or hierarchically include further information about the first derivative f′(x) or the second derivative f″(x). We also examine interactivity as an important factor and report interesting effects across all three sonification methods by comparing interactive exploration with simple playback of sonified functions. Finally, we discuss SingingFunction in the context of existing function sonifications and possible evaluation methods.
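To make the mapping idea concrete, the following is a minimal sketch of one way a function and its first derivative could be folded into a single parameter stream for vowel synthesis. This is an illustration only, not the authors' implementation: the reference vowels, formant values, pitch range, and the choice to morph vowel colour with the slope are all assumptions made here for the example.

```python
import numpy as np

# Rough textbook values (Hz) for the first two formants of two reference
# vowels; the morph between them is a hypothetical stand-in for the
# paper's vowel-based synthesis.
FORMANTS = {"u": (300.0, 870.0), "i": (270.0, 2290.0)}

def sonify_function(f, x0, x1, n=256, pitch_lo=200.0, pitch_hi=800.0):
    """Sample f on [x0, x1] and return (pitch_hz, formants_hz).

    Pitch follows the normalised function value f(x); the vowel morphs
    from /u/ (low slope) toward /i/ (high slope) with the normalised
    first derivative f'(x), so value and slope share one sound stream.
    """
    x = np.linspace(x0, x1, n)
    y = f(x)
    dy = np.gradient(y, x)  # numerical estimate of f'(x)

    # Normalise the function value to [0, 1] and map it to a pitch contour.
    y_n = (y - y.min()) / (y.max() - y.min() + 1e-12)
    pitch = pitch_lo + y_n * (pitch_hi - pitch_lo)

    # Normalise the slope to [0, 1] and interpolate the formant pairs,
    # giving an (n, 2) array of per-sample formant frequencies.
    d_n = (dy - dy.min()) / (dy.max() - dy.min() + 1e-12)
    fu, fi = np.array(FORMANTS["u"]), np.array(FORMANTS["i"])
    formants = fu[None, :] + d_n[:, None] * (fi - fu)[None, :]
    return pitch, formants

pitch, formants = sonify_function(np.sin, 0.0, 2 * np.pi)
```

The pitch and formant trajectories would then drive a formant synthesizer (the paper's own synthesis is done in SuperCollider); the point of the sketch is only that f(x) and f′(x) can be carried by independent perceptual dimensions, pitch and vowel quality, of one voice.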




Author information


Correspondence to Florian Grond.


About this article

Cite this article

Grond, F., Hermann, T. Singing function. J Multimodal User Interfaces 5, 87–95 (2012). https://doi.org/10.1007/s12193-011-0068-2
