Abstract
Chant and cantillation research is particularly interesting because it explores the transition from oral to written transmission of music. The goal of this work is to create web-based computational tools that can assist the study of how diverse recitation traditions, originating in primarily non-notated melodies, later became codified. One of the authors is a musicologist and music theorist who has guided the system design and development by providing manual annotations and participating in the design process. We describe novel content-based visualization and analysis algorithms that can be used for problem-seeking exploration of audio recordings of chant and recitation.
Notes
Archived examples from the Hungarian Academy of Sciences (1968–1973).
Archived examples from Hungary and Morocco from the Feher Music Center at Beth Hatefutsoth, Tel Aviv, Israel.
Godehard Joppich and Singphoniker: Gregorian Chant from St. Gallen (Georgsmarienhütte: CPO 999267-2, 1994).
Examples from Indonesia and Egypt: in Approaching the Koran (Ashland: White Cloud, 1999).
One cent is 1/100 of a semitone, corresponding to a frequency difference of about 0.06%.
Thinking statistically, our scale is related to a distribution giving the relative probability of each possible pitch. We can think of each F0 estimate (i.e., each sampled value of the F0 envelope) as a sample drawn from this unknown distribution, so our problem becomes one of estimating the unknown distribution given the samples.
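The estimation described above can be sketched as follows: convert each F0 sample from Hz to cents, fold the values into a single octave, and approximate the unknown pitch distribution with a normalized histogram. This is a minimal illustration, not the authors' implementation; the F0 values and the reference pitch are hypothetical, and a histogram is only one possible density estimator.

```python
import numpy as np

# Hypothetical F0 envelope samples in Hz, as a pitch tracker might produce.
f0_hz = np.array([196.0, 220.3, 246.9, 220.1, 261.6, 219.8, 196.2])

ref_hz = 440.0  # reference pitch (A4); the choice of reference only shifts the cents axis

# One cent is 1/1200 of an octave, so: cents = 1200 * log2(f / f_ref).
cents = 1200.0 * np.log2(f0_hz / ref_hz)

# Fold into one octave (pitch classes, 0-1200 cents) and estimate the
# distribution with a histogram: each F0 estimate is treated as a sample
# drawn from the unknown pitch distribution.
pitch_class_cents = np.mod(cents, 1200.0)
hist, edges = np.histogram(pitch_class_cents, bins=120,
                           range=(0.0, 1200.0), density=True)

# Peaks of this estimated distribution approximate the scale degrees
# actually used in the recording.
peak_cents = edges[np.argmax(hist)]
```

With more F0 samples, a kernel density estimate would give a smoother picture, but the histogram already makes the sampled-distribution view of the scale concrete.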
Acknowledgements
We would like to thank Matt Wright for initial work on this project and Emiru Tsunoo for the Marsyas implementation of dynamic time warping and similarity matrix computation used in the paper. We would also like to thank the Natural Sciences and Engineering Research Council (NSERC) and the Social Sciences and Humanities Research Council (SSHRC) of Canada for their financial support.
Cite this article
Ness, S.R., Biró, D.P. & Tzanetakis, G. Computer-assisted cantillation and chant research using content-aware web visualization tools. Multimed Tools Appl 48, 207–224 (2010). https://doi.org/10.1007/s11042-009-0357-x