Abstract
This paper describes a recently created multimodal biometric corpus of spontaneous, casual spoken interaction, recorded at Trinity College Dublin, the University of Dublin, Ireland, and currently being made available for wider dissemination. The paper focusses on the use of this corpus for training, and for learning about, the needs and limitations of an interactive spoken dialogue interface for human-machine communication. Since the corpus has only recently been released, the paper does not present research findings based on an analysis of its content; instead, it suggests methods and goals for annotating the material so that future researchers can use it to design more sensitive interfaces for speech synthesis in spoken dialogue systems. The paper is an extended version of an invited talk at the MA3HMI workshop.
Notes
1. Broadcasters or actors might be an exception to this general rule.
2. Think of the various ways of saying the word ‘yes’, for example, and the wide range of different meanings they represent!
Acknowledgements
This work was carried out in the Speech Communication Lab at Trinity College Dublin and was supported by SFI FastNet (project 09/IN.1/1263). The corpus collection was conducted as part of Shannon’s doctoral work, which was funded by the Università degli Studi di Genova and the Istituto Italiano di Tecnologia. The work was co-funded as part of the Japanese Government KAKEN research into MOSAIC: “Models of Spontaneous and Interactive Communication”. We are grateful to Fred Cummins and Brian Vaughan, and thankful for the annotation efforts of Emer Gilmartin and Céline De Looze.
Copyright information
© 2015 Springer International Publishing Switzerland
Cite this paper
Campbell, N., Hennig, S. (2015). Annotating the TCD D-ANS Corpus – A Multimodal Multimedia Monolingual Biometric Corpus of Spoken Social Interaction. In: Böck, R., Bonin, F., Campbell, N., Poppe, R. (eds) Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction. MA3HMI 2014. Lecture Notes in Computer Science(), vol 8757. Springer, Cham. https://doi.org/10.1007/978-3-319-15557-9_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-15556-2
Online ISBN: 978-3-319-15557-9
eBook Packages: Computer Science