Abstract
To assess the conversational proficiency of language learners, it is essential to elicit samples that are representative of the learner's full linguistic ability. This is achieved by adjusting oral interview questions to the learner's perceived proficiency level. An automatic system that elicits ratable samples must therefore incrementally predict the approximate proficiency from a few turns of dialog and adapt its question generation strategy according to this prediction. This study investigates the feasibility of such incremental adjustment of oral interview question difficulty during interaction between a virtual agent and a learner. First, we create an interview scenario with questions designed for different proficiency levels and collect interview data using a Wizard-of-Oz virtual agent. Next, we build an incremental scoring model and analyze its accuracy. Finally, we discuss future directions for the design of automated adaptive interview systems.
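The adaptive loop described above (score a few turns, estimate a level, pick the next question at that level) can be sketched as follows. This is a minimal illustration, not the paper's model: the level names, the 0–4 turn-score scale, the running-mean estimator, and the question bank are all hypothetical placeholders for whatever scorer and item bank an actual system would use.

```python
from statistics import mean

# Hypothetical CEFR-like levels, ordered from easiest to hardest.
LEVELS = ["A1", "A2", "B1", "B2", "C1"]

# Toy question bank: one illustrative question per level.
QUESTION_BANK = {
    "A1": "What is your name?",
    "A2": "What did you do last weekend?",
    "B1": "Can you describe your hometown?",
    "B2": "What are the pros and cons of remote work?",
    "C1": "How should society balance privacy and security?",
}

def estimate_level_index(turn_scores):
    """Map the running mean of per-turn scores (0-4) to a level index."""
    if not turn_scores:
        return 0  # no evidence yet: start with the easiest questions
    return min(round(mean(turn_scores)), len(LEVELS) - 1)

def next_question(turn_scores):
    """Pick the next interview question at the currently estimated level."""
    level = LEVELS[estimate_level_index(turn_scores)]
    return level, QUESTION_BANK[level]

# After three turns scored 2, 3, 3 the running estimate moves up to B2,
# so the agent would ask a harder question next.
level, question = next_question([2, 3, 3])
```

A real system would replace the running mean with the incremental scoring model trained on the collected interview data, but the control flow of question-difficulty adjustment is the same.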
Acknowledgements
This paper is based on results obtained from a project, JPNP20006 (“Online Language Learning AI Assistant that Grows with People”), subsidized by the New Energy and Industrial Technology Development Organization (NEDO).
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Saeki, M., Demkow, W., Kobayashi, T., Matsuyama, Y. (2022). A WoZ Study for an Incremental Proficiency Scoring Interview Agent Eliciting Ratable Samples. In: Stoyanchev, S., Ultes, S., Li, H. (eds) Conversational AI for Natural Human-Centric Interaction. Lecture Notes in Electrical Engineering, vol 943. Springer, Singapore. https://doi.org/10.1007/978-981-19-5538-9_13
Publisher Name: Springer, Singapore
Print ISBN: 978-981-19-5537-2
Online ISBN: 978-981-19-5538-9