
A WoZ Study for an Incremental Proficiency Scoring Interview Agent Eliciting Ratable Samples

  • Conference paper
  • First Online:
Conversational AI for Natural Human-Centric Interaction

Part of the book series: Lecture Notes in Electrical Engineering (LNEE, volume 943)

Abstract

To assess the conversational proficiency of language learners, it is essential to elicit samples that are representative of the learner’s full linguistic ability. This is achieved by adjusting oral interview questions to the learner’s perceived proficiency level. An automatic system that elicits ratable samples must incrementally predict the approximate proficiency from a few turns of dialog and adapt its question generation strategy according to this prediction. This study investigates the feasibility of such incremental adjustment of oral interview question difficulty during the interaction between a virtual agent and a learner. First, we create an interview scenario with questions designed for different proficiency levels and collect interview data using a Wizard-of-Oz virtual agent. Next, we build an incremental scoring model and analyze its accuracy. Finally, we discuss future directions for automated adaptive interview system design.
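The abstract describes a control loop in which a scoring model re-estimates the learner’s proficiency after each turn and the next question is drawn from the difficulty level matching that estimate. The sketch below illustrates only this loop; the scorer, its length-based feature, the question bank, and the CEFR-style level labels are hypothetical stand-ins for illustration and are not the models or materials used in the paper.

```python
# Minimal sketch of an incremental, adaptive interview loop.
# All names (IncrementalScorer, QUESTION_BANK, LEVELS) are illustrative
# assumptions, not taken from the paper.

from typing import Callable, List

# CEFR-like difficulty levels, used here for illustration only.
LEVELS = ["A1", "A2", "B1", "B2", "C1"]

QUESTION_BANK = {
    "A1": ["What is your name?", "Where do you live?"],
    "A2": ["What did you do last weekend?"],
    "B1": ["Can you describe your hometown to someone who has never visited?"],
    "B2": ["What are the advantages and disadvantages of studying abroad?"],
    "C1": ["How should governments balance economic growth with sustainability?"],
}


class IncrementalScorer:
    """Toy stand-in for an incremental proficiency scoring model."""

    def __init__(self) -> None:
        self.turn_features: List[float] = []

    def update(self, learner_response: str) -> int:
        # Placeholder feature: response length as a crude proficiency proxy.
        # A real model would use lexical, syntactic, and fluency features.
        self.turn_features.append(len(learner_response.split()))
        mean_len = sum(self.turn_features) / len(self.turn_features)
        # Map the running estimate to a level index (0 = A1, ..., 4 = C1).
        return min(int(mean_len // 10), len(LEVELS) - 1)


def run_interview(get_response: Callable[[str], str], num_turns: int = 5) -> str:
    """Ask num_turns questions, re-estimating the level after each answer."""
    scorer = IncrementalScorer()
    level_idx = 0  # start with the easiest question
    for turn in range(num_turns):
        level = LEVELS[level_idx]
        question = QUESTION_BANK[level][turn % len(QUESTION_BANK[level])]
        response = get_response(question)
        level_idx = scorer.update(response)  # incremental re-estimation
    return LEVELS[level_idx]  # final proficiency estimate


if __name__ == "__main__":
    # Console demo: the built-in input() plays the role of the learner.
    print("Estimated level:", run_interview(input))
```

In a Wizard-of-Oz setting such as the one described here, `get_response` would instead be backed by the virtual agent’s speech interface, with the wizard (and later the trained scorer) supplying the level estimate.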

Acknowledgements

This paper is based on results obtained from a project, JPNP20006 (“Online Language Learning AI Assistant that Grows with People”), subsidized by the New Energy and Industrial Technology Development Organization (NEDO).

Author information

Corresponding author

Correspondence to Mao Saeki.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Saeki, M., Demkow, W., Kobayashi, T., Matsuyama, Y. (2022). A WoZ Study for an Incremental Proficiency Scoring Interview Agent Eliciting Ratable Samples. In: Stoyanchev, S., Ultes, S., Li, H. (eds) Conversational AI for Natural Human-Centric Interaction. Lecture Notes in Electrical Engineering, vol 943. Springer, Singapore. https://doi.org/10.1007/978-981-19-5538-9_13

  • DOI: https://doi.org/10.1007/978-981-19-5538-9_13

  • Published:

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-19-5537-2

  • Online ISBN: 978-981-19-5538-9

  • eBook Packages: Computer Science, Computer Science (R0)
