Abstract
Dialogue act classification is an important step in understanding students’ utterances within tutorial dialogue systems. Machine-learned models of dialogue act classification hold great promise, and among these, unsupervised dialogue act classifiers offer the key benefit of eliminating the human annotation effort required to label corpora. In contrast to traditional evaluation approaches, which judge unsupervised dialogue act classifiers by their accuracy against manual labels, we present results of a study that evaluates these models by their performance within an end-to-end system evaluation. We compare two versions of a tutorial dialogue system for introductory computer science: one that relies on a supervised dialogue act classifier and one that relies on an unsupervised dialogue act classifier. A study with 51 students shows that both versions of the system achieve similar learning gains and user satisfaction. Additionally, we show that some incoming student characteristics are highly correlated with students’ perceptions of their experience during tutoring. This first end-to-end evaluation of an unsupervised dialogue act classifier within a tutorial dialogue system serves as a step toward acquiring tutorial dialogue management models in a fully automated, scalable way.
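To make the contrast in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of the difference between a supervised dialogue act classifier, which requires a manually labelled corpus, and an unsupervised one, which induces dialogue act groupings without annotation. The toy utterances, labels, and the specific algorithms (TF-IDF features, logistic regression, k-means) are illustrative assumptions only.

```python
# Sketch: supervised vs. unsupervised dialogue act classification.
# Utterances and labels below are invented toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

utterances = [
    "what does this error mean",      # question
    "i think the loop is wrong",      # statement
    "ok that makes sense",            # acknowledgement
    "how do i print the list",        # question
    "the function returns none",      # statement
    "got it thanks",                  # acknowledgement
]
labels = ["Q", "S", "ACK", "Q", "S", "ACK"]  # hand annotations (needed only for the supervised model)

X = TfidfVectorizer().fit_transform(utterances)

# Supervised: trained on the labelled corpus.
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))

# Unsupervised: clusters utterances with no annotation effort; the induced
# clusters must later be interpreted or mapped to tutorial strategies.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.labels_)
```

In an end-to-end evaluation such as the one described here, either kind of model would feed its predicted (or induced) dialogue act into the tutor's dialogue management policy, and the comparison is made on student outcomes rather than on label accuracy.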
Copyright information
© 2015 Springer International Publishing Switzerland
About this paper
Cite this paper
Ezen-Can, A., Boyer, K.E. (2015). A Tutorial Dialogue System for Real-Time Evaluation of Unsupervised Dialogue Act Classifiers: Exploring System Outcomes. In: Conati, C., Heffernan, N., Mitrovic, A., Verdejo, M. (eds) Artificial Intelligence in Education. AIED 2015. Lecture Notes in Computer Science, vol 9112. Springer, Cham. https://doi.org/10.1007/978-3-319-19773-9_11
DOI: https://doi.org/10.1007/978-3-319-19773-9_11
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-19772-2
Online ISBN: 978-3-319-19773-9