User and Noise Adaptive Dialogue Management Using Hybrid System Actions

  • Conference paper
In: Spoken Dialogue Systems for Ambient Environments (IWSDS 2010)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6392)

Abstract

In recent years, reinforcement-learning-based approaches have been widely used for policy optimization in spoken dialogue systems (SDS). A dialogue management policy is a mapping from dialogue states to system actions; that is, given the state of the dialogue, the policy determines the next action to be performed by the dialogue manager. So far, policy optimization has primarily focused on mapping the dialogue state to simple system actions (such as confirming or asking for one piece of information), and the possibility of using complex system actions (such as confirming or asking for several slots at the same time) has not been well investigated. In this paper we explore the use of complex (or hybrid) system actions for dialogue management and discuss the impact of user experience and channel noise on complex action selection. Our experimental results, obtained with simulated users, reveal that user- and noise-adaptive hybrid action selection can outperform dialogue policies restricted to simple actions.
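The hybrid-action idea described in the abstract can be illustrated with a minimal sketch (not the authors' implementation): a toy two-slot form-filling MDP in which a tabular Q-learner chooses among simple per-slot actions and one hybrid "ask all slots" action, against a simulated user whose replies are mis-recognized with a fixed noise probability. All names, rewards, and parameters below are illustrative assumptions.

```python
import random

# Hypothetical 2-slot form-filling task: each slot is "empty", "filled",
# or "confirmed". Simple actions address one slot at a time; the hybrid
# action asks for all slots in a single turn.
SLOTS = 2
SIMPLE_ACTIONS = [("ask", i) for i in range(SLOTS)] + [("confirm", i) for i in range(SLOTS)]
HYBRID_ACTIONS = [("ask_all", None)]
ACTIONS = SIMPLE_ACTIONS + HYBRID_ACTIONS

def step(state, action, noise=0.1, rng=random):
    """Simulated user turn: each answer is lost with probability `noise`."""
    state = list(state)
    kind, slot = action
    reward = -1  # per-turn cost encourages short dialogues
    if kind == "ask":
        if rng.random() > noise:
            state[slot] = "filled"
    elif kind == "confirm":
        if state[slot] == "filled" and rng.random() > noise:
            state[slot] = "confirmed"
    elif kind == "ask_all":
        for i in range(SLOTS):
            if state[i] == "empty" and rng.random() > noise:
                state[i] = "filled"
    done = all(s == "confirmed" for s in state)
    if done:
        reward += 20  # task-success bonus
    return tuple(state), reward, done

def train(episodes=5000, noise=0.1, alpha=0.2, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {}
    for _ in range(episodes):
        state = ("empty",) * SLOTS
        for _ in range(30):  # cap dialogue length
            if rng.random() < eps:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q.get((state, i), 0.0))
            nxt, r, done = step(state, ACTIONS[a], noise, rng)
            best_next = max(q.get((nxt, i), 0.0) for i in range(len(ACTIONS)))
            q[(state, a)] = q.get((state, a), 0.0) + alpha * (
                r + gamma * (0.0 if done else best_next) - q.get((state, a), 0.0))
            state = nxt
            if done:
                break
    return q
```

Retraining with different `noise` values lets one inspect whether the greedy policy in the initial state prefers the hybrid ask (efficient at low noise) or falls back to one-slot-at-a-time actions, which is the adaptivity question the paper studies.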




Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Chandramohan, S., Pietquin, O. (2010). User and Noise Adaptive Dialogue Management Using Hybrid System Actions. In: Lee, G.G., Mariani, J., Minker, W., Nakamura, S. (eds) Spoken Dialogue Systems for Ambient Environments. IWSDS 2010. Lecture Notes in Computer Science, vol 6392. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-16202-2_2

  • DOI: https://doi.org/10.1007/978-3-642-16202-2_2

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-16201-5

  • Online ISBN: 978-3-642-16202-2

  • eBook Packages: Computer Science (R0)
