
Multi-modal Dialogue State Tracking for Playing GuessWhich Game

  • Conference paper
Artificial Intelligence (CICAI 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14473)


Abstract

GuessWhich is a visual dialogue game in which a Questioner Bot (QBot) interacts with an Answer Bot (ABot) to identify a hidden image. QBot’s objective is to locate the concealed image solely through a series of visually grounded questions posed to ABot. Effectively modeling visual reasoning in QBot’s decision-making, however, remains a significant challenge: current approaches either use no visual information at all or condition decoding on a single real image sampled at each round, and both are inadequate for visual reasoning. To address this limitation, we propose an approach that performs visually grounded reasoning over a mental model of the undisclosed image. Within this framework, QBot learns to represent mental imagery, enabling robust visual reasoning by tracking the dialogue state, which comprises a collection of mental-imagery representations together with representations of the entities mentioned in the conversation. At each round, QBot reasons over the dialogue state to construct an internal representation, generates a relevant question, and updates both the dialogue state and the internal representation upon receiving an answer. Experimental results on the VisDial datasets (v0.5, v0.9, and v1.0) demonstrate the effectiveness of the proposed model, which achieves new state-of-the-art performance across all metrics and datasets.
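
To make the round-based procedure described above concrete, the following is a minimal, illustrative Python sketch of that loop. Every name here (DialogueState, generate_question, update_state, play_guesswhich, and the toy vector update) is a hypothetical stand-in, not the authors’ implementation: in the paper, the dialogue state and mental imagery are learned neural representations and the question generator is a trained decoder.

```python
# Toy sketch of the QBot round loop described in the abstract.
# All names and update rules are illustrative placeholders,
# NOT the paper's actual model, which uses learned neural modules.
from dataclasses import dataclass, field
from typing import List


@dataclass
class DialogueState:
    """Tracks mental-imagery and entity representations across rounds."""
    imagery: List[float] = field(default_factory=lambda: [0.0] * 8)  # toy mental-image vector
    entities: List[str] = field(default_factory=list)                # entities mentioned so far


def generate_question(state: DialogueState, round_idx: int) -> str:
    # In the paper this is a learned decoder conditioned on the dialogue
    # state; here we emit a placeholder question.
    focus = state.entities[-1] if state.entities else "the scene"
    return f"Round {round_idx}: what is near {focus}?"


def update_state(state: DialogueState, question: str, answer: str) -> DialogueState:
    # The real model refines the mental-imagery representation with the new
    # QA pair; this toy version records a pseudo-entity and nudges the vector.
    state.entities.append(answer.split()[-1])
    state.imagery = [v + 0.1 for v in state.imagery]
    return state


def play_guesswhich(num_rounds: int = 3) -> DialogueState:
    state = DialogueState()
    for t in range(1, num_rounds + 1):
        question = generate_question(state, t)
        answer = "a dog"  # in the game this would come from ABot
        state = update_state(state, question, answer)
    return state


if __name__ == "__main__":
    final_state = play_guesswhich()
    print(final_state.entities)  # -> ['dog', 'dog', 'dog']
```

After the final round, the tracked state would be used to pick QBot’s guess, e.g. by ranking candidate images against the mental-imagery representation.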




Acknowledgements

We thank the reviewers for their comments and suggestions. This work was partially supported by the National Natural Science Foundation of China (NSFC 62076032), Huawei Noah’s Ark Lab, the MoE-CMCC “Artificial Intelligence” Project (No. MCM20190701), the Beijing Natural Science Foundation (Grant No. 4204100), and the BUPT Excellent Ph.D. Students Foundation (No. CX2020309).

Author information


Corresponding author

Correspondence to Wei Pang.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Pang, W., Duan, R., Yang, J., Li, N. (2024). Multi-modal Dialogue State Tracking for Playing GuessWhich Game. In: Fang, L., Pei, J., Zhai, G., Wang, R. (eds) Artificial Intelligence. CICAI 2023. Lecture Notes in Computer Science, vol 14473. Springer, Singapore. https://doi.org/10.1007/978-981-99-8850-1_45


  • DOI: https://doi.org/10.1007/978-981-99-8850-1_45

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-8849-5

  • Online ISBN: 978-981-99-8850-1

  • eBook Packages: Computer Science, Computer Science (R0)
