Promoting human-AI interaction makes a better adoption of deep reinforcement learning: a real-world application in game industry

  • Published in: Multimedia Tools and Applications

Abstract

Deep reinforcement learning (DRL) has been widely employed in the game industry, mainly for building automatic game agents. While its performance and efficiency significantly outperform those of traditional approaches, the lack of model transparency constrains interaction between the model and human operators, degrading the practicality of DRL methods. In this paper, we propose to mitigate this human-AI interaction issue in a game-industry scenario. Existing methods either require repeated execution of DRL training or are designed for specific tasks, and are therefore not applicable to our deployment scenario. Because different games may use different DRL agents, we develop a post-hoc explanation framework that treats the original DRL model as a black box and is thus applicable to any DRL-based agent. Within the framework, a carefully selected student model, one for which explanation techniques are already well established, is trained to imitate the decision policy of the trained DRL model. Explanations generated for the student model then yield indirect but practical explanations of the original DRL model. With this information, the interaction between human operators and AI agents can be enhanced, benefiting the deployment of DRL. Finally, using a dataset from a real-world production game, we conduct experiments and user studies that demonstrate the effectiveness of the proposed procedure from both objective and subjective perspectives.
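The core loop of such a post-hoc framework — query the black-box agent, fit a transparent student on the resulting (state, action) pairs, then explain the student — can be sketched as follows. The stand-in policy, the three-feature states, and the decision-stump student below are illustrative assumptions for exposition only, not the models used in the paper:

```python
import random

# Hypothetical black-box DRL policy: maps a 3-feature state to an action.
# Here it secretly depends only on feature 0 (an illustrative stand-in).
def drl_policy(state):
    return 1 if state[0] > 0.5 else 0

# 1. Query the black-box agent to build a (state, action) imitation dataset.
random.seed(0)
states = [[random.random() for _ in range(3)] for _ in range(2000)]
actions = [drl_policy(s) for s in states]

# 2. Fit a transparent "student": try a depth-1 decision stump on each
#    feature/threshold pair and keep the one that best reproduces the
#    teacher's actions.
def stump_accuracy(feature, threshold):
    correct = sum((1 if s[feature] > threshold else 0) == a
                  for s, a in zip(states, actions))
    return correct / len(states)

feature, threshold = max(
    ((f, t / 20) for f in range(3) for t in range(1, 20)),
    key=lambda ft: stump_accuracy(*ft))

# 3. The student is directly explainable: it exposes which feature drives
#    the teacher's decisions, and at what threshold, without ever opening
#    the black box.
print(feature, threshold, stump_accuracy(feature, threshold))
```

In a production setting the stump would be replaced by a richer but still explainable student (e.g. a gradient-boosted tree ensemble paired with SHAP-style attributions, in line with the tree-explanation literature the paper builds on); the distillation-then-explain structure stays the same.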


Data Availability

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.


Author information

Corresponding author

Correspondence to Haoyu Liu.

Ethics declarations

The authors have no relevant financial or non-financial interests to disclose.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Hu, Z., Liu, H., Xiong, Y. et al. Promoting human-AI interaction makes a better adoption of deep reinforcement learning: a real-world application in game industry. Multimed Tools Appl 83, 6161–6182 (2024). https://doi.org/10.1007/s11042-023-15361-6

