Engaging Museum Visitors with AI-Generated Narration and Gameplay

  • Conference paper
  • In: ArtsIT, Interactivity and Game Creation (ArtsIT 2022)

Abstract

Raising interest in museum visits has become a challenge in recent years, especially among younger visitors, as the range of alternative entertainment options has grown overwhelming and increasingly attractive, interactive, and playful. To re-engage a wide audience with art and cultural heritage, we propose using artificial intelligence to make the presented artworks more engaging. By using natural language processing to generate a narrative around a selection of individual exhibits and presenting the story as a scavenger hunt, we connect the exhibits and make access more playful. Museum visitors are guided through the story by two characters, who also pose challenges to be solved in mini-games. The two characters were chosen as a living being (a puppy) and an embodied agent (a humanoid robot) to indicate whether an utterance is preformulated and fact-based (puppy) or generated and possibly partly made up (robot). By testing the prototype, we could confirm that the generated stories are plausible and exciting, that the participants became more interested in the presented items through the story and mini-games, and that the participants could distinguish whether an utterance was fact-based or fictional.
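
As a rough, hypothetical sketch of the approach described above (not the authors' implementation), the following Python fragment shows one way a scavenger-hunt narrative could be assembled from curator-provided exhibit facts and a text-generation backend. The Exhibit class, build_story_prompt, generate_tour, and the injected generate_text callable are illustrative assumptions introduced here; generate_text stands in for whichever large language model API the system actually calls.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Exhibit:
        title: str
        facts: str  # curator-provided, fact-based description

    def build_story_prompt(exhibits: List[Exhibit]) -> str:
        """Ask a language model to weave the selected exhibits into one
        connected scavenger-hunt story, chapter by chapter."""
        items = "\n".join(f"- {e.title}: {e.facts}" for e in exhibits)
        return (
            "Write an exciting short story for museum visitors that connects "
            "the following exhibits in the given order. End each chapter with "
            "a hint pointing to the next exhibit.\n" + items
        )

    def generate_tour(exhibits: List[Exhibit],
                      generate_text: Callable[[str], str]) -> List[dict]:
        """Pair each exhibit with a fact-based utterance (puppy role) and a
        generated, possibly partly fictional passage (robot role)."""
        story = generate_text(build_story_prompt(exhibits))
        chapters = [c.strip() for c in story.split("\n\n") if c.strip()]
        return [
            {
                "exhibit": e.title,
                "puppy": e.facts,   # preformulated, fact-based
                "robot": chapter,   # AI-generated, may be partly made up
            }
            for e, chapter in zip(exhibits, chapters)
        ]

Keeping the curated description and the generated chapter as separate fields mirrors the puppy/robot split described above: the visitor can always tell which utterance is curated fact and which is machine-generated fiction.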

Author information

Corresponding author

Correspondence to Wladimir Hettmann.

Copyright information

© 2023 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering

About this paper

Cite this paper

Hettmann, W., Wölfel, M., Butz, M., Torner, K., Finken, J. (2023). Engaging Museum Visitors with AI-Generated Narration and Gameplay. In: Brooks, A.L. (eds) ArtsIT, Interactivity and Game Creation. ArtsIT 2022. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 479. Springer, Cham. https://doi.org/10.1007/978-3-031-28993-4_15

  • DOI: https://doi.org/10.1007/978-3-031-28993-4_15

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-28992-7

  • Online ISBN: 978-3-031-28993-4

  • eBook Packages: Computer Science, Computer Science (R0)
