A data-driven passing interaction model for embodied basketball agents

Journal of Intelligent Information Systems

Abstract

Human beings can transition smoothly between individual and collaborative activities and can recognize these types of activity in other humans. Our long-term goal is to devise an agent that can function intelligently in an environment that requires frequent switching between individual and collaborative tasks. A basketball scenario is such an environment; however, no suitable interactive agents currently exist for this domain. In this paper we take a step towards intelligent basketball agents by contributing a data-driven, generalized model of passing interactions. We first collect data on human-human interaction in virtual basketball to discover patterns of behavior surrounding passing interactions. From these patterns we produce a model of rotation behavior before and after passes are executed. We then implement this model in a basketball agent and conduct an experiment with a human-agent team. Results show that the agent using the model can at least communicate better than a task-competent agent with limited communication, with participants rating it as able to recognize and express its intention. In addition, we analyze passing interactions using Herbert Clark’s joint activity theory and propose that its concepts, although purely theoretical, should be considered as a basis for agent design.
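
For illustration only, the minimal Python sketch below shows one way a pre-pass rotation rule could be encoded in an agent controller. The function names, turn rate, and alignment threshold are assumptions of this example; they are not quantities taken from the paper's data-driven model.

```python
import math

# Hypothetical sketch (not the paper's learned model): before releasing a
# pass, the agent rotates its body toward the intended receiver, one simple
# way to express "rotation behavior before a pass". Constants are assumed.

TURN_RATE = math.radians(120)       # assumed maximum turn speed (rad/s)
ALIGN_THRESHOLD = math.radians(10)  # assumed "facing the receiver" tolerance

def angle_diff(a, b):
    """Smallest signed difference a - b, wrapped to [-pi, pi)."""
    return (a - b + math.pi) % (2 * math.pi) - math.pi

def heading_to(from_pos, to_pos):
    """Angle (radians) from one 2D court position to another."""
    return math.atan2(to_pos[1] - from_pos[1], to_pos[0] - from_pos[0])

def pre_pass_rotation(agent, receiver_pos, dt):
    """Turn toward the receiver; return True once aligned and ready to pass."""
    target = heading_to(agent["pos"], receiver_pos)
    diff = angle_diff(target, agent["heading"])
    step = max(-TURN_RATE * dt, min(TURN_RATE * dt, diff))
    agent["heading"] += step
    return abs(angle_diff(target, agent["heading"])) < ALIGN_THRESHOLD

# Example: a passer at the origin turning toward a teammate at (3, 4).
agent = {"pos": (0.0, 0.0), "heading": 0.0}
while not pre_pass_rotation(agent, (3.0, 4.0), dt=0.05):
    pass  # in a real controller, other behaviors would run each tick
```

In the paper itself, the timing and extent of rotation before and after a pass are derived from the collected human-human interaction data rather than from fixed constants such as these.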

References

  • Anderson, M.L. (2003). Embodied cognition: A field guide. Artificial intelligence, 149(1), 91–130.

  • André, E., & Pelachaud, C. (2010). Interacting with embodied conversational agents. In Speech technology, Springer, pp 123–149.

  • Arias-Hernández, R., Dill, J., Fisher, B., & Green, T.M. (2011). Visual analytics and human-computer interaction. Interactions, 18(1), 51–55.

  • Bakkes, S., Spronck, P., & van den Herik, J. (2008). Rapid adaptation of video game AI. In IEEE Symposium on Computational Intelligence and Games, 2008. CIG ’08, pp. 79–86.

  • Baur, T., Damian, I., Gebhard, P., Porayska-Pomsta, K., & André, E. (2013). A job interview simulation: Social cue-based interaction with a virtual character. In 2013 International Conference on Social Computing (SocialCom), IEEE (pp. 220–227).

  • Bee, N., Wagner, J., André, E., Vogt, T., Charles, F., Pizzi, D., & Cavazza, M. (2010). Discovering eye gaze behavior during human-agent conversation in an interactive storytelling application. In International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction, ACM, ICMI-MLMI ’10, pp 9:1–9:8.

  • Bergmann, K., & Macedonia, M. (2013). A virtual agent as vocabulary trainer: Iconic gestures help to improve learners’ memory performance. In Intelligent Virtual Agents, Springer, 139–148.

  • Bevacqua, E., Mancini, M., Niewiadomski, R., & Pelachaud, C. (2007). An expressive ECA showing complex emotions. In Proceedings of the AISB annual convention, Newcastle, UK, 208–216.

  • Bevacqua, E., Prepin, K., Niewiadomski, R., de Sevin, E., & Pelachaud, C. (2010). Greta: Towards an interactive conversational virtual companion. In Artificial Companions in Society: perspectives on the Present and Future, 1–17.

  • Bianchi-Berthouze, N. (2013). Understanding the role of body movement in player engagement. Human-Computer Interaction, 28(1), 40–75.

  • Bradshaw, J.M., Feltovich, P.J., Johnson, M., Bunch, L., Breedy, M.R., Eskridge, T.C., Jung, H., Lott, J., & Uszok, A. (2008). Coordination in human-agent-robot teamwork. In CTS, 467–476.

  • Bradshaw, J.M., Feltovich, P.J., Johnson, M., Breedy, M.R., Bunch, L., Eskridge, T.C., Jung, H., Lott, J., Uszok, A., & van Diggelen, J. (2009). From tools to teammates: Joint activity in human-agent-robot teams. In M. Kurosu (Ed.), HCI (10), Springer, Lecture Notes in Computer Science, (Vol. 5619 pp. 935–944).

  • Bradshaw, J.M., Feltovich, P., & Johnson, M. (2012). Human-agent interaction. Handbook of Human-Machine Interaction, 283–302.

  • Cassell, J. (2000). Embodied conversational interface agents. Communications of the ACM, 43(4), 70–78.

  • Cassell, J., Nakano, Y.I., Bickmore, T.W., Sidner, C.L., & Rich, C. (2001). Non-verbal cues for discourse structure. In Proceedings of the 39th Annual Meeting on Association for Computational Linguistics, Association for Computational Linguistics, 114–123.

  • Cassell, J., Vilhjálmsson, H.H., & Bickmore, T. (2004). Beat: the behavior expression animation toolkit. In: Life-Like Characters, Springer, 163–185.

  • Cavazza, M., de la Cámara, R.S., Turunen, M., Gil, J.R., Hakulinen, J., Crook, N., & Field, D. (2010). How was your day?: An affective companion ECA prototype. In Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Association for Computational Linguistics, SIGDIAL ’10, 277–280.

  • Chrisley, R. (2003). Embodied artificial intelligence. Artificial Intelligence, 149(1), 131–150.

  • Clark, B., Fry, J., Ginzton, M., Peters, S., Pon-Barry, H., & Thomsen-Gray, Z. (2001). A multimodal intelligent tutoring system for shipboard damage control. In Proceedings of 2001 International Workshop on Information Presentation and Multimodal Dialogue (IPNMD-2001), 121–125.

  • Clark, H.H. (1996). Using Language. Cambridge University Press.

  • Cohn, G., Morris, D., Patel, S., & Tan, D. (2012). Humantenna: Using the body as an antenna for real-time whole-body interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, New York, NY, USA, CHI ’12, 1901–1910. doi:10.1145/2207676.2208330.

  • DeVault, D., Artstein, R., Benn, G., Dey, T., Fast, E., Gainer, A., Georgila, K., Gratch, J., Hartholt, A., Lhommet, M., Lucas, G., Marsella, S.C., Fabrizio, M., Nazarian, A., Scherer, S., Stratou, G., Suri, A., Traum, D., Wood, R., Xu, Y., Rizzo, A., & Morency, L.P. (2014). SimSensei kiosk: A virtual human interviewer for healthcare decision support. In Proceedings of the 13th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2014), International Foundation for Autonomous Agents and Multiagent Systems, 1061–1068.

  • Endrass, B., André, E., Rehm, M., Lipi, A.A., & Nakano, Y. (2011). Culture-related differences in aspects of behavior for virtual characters across Germany and Japan. In The 10th International Conference on Autonomous Agents and Multiagent Systems - Volume 2, International Foundation for Autonomous Agents and Multiagent Systems, AAMAS ’11, 441–448.

  • de Gelder, B. (2009). Why bodies? Twelve reasons for including bodily expressions in affective neuroscience. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1535), 3475–3484.

  • Gergle, D., Kraut, R.E., & Fussell, S.R. (2004). Language efficiency and visual technology minimizing collaborative effort with visual information. Journal of Language and Social Psychology, 23(4), 491–517.

  • Gratch, J., Rickel, J., André, E., Cassell, J., Petajan, E., & Badler, N.I. (2002). Creating interactive virtual humans: Some assembly required. IEEE Intelligent Systems, 17(4), 54–63.

  • Gruebler, A., & Suzuki, K. (2010). Measurement of distal EMG signals using a wearable device for reading facial expressions. In Engineering in Medicine and Biology Society (EMBC), 2010 Annual International Conference of the IEEE, IEEE, 4594–4597.

  • Hoque, M.E., Courgeon, M., Martin, J.C., Mutlu, B., & Picard, R.W. (2013). Mach: My automated conversation coach. In Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing, ACM, 697–706.

  • Inden, B., Malisz, Z., Wagner, P., & Wachsmuth, I. (2013). Timing and entrainment of multimodal backchanneling behavior for an embodied conversational agent. In Proceedings of the 15th ACM on International Conference on Multimodal Interaction, ACM, ICMI ’13, 181–188.

  • jMonkeyEngine (2014). jMonkeyEngine 3.0. http://jmonkeyengine.org/, [Online; accessed 23-May-2014].

  • Johnson, M., Feltovich, P.J., & Bradshaw, J.M. (2008). R2 where are you? Designing robots for collaboration with humans. Social Interaction with Intelligent Indoor Robots (SI3R).

  • Johnston, J. (2014). HTN and behaviour trees for improved coaching AI in RTS games. Game Behaviour, 1(1).

  • Kistler, F., André, E., Mascarenhas, S., Silva, A., Paiva, A., Degens, N., Hofstede, G.J., Krumhuber, E., Kappas, A., & Aylett, R. (2013). Traveller: An interactive cultural training system controlled by user-defined body gestures. In Human-Computer Interaction–INTERACT 2013, Springer, 697–704.

  • Klein, G., Woods, D.D., Bradshaw, J.M., Hoffman, R.R., & Feltovich, P.J. (2004). Ten challenges for making automation a "team player" in joint human-agent activity. IEEE Intelligent Systems, 19(6), 91–95.

  • Kleinsmith, A., & Bianchi-Berthouze, N. (2013). Affective body expression perception and recognition: A survey. IEEE Transactions on Affective Computing, 1–20.

  • Kopp, S., Jung, B., Leßmann, N., & Wachsmuth, I. (2003). Max - a multimodal assistant in virtual reality construction. KI - Künstliche Intelligenz, 4(03), 11–17.

  • Lala, D. (2012). VISIE: A spatially immersive environment for capturing and analyzing body expression in virtual worlds. Master’s thesis, Kyoto University.

  • Lala, D., Mohammad, Y., & Nishida, T. (2013). Unsupervised gesture recognition system for learning manipulative actions in virtual basketball. In Proceedings of the 1st International Conference on Human-Agent Interaction.

  • Lala, D., Mohammad, Y., & Nishida, T. (2014). A joint activity theory analysis of body interactions in multiplayer virtual basketball. In Proceedings of the 28th British HCI Conference, 62–71.

  • Lala, D., Nitschke, C., & Nishida, T. (2015). User perceptions of communicative and task-competent agents in a virtual basketball game. In S. Loiseau, J. Filipe, B. Duval, & J. van den Herik (Eds.), Proceedings of the 7th International Conference on Agents and Artificial Intelligence (Vol. 1, pp. 32–43). Scitepress.

  • Lance, B., & Marsella, S. (2010). Glances, glares, and glowering: How should a virtual human express emotion through gaze? Autonomous Agents and Multi-Agent Systems, 20(1), 50–69.

  • Lukander, K., Jagadeesan, S., Chi, H., & Müller, K. (2013). OMG!: A new robust, wearable and affordable open source mobile gaze tracker. In Proceedings of the 15th international conference on Human-computer interaction with mobile devices and services, ACM, 408–411.

  • Marsella, S., Gratch, J., & Petta, P. (2010). Computational models of emotion. In A Blueprint for Affective Computing: A Sourcebook and Manual, 21–46.

  • Microsoft Corporation (2014). Kinect for Windows. http://www.microsoft.com/en-us/kinectforwindows/.

  • Monk, A. (2003). Common ground in electronically mediated communication: Clark’s theory of language use. HCI models, theories, and frameworks: Toward a multidisciplinary science, 265–289.

  • Morency, L.P., de Kok, I., & Gratch, J. (2008). Predicting listener backchannels: A probabilistic multimodal approach. In Intelligent Virtual Agents, Springer, 176–190.

  • Niewiadomski, R., Bevacqua, E., Mancini, M., & Pelachaud, C. (2009). Greta: An interactive expressive ECA system. In Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems - Volume 2, International Foundation for Autonomous Agents and Multiagent Systems, 1399–1400.

  • Nova, N., Sangin, M., & Dillenbourg, P. (2008). Reconsidering Clark’s theory in CSCW. In 8th International Conference on the Design of Cooperative Systems (COOP’08).

  • Peters, C., Pelachaud, C., Bevacqua, E., Mancini, M., & Poggi, I. (2005). A model of attention and interest using gaze behavior. In Intelligent virtual agents, Springer, 229–240.

  • Ruhland, K., Andrist, S., Badler, J., Peters, C., Badler, N., Gleicher, M., Mutlu, B., & McDonnell, R. (2014). Look me in the eyes: A survey of eye and gaze animation for virtual agents and artificial systems. In Eurographics State-of-the-Art Report, 69–91.

  • Sänger, J., Müller, V., & Lindenberger, U. (2012). Intra- and interbrain synchronization and network properties when playing guitar in duets. Frontiers in Human Neuroscience, 6(312).

  • Schönauer, C., Pintaric, T., & Kaufmann, H. (2011). Full body interaction for serious games in motor rehabilitation. In Proceedings of the 2nd Augmented Human International Conference, ACM, New York, NY, USA, AH ’11, 4:1–4:8. doi:10.1145/1959826.1959830.

  • Schröder, M., Pammi, S., Gunes, H., Pantic, M., Valstar, M.F., Cowie, R., McKeown, G., Heylen, D., ter Maat, M., Eyben, F., et al. (2011). Come and have an emotional workout with sensitive artificial listeners! In 2011 IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011), IEEE, 646–646.

  • Schröder, M., Bevacqua, E., Cowie, R., Eyben, F., Gunes, H., Heylen, D., ter Maat, M., McKeown, G., Pammi, S., Pantic, M., Pelachaud, C., Schuller, B., de Sevin, E., Valstar, M., & Wöllmer, M. (2012). Building autonomous sensitive artificial listeners. IEEE Transactions on Affective Computing, 3(2), 165–183.

  • Schultz, K., Bratt, E.O., Clark, B., Peters, S., Pon-Barry, H., & Treeratpituk, P. (2003). A scalable, reusable spoken conversational tutor: SCoT. In Proceedings of the AIED 2003 Workshop on Tutorial Dialogue Systems: With a View toward the Classroom, 367–377.

  • Shapiro, D.G., McCoy, J., Grow, A., Samuel, B., Stern, A., Swanson, R., Treanor, M., & Mateas, M. (2013). Creating playable social experiences through whole-body interaction with virtual characters. In AIIDE.

  • Standley, T.S. (2010). Finding optimal solutions to cooperative pathfinding problems. In AAAI, vol 1, 28–29.

  • Vertegaal, R., & Ding, Y. (2002). Explaining effects of eye gaze on mediated group conversations: Amount or synchronization? In Proceedings of the 2002 ACM Conference on Computer Supported Cooperative Work, ACM, CSCW ’02, 41–48.

  • Vinciarelli, A., Pantic, M., Heylen, D., Pelachaud, C., Poggi, I., D’Errico, F., & Schröder, M. (2012). Bridging the gap between social animal and unsocial machine: A survey of social signal processing. IEEE Transactions on Affective Computing, 3(1), 69–87.

  • Wilson, M.L., Chi, E.H., Reeves, S., & Coyle, D. (2014). RepliCHI: The Workshop II. In CHI ’14 Extended Abstracts on Human Factors in Computing Systems, ACM, CHI EA ’14, 33–36.

  • Xiroku (2014). Xiroku Inc. http://www.xiroku.com/, [Online; accessed 23-May-2014].

  • Yngve, V.H. (1970). On getting a word in edgewise. In Chicago Linguistics Society, 6th Meeting, 567–578.

Acknowledgments

This research was supported by the Center of Innovation Program from the Japan Science and Technology Agency (JST), AFOSR/AOARD Grant No. FA2386-14-1-0005, and JSPS Grant-in-Aid for Scientific Research (A) Number 24240023.

Author information

Corresponding author

Correspondence to Divesh Lala.

About this article

Cite this article

Lala, D., Nishida, T. A data-driven passing interaction model for embodied basketball agents. J Intell Inf Syst 48, 27–60 (2017). https://doi.org/10.1007/s10844-015-0386-z
