DOI: 10.1145/1329125.1329132

What can I do with this?: finding possible interactions between characters and objects

Published: 14 May 2007

Abstract

Virtual environments are often populated by autonomous synthetic agents capable of acting and interacting with other agents as well as with humans. These virtual worlds also contain objects that afford different uses and types of interaction. Agents therefore need to identify the possible interactions with the objects in their environment and assess the consequences of those interactions. This is particularly difficult when an agent has never interacted with some of the objects before. This paper describes SOTAI (Smart ObjecT-Agent Interaction), a framework that helps agents identify possible interactions with unknown objects based on their past experience. In SOTAI, agents learn regularities of the world, such as object attributes and frequent relations between attributes: they gather qualitative symbolic descriptions from their sensory data while interacting with objects and perform inductive reasoning to acquire concepts about those objects. We implemented an initial case study, and the results show that our agents are able to acquire valid conceptual knowledge.
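To make the learning loop in the abstract concrete, below is a minimal, hypothetical Python sketch of inductive concept acquisition from qualitative observations: each interaction episode is stored as symbolic attribute-value pairs together with the action taken and its outcome, and a concept is induced as the attributes shared by every episode with the same action-outcome pair. All function names and data here are illustrative assumptions, not the SOTAI implementation.

```python
# A minimal, hypothetical sketch of the kind of inductive concept
# acquisition the abstract describes. None of the names or data below
# come from the SOTAI paper; they are illustrative assumptions.
from collections import defaultdict

def observe(attributes, action, outcome):
    """Record one interaction episode as qualitative symbolic facts."""
    return {"attrs": frozenset(attributes.items()),
            "action": action,
            "outcome": outcome}

def induce_concepts(episodes):
    """Group episodes by (action, outcome) and keep only the attributes
    common to every episode in the group: a naive inductive
    generalization over the agent's past experience."""
    groups = defaultdict(list)
    for ep in episodes:
        groups[(ep["action"], ep["outcome"])].append(ep["attrs"])
    return {key: frozenset.intersection(*attr_sets)
            for key, attr_sets in groups.items()}

episodes = [
    observe({"shape": "handle", "weight": "light", "rigid": "yes"}, "grasp", "held"),
    observe({"shape": "handle", "weight": "light", "rigid": "no"}, "grasp", "held"),
    observe({"shape": "flat", "weight": "heavy", "rigid": "yes"}, "grasp", "slipped"),
]

for (action, outcome), attrs in induce_concepts(episodes).items():
    print(f"{action} -> {outcome}: {dict(attrs)}")
```

On this toy data the sketch generalizes that grasping led to holding whenever the object had a handle and was light, regardless of rigidity; that is, a frequent relation between attributes of the kind the abstract mentions.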



    Published In

    AAMAS '07: Proceedings of the 6th international joint conference on Autonomous agents and multiagent systems
    May 2007
    1585 pages
ISBN: 9788190426275
DOI: 10.1145/1329125

    Sponsors

    • IFAAMAS

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. believable qualities
    2. human-like
    3. learning agents
    4. lifelike
    5. synthetic agents

    Qualifiers

    • Research-article

    Conference

AAMAS '07

    Acceptance Rates

    Overall Acceptance Rate 1,155 of 5,036 submissions, 23%


    Cited By

• (2017) "ALET: Agents Learning their Environment through Text", Computer Animation and Virtual Worlds, 28(3-4). DOI: 10.1002/cav.1759. Online: 21 Apr 2017
• (2015) "Automated Generation of Plausible Agent Object Interactions", Intelligent Virtual Agents, pp. 295-309. DOI: 10.1007/978-3-319-21996-7_32. Online: 1 Aug 2015
• (2013) "Contact Surface Graph", Proceedings of the 2013 International Symposium on Ubiquitous Virtual Reality, pp. 1-4. DOI: 10.1109/ISUVR.2013.11. Online: 10 Jul 2013
• (2010) "Real-time sensory pattern mining for autonomous agents", Proceedings of the 6th International Conference on Agents and Data Mining Interaction, pp. 71-83. DOI: 10.5555/1880493.1880502. Online: 11 May 2010
• (2008) "Learning to interact", Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems - Volume 3, pp. 1257-1260. DOI: 10.5555/1402821.1402845. Online: 12 May 2008
