Abstract
In order for an agent to achieve its objectives, make sound decisions, and communicate and collaborate effectively with others, it must have high-quality representations. Representations can encapsulate objects, situations, experiences, decisions, and behavior, among other things. Our interest is in designing high-quality representations, so it makes sense to ask of any representation: what does it represent, why is it represented, how is it represented, and, importantly, how well is it represented? This paper identifies the need for a better understanding of the grounding process as key to answering these questions. The lack of a comprehensive understanding of grounding is a major obstacle to developing genuinely intelligent systems that can construct their own representations as they pursue their objectives. We develop a framework that provides a powerful tool for describing, dissecting, and inspecting grounding capabilities, with the flexibility needed to conduct meaningful and insightful analysis and evaluation. The framework is based on a set of clearly articulated principles and has three main applications. First, it can be used at both theoretical and practical levels to analyze the grounding capabilities of a single system and to evaluate its performance. Second, it can be used to conduct comparative analysis and evaluation of grounding capabilities across a set of systems. Third, it offers a practical guide to assist the design and construction of high-performance systems with effective grounding capabilities.
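As a purely illustrative sketch of the second application (comparative evaluation of grounding capabilities across systems), the fragment below scores a set of systems along dimensions derived from the four questions above. The `GroundingProfile` structure, the 0–1 scoring scale, and the weighting scheme are assumptions introduced for this example; they are not the framework's own formalization.

```python
from dataclasses import dataclass, field

# Hypothetical grounding dimensions, named after the four questions in the
# abstract. The scoring scale (0.0-1.0) and weights are assumptions for
# illustration only.
DIMENSIONS = ("what", "why", "how", "how_well")


@dataclass
class GroundingProfile:
    """Per-dimension grounding scores for a single system."""
    system: str
    scores: dict = field(default_factory=dict)

    def overall(self, weights=None):
        """Weighted average of the dimension scores."""
        weights = weights or {d: 1.0 for d in DIMENSIONS}
        total = sum(weights.values())
        return sum(self.scores.get(d, 0.0) * weights[d] for d in DIMENSIONS) / total


def compare(profiles):
    """Rank systems by overall grounding score (comparative evaluation)."""
    return sorted(profiles, key=lambda p: p.overall(), reverse=True)


if __name__ == "__main__":
    robot = GroundingProfile("robot soccer agent",
                             {"what": 0.8, "why": 0.5, "how": 0.7, "how_well": 0.6})
    chatbot = GroundingProfile("dialogue agent",
                               {"what": 0.6, "why": 0.4, "how": 0.5, "how_well": 0.5})
    for p in compare([robot, chatbot]):
        print(f"{p.system}: {p.overall():.2f}")
```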