Issues on Aligning the Meaning of Symbols in Multiagent Systems

Chapter in: New Challenges in Computational Collective Intelligence

Part of the book series: Studies in Computational Intelligence (SCI, volume 244)

Abstract

The autonomy of a multiagent system with respect to its external environment can be greatly extended through the incorporation of a language emergence mechanism. In such a system the population of agents autonomously learns, adapts and optimizes its semantics to fit the available perception mechanisms and the external environment, i.e. it dynamically adapts the language in use to the shape of the external world, the assumed perception mechanism and intra-population interactions. For instance, the symbols in use should denote only the directly available states of the external world, as otherwise they carry no meaning for the agents. Further, the representation of a language sign denoting a given meaning can be adapted to the demands of communication, e.g. by lowering energy utilization: shorter signs should denote more frequent symbols. Additionally, the proposed approach to language emergence is applied in the area of tagging systems, where it helps to solve and automate several problems.
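
To make the two adaptation ideas above concrete, the sketch below is a minimal, hypothetical naming-game-style simulation (the states, frequencies and update rule are illustrative assumptions, not the model described in the chapter): agents repeatedly interact about directly available world states and adopt one another's signs until the population converges on a shared lexicon, while newly coined signs are shorter for states that are perceived more frequently.

```python
import random
from collections import Counter

# Illustrative naming-game-style sketch; the states, frequencies and
# hearer-adopts-speaker rule below are assumptions for this example only.

WORLD_STATES = ["near", "far", "left", "right"]   # hypothetical observable states
STATE_FREQS = [0.5, 0.3, 0.15, 0.05]              # how often each state is perceived

# Rank states by frequency: rank 0 = most frequent = shortest sign.
RANK = {s: i for i, (s, _) in
        enumerate(sorted(zip(WORLD_STATES, STATE_FREQS), key=lambda x: -x[1]))}

def coin_sign(state):
    """Invent a new sign whose length grows with the state's frequency rank."""
    return "".join(random.choice("abcd") for _ in range(RANK[state] + 1))

class Agent:
    def __init__(self):
        self.lexicon = {}                          # state -> currently preferred sign

    def name(self, state):
        if state not in self.lexicon:              # no convention yet: coin one
            self.lexicon[state] = coin_sign(state)
        return self.lexicon[state]

    def align(self, state, sign):
        self.lexicon[state] = sign                 # adopt the speaker's sign

def simulate(rounds=5000, n_agents=10):
    agents = [Agent() for _ in range(n_agents)]
    for _ in range(rounds):
        speaker, hearer = random.sample(agents, 2)
        state = random.choices(WORLD_STATES, STATE_FREQS)[0]
        hearer.align(state, speaker.name(state))
    # After enough interactions the population shares (mostly) one sign per state.
    return {s: Counter(a.lexicon.get(s) for a in agents) for s in WORLD_STATES}

if __name__ == "__main__":
    for state, usage in simulate().items():
        print(state, dict(usage))
```

In this toy setting the hearer simply overwrites its sign with the speaker's, which is the crudest possible alignment rule; richer language-game models typically keep competing signs with scores and prune the losing ones rather than overwriting outright.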

Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Lorkiewicz, W., Katarzyniak, R.P. (2009). Issues on Aligning the Meaning of Symbols in Multiagent Systems. In: Nguyen, N.T., Katarzyniak, R.P., Janiak, A. (eds) New Challenges in Computational Collective Intelligence. Studies in Computational Intelligence, vol 244. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-03958-4_19

  • DOI: https://doi.org/10.1007/978-3-642-03958-4_19

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-03957-7

  • Online ISBN: 978-3-642-03958-4

  • eBook Packages: Engineering, Engineering (R0)
