Guarantees for autonomy in cognitive agent architecture

  • Conference paper
Intelligent Agents (ATAL 1994)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 890)

Abstract

The paper analyses which features of an agent architecture determine its Autonomy. I claim that Autonomy is a relational concept. First, Autonomy from the environment (stimuli) is analysed, and the notion of Cognitive Reactivity is introduced to show how the cognitive architecture of the agent guarantees Stimulus-Autonomy and deals with the “Descartes problem” of the external “causes” of behaviour. Second, Social Autonomy (Autonomy from others) is analysed, and a distinction between Executive Autonomy and Motivational Autonomy is introduced. Some limitations that current postulates on Rational interacting agents could impose on their Autonomy are discussed. Architectural properties and postulates that guarantee sufficient Autonomy in cognitive social agents are defined. These properties give the agent control over its own mental states (Beliefs and Goals). In particular, a “double filter” architecture against influence is described. What guarantees the agent's control over its own Beliefs is specified: relevance, credibility, and introspective competence. Particular attention is devoted to the “non-negotiability of Beliefs” (the “Pascal law”): the fact that one cannot change another agent's Beliefs by means of promises or threats. What guarantees the agent's control over its Goals is specified: self-interested goal adoption and indirect influencing. Finally, it is argued how and why social dependence and power relations should limit the agent's Autonomy.
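The “double filter” idea from the abstract can be sketched in code: incoming social influence must pass a belief filter (relevance and credibility) and a separate goal filter (self-interested adoption) before it can alter the agent's mental states, and neither filter responds to promises or threats. The following is a minimal illustrative sketch, not the paper's formalism; all class, method, and parameter names (`Agent`, `belief_filter`, `goal_filter`, the 0.5 credibility threshold) are invented for this example.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    beliefs: set = field(default_factory=set)
    goals: set = field(default_factory=set)
    interests: set = field(default_factory=set)  # what the agent itself cares about

    def belief_filter(self, claim: str, credibility: float, relevant: bool) -> bool:
        # First filter: Beliefs change only on epistemic grounds (relevance
        # and source credibility) -- never because of promises or threats
        # (the "Pascal law": beliefs are non-negotiable).
        return relevant and credibility >= 0.5

    def goal_filter(self, proposed_goal: str) -> bool:
        # Second filter: a goal proposed by another agent is adopted only if
        # it serves some interest of the agent itself (self-interested
        # goal adoption).
        return any(interest in proposed_goal for interest in self.interests)

    def receive_claim(self, claim: str, credibility: float, relevant: bool) -> None:
        if self.belief_filter(claim, credibility, relevant):
            self.beliefs.add(claim)

    def receive_request(self, proposed_goal: str) -> None:
        if self.goal_filter(proposed_goal):
            self.goals.add(proposed_goal)
```

Under this sketch, an agent whose interests include "deliver" would adopt a requested goal like "deliver parcel" but reject "carry contraband", and would accept a relevant, credible claim while ignoring a low-credibility rumor; influence reaches the agent's mental states only through its own filters.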


References

  1. A. Basso, F. Mondada, C. Castelfranchi. Reactive Goal Activation in Intelligent Autonomous Agent Architecture. In Proceedings of AIA'93 — First International Round-Table on “Abstract Intelligent Agent”, ENEA, Roma, January 25–27, 1993.

  2. R. A. Brooks. A robot that walks. Emergent behaviours from a carefully evolved network. Tech. Rep., Artificial Intelligence Laboratory. Cambridge, Mass.: MIT, 1989.

  3. C. Castelfranchi. No More Cooperation, Please! In Search of the Social Structure of Verbal Interaction. In A.I. and Cognitive Science Perspectives on Communication, A. Ortony, J. Slack, O. Stock (eds.). Heidelberg, Germany: Springer, 1992.

  4. C. Castelfranchi. Principles of Bounded Autonomy. Modelling Autonomous Agents in a Multi-Agent World. Pre-Proceedings of the First Round-Table Discussion on “Abstract Intelligent Agent” — AIA'93. Roma: ENEA; TR-Ip/CNR, 1993.

  5. C. Castelfranchi, M. Miceli, A. Cesta. Dependence relations among autonomous agents. In Decentralized AI — 3, Y. Demazeau, E. Werner (eds.), 215–31. Amsterdam: Elsevier, 1992.

  6. N. Chomsky. Language and Problems of Knowledge: The Nicaraguan Lectures. Cambridge, Mass.: MIT Press, 1988.

  7. P. R. Cohen, H. J. Levesque. Rational Interaction as the Basis for Communication. Technical Report no. 89, CSLI, Stanford, 1987. A revised version in Intentions in Communication, P. R. Cohen, J. Morgan, M. E. Pollack (eds.), 33–71. Cambridge, Mass.: MIT Press, 1990.

  8. P. R. Cohen, H. J. Levesque. Intention is choice with commitment. Artificial Intelligence, 42, 1990, 213–261.

  9. D. Connah, P. Wavish. An Experiment in Cooperation. In Decentralized AI, Y. Demazeau, J. P. Mueller (eds.), 197–212. Amsterdam: North-Holland, Elsevier, 1990.

  10. R. Conte, C. Castelfranchi. Cognitive and Social Action. London: UCL Press (in press).

  11. A. A. Covrigaru, R. K. Lindsay. Deterministic autonomous systems. AI Magazine, Fall 1991, 110–17.

  12. A. F. Dragoni. A Model for Belief Revision in a Multi-Agent Environment. In Decentralized AI — 3, Y. Demazeau, E. Werner (eds.), 215–31. Amsterdam: Elsevier, 1992.

  13. E. H. Durfee, V. R. Lesser, D. D. Corkill. Cooperation through communication in a problem-solving network. In Distributed Artificial Intelligence, M. N. Huhns (ed.), 29–58. San Mateo, CA: Kaufmann, 1987.

  14. J. R. Galliers. A strategic framework for multi-agent cooperative dialogue. In Proceedings of the 8th European Conference on Artificial Intelligence, 415–20. London: Pitman, 1988.

  15. J. R. Galliers. Modelling Autonomous Belief Revision in Dialogue. In Decentralized AI — 2, Y. Demazeau, J. P. Mueller (eds.), 231–43. Amsterdam: Elsevier, 1991.

  16. P. Gärdenfors. The Dynamics of Belief Systems: Foundations vs. Coherence Theories. Revue Internationale de Philosophie, 1989.

  17. G. Gaspar. Communication and Belief Changes in a Society of Agents. In Decentralized AI — 2, Y. Demazeau, J. P. Mueller (eds.), 245–55. Amsterdam: Elsevier, 1991.

  18. N. R. Jennings. Commitments and conventions: The foundation of coordination in multi-agent systems. The Knowledge Engineering Review, 3, 1993, 223–50.

  19. C. Hewitt, P. de Jong. Open systems. In Perspectives on Conceptual Modeling, M. L. Brodie, J. Mylopoulos, J. W. Schmidt (eds.). New York: Springer, 1983.

  20. C. Hewitt. Open information systems semantics for distributed artificial intelligence. Artificial Intelligence, 47, 1991, 79–106.

  21. B. A. Huberman, S. H. Clearwater, T. Hogg. Cooperative Solution of Constraint Satisfaction Problems. Science, 254, Nov. 1991, 1181–2.

  22. G. Kiss, H. Reichgelt. Towards a Semantics of Desires. In Decentralized AI — 3, E. Werner, Y. Demazeau (eds.), 115–27. Amsterdam: Elsevier, 1992.

  23. I. Levi. The Enterprise of Knowledge. Cambridge, Mass.: MIT Press, 1980.

  24. R. S. Michalski. Development of Learning Systems Able to Modify Their Knowledge Representation According to Their Goals. In AIA'94 — Second International Round-Table on “Abstract Intelligent Agent”, ENEA, Roma, February 23–25, 1994.

  25. G. Miller, E. Galanter, K. H. Pribram. Plans and the Structure of Behavior. New York: Holt, Rinehart & Winston, 1960.

  26. A. Rosenblueth, N. Wiener. Purposeful and Non-Purposeful Behavior. In Modern Systems Research for the Behavioral Scientist, W. Buckley (ed.). Chicago: Aldine, 1968.

  27. J. Sichman, R. Conte, C. Castelfranchi, Y. Demazeau. A Social Reasoning Mechanism Based on Dependence Networks. In ECAI'94, Amsterdam, August 1994.

  28. E. S. K. Yu, J. Mylopoulos. An actor dependency model of organizational work with application to business process reengineering. In Proceedings of COOCS'93 — Conference on Organizational Computing Systems, Milpitas, CA, USA, 1993.

Editor information

Michael J. Wooldridge, Nicholas R. Jennings

Copyright information

© 1995 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Castelfranchi, C. (1995). Guarantees for autonomy in cognitive agent architecture. In: Wooldridge, M.J., Jennings, N.R. (eds) Intelligent Agents. ATAL 1994. Lecture Notes in Computer Science, vol 890. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-58855-8_3

  • DOI: https://doi.org/10.1007/3-540-58855-8_3

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-58855-9

  • Online ISBN: 978-3-540-49129-3
