Adaptivity in Agent-Based Systems via Interplay between Action Selection and Norm Selection

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 2614))

Abstract

Beyond the ability to adapt to an environment, Self-Adaptive Software (SAS) embodies the capacity and the initiative to adapt its own behavior. Adaptation is driven by the need to maintain a desired, or reference, relationship between a set of input and output signals. We may loosely divide SAS into adaptor and adapted components. The adapted component maintains an ongoing relationship with the environment, while the adaptor detects and evaluates the need for change in the adapted component's operation. In anthropomorphic terms, this detection and evaluation involves the cognitive processes of introspection and assimilation; an artifact, however, may need only a supervisory control module. In a hierarchical system, the idea of adaptor and adapted can be extended across several levels, with each higher level adapting the function of the level below it. Hierarchical architectural systems are well studied; self-adaptive software, however, is more than hierarchical control or the application of adaptive techniques such as neural networks or genetic algorithms.
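The adaptor/adapted split described in the abstract can be read as a supervisory control loop: the adapted component transforms environment input into output, while the adaptor monitors that relationship and corrects drift from a reference. The following is a minimal illustrative sketch, not an implementation from the paper; all class names, the gain parameter, and the reference ratio are hypothetical choices made for the example.

```python
# Hypothetical sketch of the adaptor/adapted split: the adapted component
# maintains the ongoing relationship with the environment, while the
# adaptor acts as a supervisory control module that detects and corrects
# drift between the observed output and a reference relationship.

class AdaptedComponent:
    """Transforms an environment signal into output via an adjustable gain."""
    def __init__(self, gain=1.0):
        self.gain = gain

    def act(self, signal):
        return self.gain * signal


class Adaptor:
    """Supervisor: nudges the component's gain so that the output tracks
    a desired reference relationship (here, output = 2 * input)."""
    def __init__(self, component, reference_ratio=2.0, step=0.1):
        self.component = component
        self.reference_ratio = reference_ratio
        self.step = step

    def supervise(self, signal, output):
        if signal == 0:
            return
        # "Introspection and assimilation" reduced to a simple
        # proportional corrective update on the adapted component.
        error = self.reference_ratio - output / signal
        self.component.gain += self.step * error


def run(steps=100):
    component = AdaptedComponent(gain=0.5)
    adaptor = Adaptor(component)
    for _ in range(steps):
        signal = 1.0                    # stand-in for environment input
        output = component.act(signal)  # adapted: act on the environment
        adaptor.supervise(signal, output)  # adaptor: evaluate and adjust
    return component.gain


print(round(run(), 3))
```

In a hierarchical version of this sketch, an `Adaptor` at level n would itself be the adapted component of a level n+1 supervisor that tunes, for example, its `step` or `reference_ratio`.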







Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

Cite this paper

Hexmoor, H. (2003). Adaptivity in Agent-Based Systems via Interplay between Action Selection and Norm Selection. In: Laddaga, R., Shrobe, H., Robertson, P. (eds) Self-Adaptive Software: Applications. IWSAS 2001. Lecture Notes in Computer Science, vol 2614. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-36554-0_16

  • DOI: https://doi.org/10.1007/3-540-36554-0_16

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-00731-9

  • Online ISBN: 978-3-540-36554-9

  • eBook Packages: Springer Book Archive
