Abstract
Beyond the ability to adapt to an environment, Self-Adaptive Software (SAS) embodies the capacity and the initiative to adapt its own behavior. The adaptation is driven by the need to maintain a desired, or reference, relationship between a set of input and output signals. We may loosely divide SAS into an adaptor component and an adapted component. The adapted component maintains an ongoing relationship with the environment, while the adaptor detects and evaluates the need for change in the adapted component's operation. In anthropomorphic terms, this detection and evaluation involves the cognitive processes of introspection and assimilation; an artifact, however, may need only a supervisory control module. In a hierarchical system, the notions of adaptor and adapted can be extended across several levels, with each higher level adapting the function of the level below it. Hierarchical control architectures are well studied, but self-adaptive software is more than hierarchical control or the application of adaptive techniques such as neural networks or genetic approaches.
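To make the adaptor/adapted split concrete, the sketch below shows a supervisory adaptor that adjusts the behavior of an adapted component until a reference input/output relationship is restored. This is a minimal illustration only, not the paper's own implementation; all names and parameters (AdaptedComponent, Adaptor, gain, reference, tolerance) are assumptions introduced for the example.

```python
# Minimal sketch of the adaptor/adapted decomposition (illustrative names only).

class AdaptedComponent:
    """Maintains an ongoing input/output relationship with the environment."""

    def __init__(self, gain: float = 1.0):
        self.gain = gain  # behavior parameter the adaptor may change

    def act(self, signal: float) -> float:
        # Produce an output signal from an input signal.
        return self.gain * signal


class Adaptor:
    """Detects and evaluates the need to change the adapted component."""

    def __init__(self, reference: float, tolerance: float = 0.05):
        self.reference = reference  # desired output-to-input ratio
        self.tolerance = tolerance

    def supervise(self, component: AdaptedComponent, signal: float) -> None:
        output = component.act(signal)
        error = self.reference - output / signal  # deviation from the reference relationship
        if abs(error) > self.tolerance:
            # Adapt the component's behavior rather than the signal itself.
            component.gain += 0.5 * error


if __name__ == "__main__":
    component = AdaptedComponent(gain=0.2)
    adaptor = Adaptor(reference=1.0)
    for _ in range(20):
        adaptor.supervise(component, signal=2.0)
    print(f"adapted gain after supervision: {component.gain:.3f}")
```

In a hierarchical variant, the Adaptor itself could be wrapped by a higher-level adaptor that tunes its reference or tolerance, extending the same idea across several levels.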
Copyright information
© 2003 Springer-Verlag Berlin Heidelberg
Cite this paper
Hexmoor, H. (2003). Adaptivity in Agent-Based Systems via Interplay between Action Selection and Norm Selection. In: Laddaga, R., Shrobe, H., Robertson, P. (eds) Self-Adaptive Software: Applications. IWSAS 2001. Lecture Notes in Computer Science, vol 2614. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-36554-0_16
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-00731-9
Online ISBN: 978-3-540-36554-9