ABSTRACT
We have implemented a test-bed of agents that collaborate and communicate, potentially exchanging deceptive information. Agents range from benevolent to selfish and have differing trustworthiness levels. Their handling of deception also varies, from agents with mood swings about being deceived to agents who react minimally to it. We show the effects of different trustworthiness values on group performance across different scenarios. Our agents are designed in the BDI paradigm and implement a possible-worlds semantics.
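The abstract does not specify how trust is updated after a deception is detected; the following is a hypothetical sketch, not the paper's actual mechanism. It assumes a simple numeric trust model in which a `reactivity` parameter captures the spectrum described above, from mood-swing agents (large trust swings) to minimally reacting agents (small adjustments). All class and parameter names are illustrative.

```python
# Hypothetical sketch (not from the paper): each agent tracks a trust value
# in [0, 1] for its peers and revises it when a received message is verified.

class Agent:
    def __init__(self, name, trustworthiness, reactivity):
        self.name = name
        self.trustworthiness = trustworthiness  # propensity to send truthful information
        self.reactivity = reactivity            # how strongly trust reacts to deception
        self.trust = {}                         # trust in other agents, keyed by name

    def observe(self, other, was_deceived):
        """Update trust in `other` after verifying a received message."""
        t = self.trust.setdefault(other.name, 0.5)  # neutral prior trust
        if was_deceived:
            t -= self.reactivity * t   # drop proportional to reactivity
        else:
            t += 0.1 * (1.0 - t)       # slow recovery toward full trust
        self.trust[other.name] = min(max(t, 0.0), 1.0)

# A mood-swing agent versus a minimally reacting one, both deceived once:
moody = Agent("moody", trustworthiness=0.9, reactivity=0.8)
calm = Agent("calm", trustworthiness=0.9, reactivity=0.1)
liar = Agent("liar", trustworthiness=0.2, reactivity=0.5)

for a in (moody, calm):
    a.observe(liar, was_deceived=True)

print(moody.trust["liar"])  # 0.1  — trust collapses
print(calm.trust["liar"])   # 0.45 — trust barely moves
```

Under this toy model, a single detected deception nearly destroys the moody agent's trust while leaving the calm agent's nearly intact, which is the kind of behavioral difference whose effect on group performance the test-bed measures.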
Index Terms
- Towards deception in agents