Authors: Kazuyuki Kokusho and Kazuko Takahashi
Affiliation: Kwansei Gakuin University, Japan
Keyword(s): Argumentation, Strategy, Persuasion, Dishonesty, Opponent Model.
Related Ontology Subjects/Areas/Topics: Agent Communication Languages; Agents; Artificial Intelligence; Artificial Intelligence and Decision Support Systems; Distributed and Mobile Software Systems; Enterprise Information Systems; Knowledge Engineering and Ontology Development; Knowledge-Based Systems; Multi-Agent Systems; Negotiation and Interaction Protocols; Software Engineering; Symbolic Systems
Abstract:
This paper discusses persuasive dialogue in cases where dishonesty is permitted. We have previously proposed a dialogue model based on a predicted opponent model using an abstract argumentation framework, and discussed the conditions under which a dishonest argument can be accepted without being detected. However, it is hard to estimate the outcome of a dialogue, or to identify the causality between agents' knowledge and the result. In this paper, we implement our dialogue model and execute argumentations between agents under different conditions. We analyze and discuss the results of these experiments. In brief, our results show that the use of dishonest arguments affects the likelihood of successfully persuading the opponent, or of winning a debate game, but we could not identify a relationship between the outcome of a dialogue and the agents' initial argumentation frameworks.