
Deception/Honesty Detection and (Mis)trust Building in Manipulable Multi-Agent Argumentation: An Insight

  • Conference paper
  • In: PRIMA 2019: Principles and Practice of Multi-Agent Systems (PRIMA 2019)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11873)

Abstract

In manipulable multi-agent argumentation, each agent may transmit deceptive information to others for tactical motives. We consider epistemic states and their roles in deception/honesty detection and (mis)trust building. We propose using intra-agent preferences to handle deception/honesty detection and inter-agent preferences to determine which agent(s) to believe more. We illustrate how deception/honesty in an agent’s argumentation, once detected, may alter the agent’s perceived trustworthiness, and how that in turn may affect agents’ judgement as to which arguments they should accept. A detailed comparison with an earlier study on deception detection highlights the wider applicability of our approach.
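To make the interplay described above concrete, here is a minimal, self-contained Python sketch; it is an illustration under stated assumptions, not the paper’s formal machinery. It assumes a Dung-style abstract argumentation framework with grounded semantics, plus one common preference-based reading in which an attack is disabled when its target is strictly preferred to its attacker. The agent names (e1, e2), the arguments, the trust values, and the function names are all hypothetical.

    from typing import Callable, Set, Tuple

    Attack = Tuple[str, str]

    def grounded_extension(args: Set[str], attacks: Set[Attack]) -> Set[str]:
        # Dung's grounded semantics: iterate the characteristic function
        # from the empty set, keeping arguments whose every attacker is
        # counter-attacked by an already-accepted argument.
        accepted: Set[str] = set()
        while True:
            defended = {
                a for a in args
                if all(any((c, b) in attacks for c in accepted)
                       for b in args if (b, a) in attacks)
            }
            if defended == accepted:
                return accepted
            accepted = defended

    def effective_attacks(attacks: Set[Attack],
                          prefers: Callable[[str, str], bool]) -> Set[Attack]:
        # Preference-based reading: the attack (a, b) is disabled when
        # its target b is strictly preferred to its attacker a.
        return {(a, b) for (a, b) in attacks if not prefers(b, a)}

    # Hypothetical stand-off: e1 utters a1, e2 holds a2, and the two
    # arguments attack each other.
    args = {"a1", "a2"}
    attacks = {("a1", "a2"), ("a2", "a1")}
    source = {"a1": "e1", "a2": "e2"}

    def accepted_under(trust: dict) -> Set[str]:
        # Inter-agent preference derived from trust in the arguer.
        prefers = lambda x, y: trust[source[x]] > trust[source[y]]
        return grounded_extension(args, effective_attacks(attacks, prefers))

    print(accepted_under({"e1": 1, "e2": 1}))  # set(): the stand-off persists
    print(accepted_under({"e1": 0, "e2": 1}))  # {'a2'}: once e1's deception is
                                               # detected and its trust lowered,
                                               # e2's own argument prevails

The point of the sketch is only the direction of influence the abstract describes: detected deception lowers an agent’s perceived trustworthiness, trust reshapes inter-agent preferences, and preferences change which attacks succeed and hence which arguments are accepted.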


Notes

  1. “\(\textsf{and}\)” is used instead of “and” when the context in which the word appears strongly indicates classical-logic truth-value comparisons; similarly for \(\textsf{or}\) (disjunction) and \(\textsf{not}\) (negation).

  2. \(e_1\) cannot be certain that \(a_5\) is factual to \(e_2\) since, firstly, a game may or may not include a Detective and, secondly, the speaker could be a Civilian bluffing as Detective.

  3. Recall that \(e_2\) knows \(e_1\) knows \(a_1\); as such, \(a_1\) appears in \(e_2\)’s model of \(e_1\)’s preference-adjusted local agent argumentation. Recall also that it is common knowledge that Killer does not know whether there is a Detective; as such, from \(e_2\)’s perspective, neither \(a_4\) nor \(a_5\) is known by \(e_1\) to be factual to \(e_2\) (see the sketch after these notes).

  4. Since every public argumentation is known to every agent, the converse is not possible.
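The following hypothetical Python sketch illustrates the kind of nested epistemic bookkeeping that notes 2 and 3 appeal to; the field names and the check are illustrative assumptions, not the paper’s definitions.

    # e2's epistemic state: its own arguments plus a model of e1.
    e2_state = {
        "own_arguments": {"a1", "a2", "a3", "a4", "a5"},
        "model_of_e1": {
            # e2 knows that e1 knows a1, so a1 enters e2's model of e1's
            # preference-adjusted local agent argumentation (note 3).
            "knows": {"a1"},
            # It is common knowledge that Killer does not know whether a
            # Detective exists, so from e2's perspective neither a4 nor
            # a5 is known by e1 to be factual to e2 (notes 2 and 3).
            "knows_factual_to_e2": set(),
        },
    }

    def e1_can_honestly_present_as_factual(claim: str) -> bool:
        # From e2's viewpoint, e1 can honestly present `claim` as factual
        # to e2 only if e2's model says e1 knows it to be so; otherwise
        # such a claim gives e2 grounds to suspect deception.
        return claim in e2_state["model_of_e1"]["knows_factual_to_e2"]

    print(e1_can_honestly_present_as_factual("a5"))  # False (cf. note 2)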


Author information


Correspondence to Ryuta Arisaka.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Arisaka, R., Hagiwara, M., Ito, T. (2019). Deception/Honesty Detection and (Mis)trust Building in Manipulable Multi-Agent Argumentation: An Insight. In: Baldoni, M., Dastani, M., Liao, B., Sakurai, Y., Zalila Wenkstern, R. (eds) PRIMA 2019: Principles and Practice of Multi-Agent Systems. PRIMA 2019. Lecture Notes in Computer Science (LNAI), vol. 11873. Springer, Cham. https://doi.org/10.1007/978-3-030-33792-6_28


  • DOI: https://doi.org/10.1007/978-3-030-33792-6_28

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-33791-9

  • Online ISBN: 978-3-030-33792-6

