On the calculation of the strength of threats

  • Regular Paper
  • Published:
Knowledge and Information Systems

Abstract

Threats are used in persuasive negotiation dialogues when a proponent agent tries to persuade his opponent to accept a proposal. Depending on the information the proponent has modeled about his opponent(s), he may generate more than one threat, in which case he has to evaluate them in order to select the most adequate one to send. One way to evaluate the generated threats is by calculating their strengths, i.e., the persuasive force of each threat. Related work considers mainly two criteria for this evaluation: the certainty level of the beliefs that compose the threat and the importance of the opponent's goal. This article studies the components of threats and proposes further criteria that improve their evaluation and lead to the selection of more effective threats during the dialogue. Thus, the contribution of this paper is a model for calculating the strength of threats that is based mainly on the status of the opponent's goal and the credibility of the proponent. The model is empirically evaluated, and the results show that it is more efficient than previous work in terms of the number of exchanged arguments and the number of reached agreements.
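
The strength function itself is not given on this page. Purely as an illustration of how the two criteria named in the abstract (the status of the opponent's goal and the credibility of the proponent) might be combined, the following Python sketch maps assumed BBGP goal statuses to ordinal weights and scales them by a credibility value in [0, 1]; the status names, weights, and combination rule are illustrative assumptions, not the authors' model.

```python
# Illustrative sketch only: the paper's actual strength formula is not reproduced
# here. Assumptions: BBGP goal statuses map to ordinal weights, and the proponent's
# credibility (with respect to the opponent) is a value in [0, 1].

GOAL_STATUS_WEIGHT = {
    "cancelled": 0.0,   # a dropped goal gives the threat little force
    "active":    0.25,
    "pursuable": 0.50,
    "chosen":    0.75,
    "executive": 1.0,   # a goal already being pursued is most valuable to the opponent
}

def threat_strength(goal_status: str, credibility: float) -> float:
    """Toy strength score in [0, 1] combining the two criteria named in the abstract."""
    return GOAL_STATUS_WEIGHT[goal_status] * credibility

# Selecting the strongest among several candidate threats:
candidates = [("executive", 0.6), ("chosen", 0.9), ("active", 1.0)]
best = max(candidates, key=lambda c: threat_strength(*c))
print(best)  # ('chosen', 0.9), with strength 0.675
```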

Notes

  1. When an agent uses rhetorical arguments to back his proposals, the negotiation is called persuasive negotiation [38].

  2. The BBGP model can be considered an extension of the BDI (beliefs-desires-intentions) model [13, 39]. For more details about the BBGP model, the reader is referred to [14].

  3. A computational formalization of the BBGP model can be found in [30].

  4. In [15], trust is defined as the belief an agent has that the other party will do what he says he will. The reader is referred to [8, 37] for surveys on the state of the art of trust in multi-agent systems.

  5. Minimal means that there is no \(\mathcal {S}' \subset \mathcal {S}\) such that \(\mathcal {S}' \vdash h\), and consistent means that it is not the case that \(\mathcal {S}\vdash h\) and \(\mathcal {S}\vdash \lnot h\), for any h [25]. A small sketch of these two conditions is given after these notes.

  6. The variable number of exchanged threats determines the number of cycles needed to reach agreements, since the former is half of the latter.

  7. Recall that the reputation of a proponent agent is evidence of his past behavior with respect to his opponents. We assume that this value has already been estimated and is not private information; thus, the reputation value of an agent is visible to any other agent. On the other hand, the “accurate” value of the credibility of an agent P with respect to an opponent O, whose threshold is \({\mathtt {THRES}}(O)\), is given by \({\mathtt {ACCUR\_CRED}}(P,O) = {\mathtt {REP}}(P) - {\mathtt {THRES}}(O)\). A worked example is given after these notes.

  8. According to Meyer [29], an emotional agent is an artificial system that is designed in such a manner that emotions play a role. Thus, the emotional state of the agent may determine his actions, or part of them.

  9. In the literature, there are two approaches to the order of presentation of a persuasive message: (i) the anti-climax approach claims that it is better to present the most important part of the argument first, and (ii) the climax approach claims that the most crucial or important evidence has to be kept until the end of the message. According to O’Keefe [33], it is more beneficial to arrange the arguments following the climax approach.

  10. A social environment is a communication environment in which agents interact in a coordinated manner [32].
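
As referenced in note 5, the following is a minimal sketch of the support conditions, assuming a propositional base logic (note 5 cites Hunter [25] for base logics in general); entailment \(\mathcal{S} \vdash h\) is reduced to unsatisfiability of \(\mathcal{S} \cup \{\lnot h\}\):

```python
# Minimal sketch, assuming a propositional base logic. Checks whether a set S is a
# minimal and consistent support for a conclusion h, per note 5.
from itertools import combinations
from sympy import symbols, And, Not, Implies
from sympy.logic.inference import satisfiable

def entails(premises, h):
    """S |- h  iff  S together with ¬h is unsatisfiable."""
    return not satisfiable(And(*premises, Not(h)))

def is_minimal_consistent_support(S, h):
    consistent = bool(satisfiable(And(*S)))   # rules out S |- h and S |- ¬h for any h
    supports = entails(S, h)
    minimal = not any(entails(subset, h)       # no proper subset of S already entails h
                      for r in range(len(S))
                      for subset in combinations(S, r))
    return consistent and supports and minimal

# Example: {p, p -> q} is a minimal consistent support for q.
p, q = symbols("p q")
print(is_minimal_consistent_support([p, Implies(p, q)], q))  # True
```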
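
The credibility formula in note 7 can also be read as a simple worked example; the numeric values below are invented purely for illustration:

```python
# Worked example of note 7: ACCUR_CRED(P, O) = REP(P) - THRES(O), where REP(P) is
# the publicly visible reputation of proponent P and THRES(O) is the credibility
# threshold of opponent O. The values are illustrative, not taken from the paper.

def accur_cred(rep_p: float, thres_o: float) -> float:
    """Accurate credibility of proponent P with respect to opponent O."""
    return rep_p - thres_o

# A proponent with reputation 0.75 facing an opponent whose threshold is 0.5
# has an accurate credibility of 0.25 with respect to that opponent.
print(accur_cred(0.75, 0.5))  # 0.25
```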

References

  1. Abdul-Rahman A, Hailes S (2000) Supporting trust in virtual communities. In: Proceedings of the 33rd annual Hawaii international conference on system sciences. IEEE, pp 9–18

  2. Amgoud L (2003) A formal framework for handling conflicting desires. In: Nielsen TD, Zhang NL (eds) Symbolic and quantitative approaches to reasoning with uncertainty. Springer, Berlin, pp 552–563

  3. Amgoud L, Parsons S, Maudet N (2000) Arguments, dialogue, and negotiation. In: Horn W (ed) Proceedings of the 14th European Conference on Artificial Intelligence (ECAI’00). IOS Press, Amsterdam, The Netherlands, pp 338–342

  4. Amgoud L, Prade H (2004) Threat, reward and explanatory arguments: generation and evaluation. In: Proceedings of the ECAI workshop on computational models of natural argument, pp 73–76

  5. Amgoud L, Prade H (2005) Formal handling of threats and rewards in a negotiation dialogue. In: Proceedings of the 4th international joint conference on autonomous agents and multiagent systems (AAMAS). ACM, pp 529–536

  6. Amgoud L, Prade H (2005) Handling threats, rewards, and explanatory arguments in a unified setting. Int J Intell Syst 20(12):1195–1218

  7. Amgoud L, Prade H (2006) Formal handling of threats and rewards in a negotiation dialogue. In: Parsons S, Maudet N, Moraitis P, Rahwan I (eds) Argumentation in multi-agent systems. Springer, Berlin, pp 88–103

  8. Artz D, Gil Y (2007) A survey of trust in computer science and the semantic web. Web Semant Sci Serv Agents World W Web 5(2):58–71

  9. Atkinson K, Bench-Capon T, McBurney P (2005) Persuasive political argument. In: Proceedings of the fifth international workshop on computational models of natural argument (CMNA), pp 44–51

  10. Atkinson K, Bench-Capon TJ, McBurney P (2005) Multi-agent argumentation for edemocracy. In: Proceedings of the 3rd European conference on multi-agent systems (EUMAS), pp 35–46

  11. Baarslag T, Hendrikx MJ, Hindriks KV, Jonker CM (2016) Learning about the opponent in automated bilateral negotiation: a comprehensive survey of opponent modeling techniques. Auton Agents Multi-Agent Syst 30(5):849–898

  12. Braet AC (1992) Ethos, pathos and logos in Aristotle’s Rhetoric: a re-examination. Argumentation 6(3):307–320

  13. Bratman M (1987) Intention, plans, and practical reason. Cambridge University Press, Cambridge

  14. Castelfranchi C, Paglieri F (2007) The role of beliefs in goal dynamics: Prolegomena to a constructive theory of intentions. Synthese 155(2):237–263

  15. Dasgupta P (2000) Trust as a commodity. Trust Mak Break Coop Relat 4:49–72

  16. Demirdöğen ÜD (2010) The roots of research in (political) persuasion: ethos, pathos, logos and the Yale studies of persuasive communications. Int J Soc Inq 3(1):189–201

  17. Dimopoulos Y, Moraitis P (2014) Advances in argumentation based negotiation. In: Negotiation and argumentation in multi-agent systems: fundamentals, theories, systems and applications, vol 44, pp 82–125

  18. Dong Huynh T, Jennings N, Shadbolt N (2004) FIRE: an integrated trust and reputation model for open multi-agent systems. In: ECAI 2004: 16th European conference on artificial intelligence, August 22–27, 2004, Valencia, Spain: including prestigious applications of intelligent systems (PAIS 2004): proceedings, vol 110, p 18

  19. Fogg BJ (1998) Persuasive computers: perspectives and research directions. In: Proceedings of the SIGCHI conference on human factors in computing systems. ACM Press, pp 225–232

  20. Guerini M, Castelfranchi C (2006) Promises and threats in persuasion. In: CMNA VI-computational models of natural argument, pp 14–21

  21. Hadjinikolis C, Modgil S, Black E (2015) Building support-based opponent models in persuasion dialogues. In: International workshop on theory and applications of formal argumentation. Springer, pp 128–145

  22. Hadjinikolis C, Siantos Y, Modgil S, Black E, McBurney P (2013) Opponent modelling in persuasion dialogues. In: Proceedings of the 23rd international joint conference on artificial intelligence (IJCAI), pp 164–170

  23. Higgins C, Walker R (2012) Ethos, logos, pathos: strategies of persuasion in social/environmental reports. Account Forum 36:194–208

  24. Hovland CI (1957) The order of presentation in persuasion. Yale University Press, New Haven

  25. Hunter A (2010) Base logics in argumentation. In: Baroni P, Cerutti F, Giacomin M, Simari GR (eds) Proceedings of the 2010 conference on Computational Models of Argument: Proceedings of COMMA 2010. IOS Press, Amsterdam, The Netherlands, pp 275–286

  26. Hunter A (2015) Modelling the persuadee in asymmetric argumentation dialogues for persuasion. In: Proceedings of the 24th international joint conference on artificial intelligence, pp 3055–3061

  27. Hunter A (2018) Invited talk: computational persuasion with applications in behaviour change. In: Arai S, Kojima K, Mineshima K, Bekki D, Satoh K, Ohta Y (eds) New frontiers in artificial intelligence. JSAI-isAI 2017. Lecture notes in computer science, vol 10838. Springer, Cham

  28. Medić A (2012) Survey of computer trust and reputation models-the literature overview. Int J Inf Commun Technol Res 2(3):254–275

  29. Meyer JJC (2006) Reasoning about emotional agents. Int J Intell Syst 21(6):601–619

  30. Morveli-Espinoza M, Possebom A, Puyol-Gruart J, Tacla CA (2019) Argumentation-based intention formation process. DYNA 86(208):82–91

  31. Morveli-Espinoza M, Possebom AT, Tacla CA (2016) Construction and strength calculation of threats. In: Computational models of argument—proceedings of COMMA 2016, Potsdam, Germany, 12–16 September, 2016, pp 403–410

  32. Odell JJ, Parunak HVD, Fleischer M, Brueckner S (2002) Modeling agents and their environment. In: International workshop on agent-oriented software engineering. Springer, pp 16–31

  33. O’Keefe DJ (2016) Persuasion: theory and research, 3rd edn. SAGE Publications, Inc., Thousand Oaks, CA

  34. Pinyol I, Sabater-Mir J (2013) Computational trust and reputation models for open multi-agent systems: a review. Artif Intell Rev 40(1):1–25

  35. Poggi I (2005) The goals of persuasion. Pragmat Cogn 13:297–336

  36. Rahwan I, Ramchurn SD, Jennings NR, Mcburney P, Parsons S, Sonenberg L (2003) Argumentation-based negotiation. Knowl Eng Rev 18(04):343–375

  37. Ramchurn SD, Huynh D, Jennings NR (2004) Trust in multi-agent systems. Knowl Eng Rev 19(1):1–25

  38. Ramchurn SD, Jennings NR, Sierra C (2003) Persuasive negotiation for autonomous agents: a rhetorical approach. In: IJCAI workshop on computational models of natural argument, pp 9–17

  39. Rao AS, Georgeff MP et al (1995) BDI agents: from theory to practice. ICMAS 95:312–319

  40. Rienstra T, Thimm M, Oren N (2013) Opponent models with uncertainty for strategic argumentation. In: Proceedings of the 23rd international joint conference on artificial intelligence (IJCAI), pp 332–338

  41. Sabater J, Sierra C (2001) Regret: a reputation model for gregarious societies. In: Proceedings of the 4th workshop on deception, fraud and trust in agent societies, vol 70, pp 61–69

  42. Sierra C, Jennings NR, Noriega P, Parsons S (1998) A framework for argumentation-based negotiation. In: Intelligent agents IV agent theories, architectures, and languages. Springer, pp 177–192

  43. Sycara KP (1990) Persuasive argumentation in negotiation. Theory Decis 28(3):203–242

  44. Verheij B (1999) Automated argument assistance for lawyers. In: Proceedings of the 7th international conference on artificial intelligence and law. ACM, pp 43–52

  45. Verheij B (2003) Artificial argument assistants for defeasible argumentation. Artif Intell 150(1–2):291–324

  46. Walton D (2005) Fundamentals of critical argumentation. Cambridge University Press, Cambridge

  47. Yu B, Singh MP (2000) A social mechanism of reputation management in electronic communities. In: International workshop on cooperative information agents. Springer, pp 154–165

Acknowledgements

Mariela Morveli Espinoza is funded by CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior). The authors are very grateful to Prof. Juan Carlos Nieves for making a number of important suggestions for improving the paper.

Author information

Corresponding author

Correspondence to Mariela Morveli Espinoza.

Additional information

This is an extended version of a short paper originally presented at the Computational Models of Argument conference, COMMA’16 [31].

About this article

Cite this article

Morveli Espinoza, M., Possebom, A.T. & Tacla, C.A. On the calculation of the strength of threats. Knowl Inf Syst 62, 1511–1538 (2020). https://doi.org/10.1007/s10115-019-01399-2
