Model checking of robustness properties in trust and reputation systems

https://doi.org/10.1016/j.future.2020.02.070

Highlights

  • A model checking framework for verifying robustness of Trust and Reputation Systems.

  • A Probabilistic Action and Reward based Computation Tree Logic (PARCTL) and its model checking algorithms.

  • The proposed framework has been implemented as a tool named TRIM-Checker.

  • The security of several Trust and Reputation Systems is verified and compared.

Abstract

Trust and reputation systems (TRSs) are used in cooperative environments where an agent needs to make a decision for requesting or performing a service. However, TRSs can be abused by malicious agents who perform sequences of dishonest actions (attacks). Although there are proposals on verification of TRSs against attacks, they are not comprehensive enough to evaluate various Trust Computation Models (TCMs) and/or do not provide sufficient expressive power to specify the different required robustness properties. In this paper, we introduce a comprehensive framework for specifying and verifying various robustness properties of TRSs through a model checking approach. The proposed framework includes three main parts: (1) a logic for the specification of robustness properties of TRSs, named Probabilistic Action and Reward based Computation Tree Logic (PARCTL), (2) an enhanced version of our previously proposed model for specifying TRSs in hostile environments, named Trust and Reputation Interaction Model (TRIM), and (3) the algorithms required for quantitative and probabilistic model checking of PARCTL properties over the specified model. The proposed framework has been implemented as a tool named TRIM-Checker. Our experimental results on the robustness analysis of the well-known eBay, Beta, and CORE TCMs using TRIM-Checker are presented, and their robustness against attacks is evaluated and compared.

Introduction

Trust and reputation systems (TRSs) are security mechanisms for giving incentives or applying sanctions to force community members to act based on the norms of the community [1]. In fact, TRSs evaluate trust and reputation based on ratings of the interactions recorded by agents (rate history) and opinions of other agents about members of the community (recommendations). More exactly, these systems perform calculations on the rate history and/or recommendations to find a score that can be used as a decision-making criterion. TRSs are considered to be part of a next generation of security mechanisms named soft security [1], [2], [3], which are based on social control norms rather than on traditional security mechanisms (hard security) such as authentication and access control. TRSs can be used in many applications such as online marketplaces [4], [5], [6], online service provisions [7], mobile ad-hoc networks [8], [9], [10], web service selection [11], and social networks [12]. In these systems, an agent of the community normally needs to decide, using a TRS, whether to request or perform a service.

Although TRSs can improve the quality of decisions and hence the overall benefit of the agents, they may lead to the reverse outcome in the presence of malicious agents (attackers) who exhibit misleading behaviors. More precisely, attackers can conduct trust attacks (i.e., sequences of actions based on an attack plan) to achieve misleading goals such as “increasing the probability of selecting a malicious service provider”. A review of the literature reveals different trust attacks such as On–off, Re-entry, Slandering, Promoting, and Collusion [13], [14], [15]. It has also been shown that TRSs have different degrees of robustness against trust attacks [14], [16], [17]. Therefore, TRSs should be designed to be robust enough to resist malicious attackers. However, most of the proposed TRSs are not comprehensively evaluated to check their robustness against trust attacks.

In practice, most of the proposed TRSs are either left unevaluated or are evaluated using simulations and case study analysis [14], [16]. However, simulation and case study analysis are not exact and comprehensive enough to check the robustness of TRSs against malicious attackers; verification methods are better candidates in this regard. Using simulation for the assessment of TRSs does not cover the whole state space of the system, while verification methods can. There are a small number of proposals for evaluating TRSs against trust attacks based on verification methods, including [18], [19], [20], [21], [22], [23], [24], [25], [26], [27]. However, most of the proposed verification methods for TRSs suffer from inexpressive formalisms and limited applicability, or lack the ability to define different robustness criteria. In fact, proposals for the verification of TRSs like [22] that are based on a qualitative approach do not provide any comparative robustness criterion at all. On the other hand, the robustness criteria offered by quantitative verification approaches such as [18], [19], [20], [21], [23], [24], [25], [26], [27] have shortcomings because they are normally defined in terms of vague parameters of the system.

Reviewing the works done on the evaluation of TRSs through formal verification or simulation reveals that, although formal verification is a more accurate and more complete assessment method than simulation, the formal verification methods presented so far still suffer from a number of shortcomings:

  • i. Lack of a good infrastructure to specify a wide range of robustness properties based on attack goals,

  • ii. Use of weak robustness measures which are incompatible with attacks,

  • iii. Absence of a specification language and verification algorithms that are consistent with the nature of TRSs, and

  • iv. Lack of a user-friendly tool to model and verify TRSs.

Note that using an improper robustness criterion leads to inaccurate or incomplete TRS assessment. Indeed, the robustness criterion should be defined in such a way that it shows how well the employed TRS prevents attackers from reaching their misleading goals.

In this paper, we present a framework for modeling and verification of TRSs against trust attacks using a model checking approach. Model checking is a formal verification approach used to check whether a model satisfies given properties. The properties are normally specified through a well-formed logic, and the supplied algorithms then check the satisfaction of the defined properties on the model. We consider a TRS as part of an interacting agent system, so we adopted and extended our previously proposed Trust and Reputation Interaction Model (TRIM) [28], which comprises a comprehensive model for formalizing the agents of the community and their trust-based interactions. We also need a well-formed logic to be able to coherently specify a wide range of robustness properties over TRIM. Because trust attacks are conducted to achieve a variety of goals, a language with sufficient expressive power for defining different robustness properties is required. Due to the nature of trust attack goals, a language that supports probabilistic and reward-based primitives is needed to specify the robustness properties. In addition, since trust attacks are sequences of system-allowed actions, the system actions should also be addressed in the definition of the robustness criterion. For example, to show how well a TRS is able to force attackers to perform max-quality services, the following robustness measure might be used: “What is the probability of providing a max-quality service by an attacker service provider in the 10th epoch of the system?” This, in fact, shows how robust the TRS is against attackers who want to provide poor-quality services. Therefore, we introduce the Probabilistic Action and Reward based Computation Tree Logic (PARCTL) for the specification of the robustness properties. The proposed logic has the ability to specify a wide range of robustness properties which might be defined based on probability, reward, actions, or a combination of them. To the best of our knowledge, no probabilistic, action- and reward-based logic has previously been proposed to specify robustness properties of TRSs. For model checking PARCTL properties over the specified model, the required on-the-fly probabilistic and quantitative model checking algorithms are also introduced. The on-the-fly model checking algorithms help us build a dynamic model of the TRIM while verifying the PARCTL properties. This partially mitigates the state-explosion problem that would be caused by building the whole model before the model checking phase.
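
As an illustrative sketch only (the exact PARCTL syntax is introduced in Section 5; the operator and action names below are assumptions, not the paper's notation), such a measure can be read as a step-bounded probabilistic query over the action of providing a max-quality service:

\[
  \mathcal{P}_{=?}\big[\, \Diamond^{=10}\ \mathit{provide\_maxq} \,\big]
\]

i.e., “the probability that the attacker service provider performs the (hypothetical) max-quality service action \(\mathit{provide\_maxq}\) at the 10th epoch of the system”.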

The proposed framework has been implemented as a tool named TRIM-Checker [29]. The tool has the ability to model various environments, attacks, TRSs, and robustness properties through a user-friendly interface. TRIM-Checker provides a simple and comprehensive environment to define TRS verification projects.

To show the applicability and the results of the proposed framework and the tool, we conducted a number of experiments on the three well-known eBay, Beta, and CORE reputation models as samples to verify and compare the robustness of these systems based on some specified robustness criteria. The evaluation results indicate that the robustness of different TRSs against each attack varies based on the goals of the attack and the robustness criteria defined in accordance with those goals. For example, considering a so-defined “Sanctioning robustness measure (SRM)”, we found that eBay is the least (the most) robust TCM against the collusion slandering (Re-entry) On–off dishonest service attack.

The paper is organized as follows: Section 2 reviews the related work on verification of TRSs. Section 3 shows how an adopted and extended version of TRIM can be used for modeling TRSs in hostile environments. Section 4 presents an updated view of the TRIM run-time behavior. Section 5 introduces both PARCTL and on-the-fly model checking algorithms for evaluating the satisfaction of quantitative and probabilistic formulas of PARCTL over the specified TRS model. Section 6 exhibits the experimental evaluation of the proposed framework using the TRIM-Checker tool, and finally, Section 7 concludes the paper.

Section snippets

Related work

Previous works on evaluating TRSs can be categorized into four approaches: ad-hoc evaluation, simulation, mathematical analysis, and verification. The proposed framework in this paper fits into the last category. In this section, we first briefly review the four main approaches and discuss their pros and cons. We then focus on the verification approach and, along with a thorough review of the main related works in this category, try to justify the position of the proposed framework in the

Modeling TRSs in hostile environments using TRIM

TRIM was proposed for modeling TRSs in hostile environments. In this section, we briefly review this model and show how it can be used for specifying the agents’ interactions along with the underlying TRS. Furthermore, we perform a number of adaptations and extensions on TRIM to make it suitable for model checking of robustness properties in TRSs. The extensions include a trust fitness function to evaluate trust fitness factors based on trust scores, defining fitness-based actions, an augmented

Runtime behavior of TRIM

In this section, we extend the runtime behavior of TRIM to a non-homogeneous Markov Decision Process (MDP). For this purpose, we first define the MDP and then the MDP tree of TRIM.
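
For reference, a minimal sketch of the underlying structure is the standard MDP tuple (this is the textbook form, not necessarily the paper's exact formalization):

\[
  \mathcal{M} = (S, s_0, A, P, R), \qquad
  P : S \times A \times S \to [0,1], \qquad
  R : S \times A \to \mathbb{R},
\]

with \(\sum_{s' \in S} P(s,a,s') = 1\) for every state \(s\) and enabled action \(a\); in the non-homogeneous case, \(P\) and \(R\) additionally depend on the epoch \(t\) (i.e., \(P_t\) and \(R_t\)).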

Probabilistic Action and Reward based Computation Tree Logic (PARCTL)

For evaluating the robustness of a given TRIM against trust attacks, we need a logic to specify the robustness properties. As we showed in the previous section, TRIM can be transformed into a Markov decision process with labeled states. In addition to Prob, HReward, and AReward, a label includes the enriched interaction result of the interaction leading to the state. The enriched interaction result consists of action methods. Consequently, we can specify both quantitative and qualitative

Implementation and experimental results

To model a specific TRS in an environment involving a certain combination of agents performing various trust attacks, we must specify the mentioned components in the form of a TRIM configuration ⟨SC, TC, Π⟩. SC specifies the configuration of the system agents and their communication method, TC is used to configure the examined TRS, and finally, the action set Π includes all available actions of the system. Based on the nature of the modeled environment and the possibility of various trust attacks, the
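
As a rough, hypothetical sketch of how such a configuration could be organized (all field names and values below are illustrative assumptions and do not reflect TRIM-Checker's actual input format):

    # Hypothetical sketch of a TRIM configuration <SC, TC, Pi>; all names and
    # values are illustrative, not TRIM-Checker's actual input format.
    from dataclasses import dataclass, field
    from typing import Dict, Tuple

    @dataclass
    class TrimConfiguration:
        # SC: the system agents and their communication method
        sc: Dict[str, object] = field(default_factory=lambda: {
            "honest_providers": 4,
            "honest_consumers": 8,
            "attackers": 2,                 # e.g. agents following an On-off attack plan
            "communication": "recommendation_exchange",
        })
        # TC: the examined trust computation model and its parameters
        tc: Dict[str, object] = field(default_factory=lambda: {
            "tcm": "Beta",                  # e.g. "eBay", "Beta", or "CORE"
            "forgetting_factor": 0.9,
        })
        # Pi: the set of all available actions of the system
        pi: Tuple[str, ...] = (
            "request_service",
            "provide_max_quality_service",
            "provide_min_quality_service",
            "rate_interaction",
            "recommend",
        )

    config = TrimConfiguration()
    print(config.tc["tcm"], len(config.pi))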

Conclusions

In this paper, a new probability-, reward-, and action-based logic (PARCTL), as well as its model checking algorithms for formal verification of robustness properties of TRSs, is presented. The proposed logic helps us define a wide range of robustness measures based on the different goals of attacks. To express the syntax and semantics of the proposed logic, we adopted and extended TRIM, a framework for modeling trust and reputation systems in hostile environments. We extended

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References (80)

  • Zheng, R., et al., A collaborative analysis method of user abnormal behavior based on reputation voting in cloud environment, Future Gener. Comput. Syst. (2018)
  • Jøsang, A., Trust and reputation systems
  • Fernández-Gago, M.C., et al., A survey on the applicability of trust management systems for wireless sensor networks
  • Yu, H., et al., A survey of trust and reputation management systems in wireless communications, Proc. IEEE (2010)
  • Luca, M., Designing online marketplaces: Trust and reputation mechanisms, Innov. Policy Econ. (2017)
  • Riazati, M., et al., An incentive mechanism to promote honesty among seller agents in electronic marketplaces, Electron. Commer. Res. (2019)
  • Tadelis, S., The economics of reputation and feedback systems in e-commerce marketplaces, IEEE Internet Comput. (2016)
  • Cho, J.-H., et al., A survey on trust management for mobile ad hoc networks, IEEE Commun. Surv. Tutor. (2011)
  • Hu, Y.C., et al., A survey of secure wireless ad hoc routing, IEEE Secur. Priv. (2004)
  • Zhang, J., A survey on trust management for VANETs
  • Wang, Y., et al., A review on trust and reputation for web service selection
  • Mármol, F.G., et al., Reporting offensive content in social networks: Toward a reputation-based assessment approach, IEEE Internet Comput. (2014)
  • Hoffman, K., et al., A survey of attack and defense techniques for reputation systems, ACM Comput. Surv. (2009)
  • Jøsang, A., Golbeck, J., Challenges for robust trust and reputation systems, in: Proc. 2009 5th Int. Work. Secur. Trust...
  • Sun, Y., et al., Attacks on trust evaluation in distributed networks
  • Jøsang, A., Robustness of trust and reputation systems: Does it matter?
  • Muller, T., et al., On robustness of trust systems
  • Aldini, A., Formal approach to design and automatic verification of cooperation-based networks, Int. J. Adv. Internet Technol. (2013)
  • Aldini, A., Modeling and verification of trust and reputation systems, Secur. Commun. Netw. (2015)
  • Aldini, A., Design and verification of trusted collective adaptive systems, ACM Trans. Model. Comput. Simul. (2018)
  • Ghasempouri, S.A., et al., A formal model for security analysis of trust and reputation systems
  • Herrmann, P.
  • Jalaly Bidgoly, A., et al., Trust modeling and verification using colored Petri nets
  • Jalaly Bidgoly, A., et al., Quantitative verification of beta reputation system using PRISM probabilistic model checker
  • Jalaly Bidgoly, A., et al., Modelling and quantitative verification of reputation systems against malicious attackers, Comput. J. (2015)
  • Jalaly Bidgoly, A., et al., Modeling and quantitative verification of trust systems against malicious attackers, Comput. J. (2016)
  • Ghasempouri, S.A., et al., TRIM-checker, Computational Trust Lab. (2019)
  • Mármol, F.G., et al., Security threats scenarios in trust and reputation models for distributed systems, Comput. Secur. (2009)
  • Mármol, F.G., et al., Towards pre-standardization of trust and reputation models for distributed and heterogeneous systems, Comput. Stand. Interfaces (2010)
  • Noorian, Z., et al., The state of the art in trust and reputation systems: A framework for comparison, J. Theor. Appl. Electron. Commer. Res. (2010)

    Seyed Asgary Ghasempouri received the BS and MS degrees in Computer Engineering from Iran University of Science and Technology and Sharif University of Technology, Tehran Iran, in 2005 and 2007 respectively. He joined the Faculty of Azad University Qaemshahr Branch in 2007 as an instructor of Computer Engineering. He received his Ph.D. in Computer Engineering from University of Isfahan, Isfahan, Iran in 2019. His research interests include Formal Specification and Verification, Model-Checking, Computational Trust, Compiler Theory, Evolutionary Computing, and Data Mining.

    Behrouz Tork Ladani received his B.Sc. in Software Engineering from University of Isfahan, Isfahan, Iran in 1996, and M.Sc. in Software Engineering from Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran in 1998. He received his Ph.D. in Computer Engineering from Tarbiat Modarres University, Tehran, Iran in 2005. He is currently associate professor and Dean of Faculty of Computer Engineering in University of Isfahan. His research interests include Software Security, Formal Specification and Verification, Soft Security, and Computational Trust. He is member of Iranian Society of Cryptology (ISC).
