Introduction

With the emergence of technologies such as cloud computing, fog computing, and edge computing, the world has moved towards the centralization of data. Cloud service providers enable remote, centralized storage of data that can be accessed by any device, removing dependence on a particular operating system (OS) or filesystem [3]. A single instance of a file can be accessed on a laptop, smartphone, or tablet, providing a convenient way to reach one's data. With IoT smart devices generating relatively large volumes of data, the Cloud can handle the storage and processing needs, but the main bottleneck is the bandwidth of the network that carries data to the Cloud and back. Applications demanding real-time processing require low response times, and if every device sent its real-time data to the Cloud for processing, response times would grow sharply. To cater to such needs, a cloudlet or data center is required that gives IoT devices the ability to store and process data near their locations. This technique is known as edge computing.

Edge computing is a paradigm shift in cloud computing, although its roots go back to the 1990s with the introduction of Content Delivery Networks (CDNs) [4]. It changes the way the Cloud is used with IoT devices, ensuring real-time processing by providing a cloudlet/data center near the edge of the data source [2]. Edge computing introduces an intermediate layer between the Cloud and the IoT devices; this layer accommodates devices that can perform analysis and data storage. The storage on these devices can be either permanent or temporary, depending on the nature of the implementation [1]. Edge computing thus plays a vital role in mitigating problems such as battery constraints, wasted bandwidth, long response times, and privacy and data-safety risks.

A general view of edge computing architecture [5] shows that it comprises four functional layers: edge devices, edge networks, edge servers, and core infrastructure. These layers perform the following functions:

  1. Edge devices (end users): Numerous devices, such as IoT devices, connected to the edge network that act as both data producers and data consumers.

  2. Edge network: The infrastructure that connects edge devices, edge servers, and the core infrastructure through the internet, data center networks, and wireless networks.

  3. Edge servers: Servers owned and provided by infrastructure providers, responsible for delivering virtualized management services. Edge data centers connected to the traditional Cloud are also deployed at this layer.

  4. Core network: Provides network access (e.g., internet and mobile networks), management functions, and computing services through the centralized Cloud.

Along with the multiple opportunities provided by edge computing come many challenges, such as data security and privacy [5], trust, edge-node computation, offloading and partitioning, service quality, deployment strategies, workload, and policies [6]. Trust is the final overall grade of a device, dependent upon the ratings provided by other devices; it can be viewed as the opinion of the community regarding a device based on all their interactions with it. The focus of this research is to highlight trust management issues in edge computing architecture, study existing trust management systems developed for edge computing, and finally propose a trust management system and evaluate existing schemes against the proposed scheme. As our reliance on IoT devices increases, the shift towards edge technology is inevitable [7]. In IoT, the trust of smart devices can be maintained by the Cloud, whereas in edge computing the smart devices communicate data directly with each other to mitigate latency [8]. Therefore, a trust management mechanism is required to maintain the legitimacy of devices and the data they provide, and to detect data/information flows from rogue devices.

Contributions

Through this research, the following contributions are made:

  • Highlighting of trust management issues in edge computing architecture, performing a comprehensive study of existing systems and models already proposed for solving issues regarding trust management.

  • Proposing a novel trust management system to ensure the security of data and strong trust reliance on edge devices for fast and efficient communication and processing.

  • Implementation and evaluation of the proposed trust management system and comparison of existing systems with the proposed model.

The remainder of the paper is organized as follows: “Literature review” gives a comparative analysis of the existing trust models. “Proposed trust management model” describes the architecture and working of the proposed framework and explains the proposed methodology. “Implementation and results” shows the results obtained after the implementation of our model. “Conclusion” concludes the paper and highlights future directions.

Literature review

Security risks such as replay attacks, message tampering, and forging constantly discourage users from utilizing edge nodes for computing [9]. Hence, there is a pressing need to establish trust in edge computing, and many trust models have been developed by scholars to tackle this problem.

Yuan and Li [11] introduced a trust model designed specifically for edge computing that can be used at larger scales. Their multi-source feedback model adopted the idea of a global trust degree (GTD), combining direct trust and feedback trust from brokers and edge nodes, and introduced three main layers: the network, broker, and device layers. Feedback was generated from edge devices as well as service brokers, hence the name multi-source. Global convergence time (GCT) was used to evaluate efficiency. Experiments were conducted using the NetLogo event simulator and a personalized similarity measure (PSM). To assess reliability, the task failure ratio (TFR) was computed.

Ruan et al. [10] proposed a trust model, based on measurement theory, to evaluate both applications and nodes in a network and to help configure resources in real time. Two metrics were defined: trustworthiness, which measures the probability of good behavior, and confidence, which evaluates trustworthiness against measurement error. The framework considered multiple levels of trust, such as device trust, task trust, and device-to-device trust. A new trust assessment algorithm was introduced to evaluate the model, along with a dynamic way to allocate resources for a task using a trust threshold value while avoiding redundancy; however, no further results or implementation were presented. Alsenani et al. [12] demonstrated SaRa (A Stochastic Model to Estimate Reliability of Edge Resources in Volunteer Cloud), targeting volunteer cloud computing, in this case CuCloud (a Volunteer Computing as a Service (VCaaS) system) with a client/server architecture comprising volunteer machines and dedicated servers. SaRa is a probabilistic model that estimates the reliability of nodes by exploiting their behavior; its main parameters include task behavior and characteristics, e.g., success, failure, and priority. To validate the approach, Google clusters were used and the testing environment contained hundreds of machines. Compared to other probabilistic models [13], SaRa achieved greater precision.

Manufacturers of smart services constantly need to provide configuration updates and control commands and to send and receive status information. Industrial IoT controllers must therefore protect themselves from unauthorized tampering and ensure the accuracy of their inputs. To tackle this problem, Pinto et al. [14] demonstrated a trust mechanism for edge devices in an industrial IoT environment that achieves confidentiality, integrity, and authenticity at both the hardware and software levels. Since TrustZone is gaining massive attention due to Advanced RISC Machine (ARM) processors, a TrustZone-based architecture was a sensible choice; it implemented a Trusted Execution Environment (TEE) in a slightly modified Real-Time Operating System (RTOS), but no further implementation was presented. Mobile phones connected to wireless networks comprise more than half of IoT devices, and these devices are vulnerable to many security threats. Therefore, Rehiman and Veni [15] focused on data privacy and proposed a trust management framework for all three layers of the IoT architecture: the application, network, and sensor layers. The architecture revolved around a security manager with enough memory and processing capacity to perform all tasks, minimizing the load on resource-constrained devices. A zero-knowledge protocol, an access control mechanism, context-aware location privacy, and an Elliptic Curve Cryptosystem were some of the techniques used for authentication, along with public key generation and distribution by the security manager. Moreover, a layered encryption scheme and a data origin authentication scheme were proposed for packet anonymity and confidentiality. The model was simple and addressed all challenges, but no proper demonstration or evaluation was carried out. Furthermore, more than half of the computation was carried out by the security manager, which, if it fails, brings down the whole system.

A trust management model based on a centralized architecture for IoT was proposed by Alshehri and Hussain [16]. It relies on a supernode that works like a router and additionally monitors the whole network in a clustered environment led by master nodes, which supervise cluster nodes. The supernode consists of three modules: (a) an Application Programming Interface (API) module that provides an interface for communication between cluster nodes and master nodes; (b) a trust management module that enables trusted communication between supernodes, master nodes, and cluster nodes by providing authentication data; and (c) a trust communication module that supports two types of communication, trust messages and value messages, for establishing trust values. This model is unique in suggesting a centralized approach, but that also comes with disadvantages. Alshehri and Hussain [28] proposed a trust management system based on a fuzzy security protocol; its trust metric takes direct and indirect trust scores as well as routing scores into consideration. In [29], Alshehri et al. also proposed a distributed trust management model inspired by the clustering technique, consisting of a Cluster Node component and a Master Node component.

Kim and Keum [17] proposed an IoT trust domain to protect IoT infrastructure from malicious attacks through a trustworthy gateway system, usable in smart homes and smart offices. The gateway system mapped device IP addresses to IDs, and an ID table served as a repository storing the ID information of all connected devices in the network. Theoretically, the system performed well compared to an untrusted domain, but no implementation was provided.

Asiri and Miri [18] presented a model that used distributed neural networks to classify trustworthy nodes. They defined Alpha nodes, which are more capable of controlling hubs and managing jobs and do not change frequently. Node functionality was used to form clusters. The type of data being transmitted was considered in the profiling phase and used as one of the parameters for data security. Trustworthiness was determined based on threshold ratings provided by nodes. The main phases included data collection, virtual clustering, weight calculation, transaction, trust computation, node classification, and rating updates. Though the system seems reliable, no general demonstration of the model was presented.

Mendoza and Kleinschmidt [19] offered a model that identifies malicious nodes based on the services they choose to provide. Initially, all nodes are assigned a trust value of zero, and neighbor discovery starts with the sending of announcement packets. When a node provides a service, its trust value increases; if it fails to do so, its trust value decreases. This trust scheme was implemented in the Cooja simulator provided by Contiki OS, and malicious nodes were detected successfully. To share information, nodes exchange credentials for verification, which involves a third party. However, in tactical environments such as search-and-rescue missions or military operations, access provision is limited, reliance on hardware is problematic, and early sharing of credentials is not possible. To solve these problems, Echeverría et al. [20] proposed a model that uses key generation and distribution for disconnected environments using tactical cloudlets that allow data staging, filtering, forward deploying, and data collection points. The proposed trust solution uses Identity-Based Cryptography (IBC), Stanford Identity-Based Encryption (IBE), and OpenSSL ciphers. To evaluate the system, the threat model of the Microsoft Security Development Lifecycle (SDL) was used, which proposes 60 potential threats, of which 14 were considered relevant to the tactical environment. After implementation using open-source tactical cloudlets, 12 of the 14 threats were fully or partially handled.

Sharma et al. [21] proposed a generic trust management framework for IoT infrastructure that defines all requirements for computing the trust of edge devices, with update and maintenance. The concepts of trustor and trustee were used to evaluate the system. The framework consists of four phases: (a) the first phase gathers information through parameters such as experience, reputation, and knowledge; (b) models including machine learning, flow, fuzzy, probabilistic, and statistical models are used for trust computation; (c) two architectures, centralized and decentralized, are used for trust dissemination; and (d) the final phase covers update and maintenance, which occur in event-driven and time-driven scenarios. Since it is a generic framework, no implementation was presented.

Wang et al. [22] defined trust mechanisms as self-organizing items that make informed decisions based on trust status, considering three main elements: service, decision making, and self-organization. A typical IoT infrastructure uses three main networks and layers: sensor, core, and application. The model used a formal semantics-based language and fuzzy set theory to form trust. The results achieved were consistent with an ideal situation, and even though no working implementation of the model was demonstrated, it laid the foundation for future models of IoT layered architecture.

Kagal et al. [23] presented a trust management scheme that restricted the redelegation of tasks without following a delegation protocol and dealt with permissions in a distributed supply chain management environment. CIIMPLEX EECOMS was chosen as the experimental environment, and security agents performed verification and authentication based on IDs and verification certificates issued by a Certification Authority (CA), which were further used as tickets for access to resources. Permission to delegate a task to or by an agent was also granted by security agents, and a log of delegations was maintained using Prolog. Delegations could be addressed to groups, time-bound, action-restricted, strictly re-delegatable, or re-delegatable.

Although the existing approaches have introduced new concepts for evaluating trust, their support is weak because implementations of the proposed frameworks are incomplete or, in some cases, absent. In contrast, the Quality of Service (QoS) parameters used in the presented model provide novelty, along with a real-time implementation yielding better results.

A detailed analysis of the existing models is shown in Table 1.

Table 1 Analysis of models

Proposed trust management model

The architecture of the proposed trust management model is shown in Fig. 1. The proposed model consists of two main modules: a rating management module and a trust calculation module.

The rating module computes ratings based on QoS parameters and multi-criteria decision analysis; these ratings in turn feed a covariance matrix from which Singular Value Decomposition (SVD) and Principal Component Analysis (PCA) vectors are calculated in the component analysis module. The data are then transferred to the prediction module, which predicts the trust for the requested device.

Edge computing has vast applications due to benefits such as low latency, Cloud offloading, and bandwidth savings, and it is being adopted globally alongside existing IoT infrastructure wherever those benefits apply. However, it also suffers from problems such as maintaining the reliability of the data provided by devices, so trust management becomes an important factor that provides insight into the device whose data are being received. In environments such as IoT and edge computing, devices are unaware of each other's location and intentions. A device could be sending malicious or wrongful data to other devices, or causing problems such as DoS in the network. To reduce the impact of such devices on the network, a lightweight trust management model is required that calculates trust based on device ratings. In the proposed model, each device maintains a rating table for the devices it communicates with; similarly, edge servers or data centers maintain a rating table that stores ratings from all devices. Trust is calculated from those ratings, which are derived from quality of service parameters. Devices exhibiting bad QoS parameters can be classified as malicious based on their network behavior, as they could be engaged in an active DoS attack [27].

Fig. 1
figure 1

Architecture of trust management model

An overview of the process is shown in the flowchart of Fig. 2. In the first phase, a connection is established and communication starts between edge devices. Ratings are calculated from these communications by computing the covariance matrix and applying singular value decomposition. Trust is then predicted; if the device is new, the cycle repeats at least five times to calculate its average trust. A detailed description is presented below:

Fig. 2
figure 2

Flow diagram of proposed system

QoS parameters

In an edge computing model, each device can act as a server or a client. When a device is providing a service or data, it is categorized as a server device; in scenarios where the device is obtaining a service or data from another device, it is categorized as a client device. Based on this criterion, in our system each client device provides feedback on QoS parameters at the end of a communication session. The selected QoS parameters have been used to classify network traffic patterns under DDoS attack in a real-time network [27]. Therefore, we can assume that devices with bad QoS parameters are either faulty or malicious.

Ratings are derived from these QoS parameters using the multi-criteria decision analysis technique, and the overall device trust is based upon these ratings. Trust in this model is an arithmetic value in the range 0 to 5, where 5 denotes an extremely trustworthy device and 0 an extremely untrustworthy one.

The process of our system starts after every device has communicated at least once with every other device. During the cold start, each edge server registers all devices connected to it, and the information is forwarded to the Cloud. After registration, the system enters observation mode, which lasts for one transaction per device.

Packet loss percentage

It is defined as the number of packets lost during the communication between two devices. High packet loss is considered bad for the network and negatively affects the device rating, whereas a low packet loss percentage is considered good for the network and positively affects the device rating. It can be calculated using (1) [20]:

$$\begin{aligned} \text {PacketLossPercentage}= \frac{\sum _{i=0}^{n}\text {PL}}{\sum _{i=0}^{n}\text {PS}}*100, \end{aligned}$$
(1)

where \(\text {PL}=\) packets lost and \(\text {PS}=\) total packets sent.

Latency

Latency can be defined as the amount of time required for a packet to be transmitted from source to destination. Latency depends on congestion in the network: during periods of high congestion, latency increases, causing a low rating. It can be calculated using (2) [20]:

$$\begin{aligned} \text {Latency}=\sum {(\text {PATime}_i-\text {PSTime}_i)}, \end{aligned}$$
(2)

where \(\text {PATime}_i=\) packet arrival time and \(\text {PSTime}_i=\) packet send time.

Jitter (packet delay)

Jitter is the variation in the time between the arrivals of packets at the destination within a particular time frame. It indicates the consistency and stability of the network. It can be calculated by (3) [20]:

$$\begin{aligned} \text {Jitter}=\sum _{i=0}^{n}{\left( \frac{{\text {Delay}}_i-\overline{\text {Delay}}}{N}\right) }, \end{aligned}$$
(3)

where \(\overline{\text {Delay}}=\) mean packet delay and \(N=\) number of packets.

Throughput

Throughput is the number of bytes transferred from source to destination per unit time. It is measured in bits per second (bps) using (4) [20] as follows:

$$\begin{aligned} \text {Throughput}=\frac{\sum _{i=0}^{n}{(\text {Packets Received})}}{\sum _{i=0}^{n}{(\text {StopTime}-\text {StartTime})}}. \end{aligned}$$
(4)

Task failure ratio

The task failure ratio is the number of tasks that failed to be received by the client or generated by the server. This parameter depends upon the applications running on the network. It can be calculated using (5):

$$\begin{aligned} \text {pft}=\left( \frac{\text {failed transactions}}{\text {total number of transactions}}\right) \times 100. \end{aligned}$$
(5)
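The five QoS parameters of Eqs. (1)–(5) can be sketched in code as below. This is an illustrative implementation, not the paper's simulation code; the function and argument names are assumptions, and jitter uses the absolute deviation from the mean delay, since a signed sum of deviations would cancel to zero.

```python
def packet_loss_percentage(packets_lost, packets_sent):
    """Eq. (1): lost packets as a percentage of packets sent."""
    return sum(packets_lost) / sum(packets_sent) * 100

def latency(arrival_times, send_times):
    """Eq. (2): total transit time summed over all packets."""
    return sum(a - s for a, s in zip(arrival_times, send_times))

def jitter(delays):
    """Eq. (3): mean (absolute) deviation of per-packet delay from the average."""
    n = len(delays)
    mean_delay = sum(delays) / n
    return sum(abs(d - mean_delay) / n for d in delays)

def throughput(bytes_received, start_time, stop_time):
    """Eq. (4): bits transferred per second (bps) over the session."""
    return sum(bytes_received) * 8 / (stop_time - start_time)

def task_failure_ratio(failed, total):
    """Eq. (5): failed transactions as a percentage of all transactions."""
    return failed / total * 100
```

A client device would evaluate these at the end of each communication session and feed them into the rating step described next.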

Multi-criteria decision analysis (MCDA)

The multi-criteria decision analysis technique is used to select the best option from various alternatives based on preferences over certain criteria. We explain MCDA in the context of our implemented system.

Defining criteria

Multiple quality of service parameters can be considered when communication is established between edge devices. Our experiment is based on the criteria shown in Table 2.

Table 2 Criteria for predicting ratings

Each parameter has criteria with a range of scores based on the importance of the resulting values [24].

Table 3 Points division for each rating criteria
Table 4 Sample alternatives depicting rating criteria values

We consider five parameters: latency, packet loss percentage, jitter, throughput, and task failure ratio. These parameters have different units and separate criteria for their contribution to the calculation of ratings [30]. Hence, we define good and bad criteria for these parameters as depicted in Table 3. As shown in the table, parameters that have an inverse effect on device performance are already scored inversely; hence, the beneficial criteria do not need to be divided by a minimum value to make all parameters comparable. The measurement criteria presented in this table are based on the research presented in [25].

When communication is established between devices, QoS parameters are recorded; we calculate these parameters using an edge computing simulation. Since four parameters, i.e., jitter, packet loss percentage, task failure ratio, and latency, are non-beneficial criteria in the contribution to ratings, their minimum values achieve the highest score, whereas throughput is a beneficial parameter, so its higher values get the maximum score. Scores are allocated from 1 to 5 because we have 5 parameters and want the final rating to be out of 5. In alternative scenarios where more than 5 parameters are considered, the scores can be increased accordingly, and vice versa.
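The 1-to-5 scoring of beneficial and non-beneficial parameters can be sketched as below. The bin boundaries in the usage example are hypothetical placeholders, not the actual criteria of Table 3, which should be consulted for the real ranges.

```python
def score(value, bins, beneficial):
    """Map a raw parameter value to a 1-5 score.

    bins: four ascending thresholds splitting the value range into five bands.
    beneficial: True if larger values are better (e.g., throughput);
                False for non-beneficial parameters (e.g., jitter, latency).
    """
    band = sum(value > b for b in bins)  # 0..4: which band the value falls in
    return band + 1 if beneficial else 5 - band

# Hypothetical bins for illustration only:
bins = [5, 10, 20, 40]
high_throughput_score = score(50, bins, beneficial=True)   # best band -> 5
high_jitter_score = score(50, bins, beneficial=False)      # worst band -> 1
```

Non-beneficial parameters are simply scored on the reversed scale, which is why no extra normalization by a minimum value is needed.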

After computing the scores, we obtain values from 1 to 5 that are in the same unit; this saves us from taking a range of values and normalizing it to obtain a weighted normalized decision matrix. The rating is denoted as follows:

$$\begin{aligned} R_i=\sum _{j=1}^{n}w_{ij}a_{ij}. \end{aligned}$$
(6)

The rating of device i equals the sum, over the j parameters, of each parameter's weight multiplied by its score for device i. The criterion weight is set between 1 and 100% for each parameter based on its importance in the scenario. The final rating obtained is used in the calculation of the average trust of a device.
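Equation (6) is a weight-score dot product, which can be sketched as follows; the uniform weights mirror the 0.20-per-parameter example used later (Table 5).

```python
def rating(weights, scores):
    """Eq. (6): device rating as the weighted sum of its 1-5 parameter scores."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1 (Eq. 7)"
    return sum(w * a for w, a in zip(weights, scores))

# Uniform weights over latency, packet loss, jitter, throughput, task failure:
weights = [0.20] * 5
best_rating = rating(weights, [5, 5, 5, 5, 5])   # best-case scores
worst_rating = rating(weights, [1, 1, 1, 1, 1])  # worst-case scores
```

With five parameters scored 1-5 and weights summing to 1, the rating naturally lands in the 1-5 range, matching the paper's trust scale.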

Discussion on the proposed methodology

This section further elaborates our proposed scheme through an example scenario created from a subset of data extracted from our main simulation, covering the best- and worst-case scenarios.

Calculating QoS parameters

The methodology for calculating QoS parameters is as follows. Determining alternatives: values of QoS parameters are extracted from our main simulation, as represented in Table 4. The values of each device are chosen so that all scenarios are covered, ranging from the best-case to the worst-case scores.

Assigning weights: we assign a relative weight to each parameter based on its importance in a given scenario. The weight of each QoS parameter can be set as per the requirements of the network: for some networks, device throughput is much more important than the other parameters, while for others a low task failure ratio is more desirable. These values can be tuned according to need.

The sum of all weights must be equal to 1:

$$\begin{aligned} \sum _{i=1}^{n}{w_i=1}. \end{aligned}$$
(7)
Table 5 Weightage for rating calculation

We assign more weight to those parameters that hold a strong position in the evaluation of trust among devices; the weights always sum to 100%. In this scenario, we have assigned an equal weight of 0.20 to all parameters, as shown in Table 5.

Value of scores: the parameters of each device are assigned scores based on the values recorded during the communication session, as described in Table 6.

Table 6 Scores calculated based on sample alternatives

Final score: Multiply the weight assigned to each parameter with its score using (8) as shown in Table 7.

$$\begin{aligned} R_i=w_ia_i. \end{aligned}$$
(8)
Table 7 Multiplication of scores (Table 6) and weightage (Table 5)

Final ratings: the final rating for each device is obtained by summing all final scores of the device's QoS parameters using (9) and (10), evaluated in Table 8:

$$\begin{aligned} R_{ij}= & {} \sum _{j=1}^{n}w_{ij}a_{ij} \end{aligned}$$
(9)
$$\begin{aligned} R_{ij}= & {} w_t\left( T_{ij}\right) +w_p \left( P_{ij}\right) +w_j\left( J_{ij}\right) \nonumber \\&+w_l\left( L_{ij}\right) +w_f\left( F_{ij}\right) . \end{aligned}$$
(10)
Table 8 Rating of devices

Table 9 shows the final ratings obtained for each device. Devices with lower scores have low ratings, whereas devices with higher scores have high ratings.

Singular value decomposition

Singular value decomposition is a matrix factorization technique used mainly for dimensionality reduction: it reduces the dimensions of large data sets while preserving as much information as possible. It can also be used in collaborative filtering [36], a technique that predicts user preferences in a recommender system based upon past user preferences [21]. In our system, we use collaborative filtering to determine the trust of a device.

Table 9 Final rating obtained by adding criteria for each device

There are two main scores awarded to a device in the proposed scheme:

  1. 1.

Ratings: The rating of a device is a value determined from the communication between two devices. A device is rated on the basis of its QoS parameters using Eq. (10). A rating can be viewed as the opinion of one device based on its own interaction.

  2. 2.

Trust: Trust, on the other hand, is the final overall grade of the device, dependent upon the ratings provided by other devices. Trust can be viewed as the opinion of the community regarding a device based on all their interactions with that device.

The main difference between a rating and trust is that a rating is computed from factors deriving from one-to-one communication between two devices. A rating distinguishes a good device from a bad one based on the analysis of a single device; even if many devices rate a single device, each individual rating still lacks the input of the community.

In singular value decomposition, we take a rectangular \(X\times Y\) matrix and decompose it into three other matrices. The rating matrix serves as the input for SVD: each device being rated is mapped to a column, and the ratings of the respective device are mapped to rows (the matrix is given in Table 10):

$$\begin{aligned} A=USV^{{\mathrm{T}}}. \end{aligned}$$
(11)

Here U is an \(X\times Y\) matrix with orthonormal columns, so \(U^{\mathrm{T}}U=I\); V is a \(Y\times Y\) orthogonal matrix, so \(V^{\mathrm{T}}V=I\); and S is a diagonal matrix of singular values. I denotes the identity matrix, whose diagonal entries are 1 and all other entries 0.

Covariance matrix: the covariance matrix is calculated by combining the rating vectors. This step helps identify how the variables are correlated. The covariance matrix is symmetric; to achieve this symmetry, the following formula is utilized:

$$\begin{aligned} B=A\times \text {transpose}\left( A\right) =AA^{{\mathrm{T}}}, \end{aligned}$$

let:

$$\begin{aligned}&A = \left[ \begin{matrix}0&{}\quad 1\\ 1&{}\quad 1\\ 1&{}\quad 0\\ \end{matrix}\right] \Rightarrow AA^{{\mathrm{T}}}=\ \left[ \begin{matrix}0&{}\quad 1\\ 1&{}\quad 1\\ 1&{}\quad 0\\ \end{matrix}\right] \left[ \begin{matrix}0&{}\quad 1&{}\quad 1\\ 1&{}\quad 1&{}\quad 0\\ \end{matrix}\right] \nonumber \\&\qquad = \left[ \begin{matrix}1&{}\quad 1&{}\quad 0\\ 1&{}\quad 2&{}\quad 1\\ 0&{}\quad 1&{}\quad 1\\ \end{matrix}\right] = B. \end{aligned}$$
(12)

Compute the eigenvalues of B:

$$\begin{aligned} B x =\lambda x, \end{aligned}$$

so,

$$\begin{aligned} (B - \lambda I ) x = 0. \end{aligned}$$
(13)

At the end of the process, the average trust of each device is calculated so that a device which has had no direct transaction with another device can determine its trust from the past interactions of other devices. When a new device \(d_i\) enters the network, it is registered by the Cloud, and before it initiates communication with another device \(d_j\), it can check the average trust value of that device: if it is higher than the threshold value of 2.5, the device is considered trusted by the community, i.e., by the devices it has previously had transactions with. At the end of the communication, if \(d_i\) acted as a server device, it is rated by \(d_j\), and these ratings are forwarded to incremental Singular Value Decomposition (SVD), which updates its trust based on the communications it has had with other devices.
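The worked example above can be reproduced numerically; the sketch below forms \(B=AA^{\mathrm{T}}\) from the small example matrix and computes its eigenvalues with NumPy.

```python
import numpy as np

# Example matrix from Eq. (12):
A = np.array([[0, 1],
              [1, 1],
              [1, 0]])

# Symmetric matrix B = A A^T, as in the covariance step:
B = A @ A.T          # [[1, 1, 0], [1, 2, 1], [0, 1, 1]]

# Eigenvalues of the symmetric matrix B (ascending order):
eigvals = np.linalg.eigvalsh(B)
```

For this example the eigenvalues come out as 0, 1, and 3; the largest eigenvalues correspond to the most significant eigenvectors used in the reconstruction step.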

Matrix reconstruction

The most significant eigenvectors are utilized to construct the final matrix, which represents the final predicted trust values. The predicted values depend upon a criterion, namely how many of the generated eigenvectors are utilized; the eigenvectors are arranged in descending order of significance, so the first is the most significant. During matrix reconstruction, the dot product of the ratings and the eigenvectors is calculated:

$$\begin{aligned} A=\text {Dot}\left( \text {ratings},\text {transpose}\left( U\right) \right) . \end{aligned}$$
(14)

The values are sorted by significance and the most significant ones are selected according to some criterion; values beyond a certain threshold are discarded.

$$\begin{aligned} \text {PrT}=\left[ A_{m\times n}\right] \left[ U_n\right] . \end{aligned}$$
(15)
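The reconstruction step in Eqs. (14) and (15) can be sketched as a truncated SVD: keep only the k most significant singular vectors and rebuild the predicted-trust matrix from the truncated factors. This is a minimal NumPy illustration (the rating values and the helper name `reconstruct_trust` are hypothetical; the paper's implementation is in MATLAB):

```python
import numpy as np

# Hedged sketch of matrix reconstruction: singular values come out of the
# SVD already sorted in descending order, so truncating at k implements the
# "discard values after a certain threshold" criterion.
def reconstruct_trust(ratings, k):
    U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Illustrative rating matrix (values are made up)
R = np.array([[4.0, 0.0, 3.5],
              [0.0, 2.5, 4.0],
              [3.0, 4.5, 0.0]])
PrT = reconstruct_trust(R, k=2)  # rank-2 matrix of predicted trust values
```

With k equal to the full rank, the reconstruction reproduces the original ratings exactly; smaller k smooths the matrix toward its dominant structure, which is what produces the predicted values for unseen device pairs.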

Incremental singular value decomposition

Up to this point, the network has been set up with a fixed number of devices. One key case to handle is what happens when a new device k starts communicating with the edge nodes of our system. Of course, a whole new system cannot be established from scratch to observe the trust level of this new device k. Therefore, we have implemented an incremental SVD technique for predicting the trust of devices newly added to the network. This method is a continuation of the SVD presented previously, starting from Eq. (16).

We have the rating matrix R whose columns contain ratings of the devices.

Let

$$\begin{aligned} Z=\frac{U}{R}= U^{\mathrm{T}}R. \end{aligned}$$
(16)

This is the orthogonal projection of R onto the subspace spanned by U, known as the eigen-coding.

Let

$$\begin{aligned} H =(I - UU^{{\mathrm{T}}})R = R - UZ. \end{aligned}$$
(17)

This is the component of R which is orthogonal to the subspace spanned by U, and I is the identity matrix.

Let

$$\begin{aligned} X = \frac{K}{H} = K^{\mathrm{T}}H. \end{aligned}$$
(18)

In Eq. (19), K is an orthonormal basis of H and X is the projection of R onto the space orthogonal to U. Appending the new columns R to the existing factorization then gives

$$\begin{aligned} \left[ \begin{matrix}U\,\text {diag}(s)V^{{\mathrm{T}}}&{}\quad R\\ \end{matrix}\right] =\left[ \begin{matrix}U&{}\quad K\\ \end{matrix}\right] \left[ \begin{matrix}\text {diag}(s)&{}\quad Z\\ 0&{}\quad X\\ \end{matrix}\right] {\left[ \begin{matrix}V&{}\quad 0\\ 0&{}\quad I\\ \end{matrix}\right] }^{{\mathrm{T}}}. \end{aligned}$$
(19)

As in singular value decomposition, the left and right matrices in the product are unitary and orthogonal. The middle matrix, denoted D, is diagonal with a c-column border. We need to diagonalize D to update the SVD.

$$\begin{aligned} U^{\prime }\,\text {diag}(s^{\prime })\,V^{\prime {\mathrm{T}}} \xleftarrow {\text {SVD}} D. \end{aligned}$$
(20)
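Putting Eqs. (16)-(20) together, the update can be sketched as follows. This is an illustrative NumPy version (the function name `update_svd` and the matrix dimensions are assumptions; the paper's implementation is in MATLAB): given an existing factorization U diag(s) V^T, it folds in the new rating columns R without recomputing the SVD from scratch.

```python
import numpy as np

def update_svd(U, s, V, R):
    """Append new rating columns R to the factorization U @ diag(s) @ V.T."""
    Z = U.T @ R                          # Eq. (16): eigen-coding of R
    H = R - U @ Z                        # Eq. (17): residual orthogonal to U
    K, X = np.linalg.qr(H)               # K: orthonormal basis of H; X = K^T H, Eq. (18)
    k, c = s.size, R.shape[1]
    # Eq. (19): middle matrix D, diagonal with a c-column border
    D = np.block([[np.diag(s), Z],
                  [np.zeros((c, k)), X]])
    # Eq. (20): rediagonalize D, then fold the rotations into the factors
    Up, sp, Vpt = np.linalg.svd(D, full_matrices=False)
    U_new = np.hstack([U, K]) @ Up
    V_new = np.block([[V, np.zeros((V.shape[0], c))],
                      [np.zeros((c, k)), np.eye(c)]]) @ Vpt.T
    return U_new, sp, V_new
```

By construction, `U_new @ np.diag(sp) @ V_new.T` reproduces the original rating matrix with the new columns appended, which is exactly the property the incremental update needs.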

Devices with high scores in the QoS parameters have high average ratings, while devices with low QoS scores have low average ratings.

Implementation and results

The proposed model is composed of two main modules: a ratings module and a trust calculation module. For the ratings module we employ a multi-criteria decision analysis approach, and for the trust calculation module we use singular value decomposition. Trust is derived directly from the ratings. For new devices on the network, the incremental SVD algorithm is employed.

The proposed system is implemented in MATLAB, which supports matrix operations such as transpose and SVD. For simulating the communication between devices, we use EdgeCloudSim, which is implemented in Java. EdgeCloudSim provides the values that enable us to calculate our QoS parameters and derive our ratings through the multi-criteria decision analysis technique. After the required parameters are extracted from EdgeCloudSim, they are stored in a MySQL database accessed via a XAMPP server. Once the ratings are calculated, they are exported to MATLAB as a CSV file for the matrix operations.
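As a minimal illustration of the CSV hand-off (the file layout shown here is an assumption; the actual export format is whatever the database produces, and the paper reads the file into MATLAB rather than Python), the ratings can be loaded into a matrix and transposed with plain Python:

```python
import csv
import io

# Hypothetical CSV export: one row of ratings per device; zeros mean the
# two devices never communicated.
csv_text = """device,d1,d2,d3
d1,0,3.5,4.0
d2,2.5,0,3.0
d3,4.5,2.0,0
"""

rows = list(csv.reader(io.StringIO(csv_text)))
devices = rows[0][1:]                                   # ['d1', 'd2', 'd3']
R = [[float(v) for v in row[1:]] for row in rows[1:]]   # rating matrix
R_T = [list(col) for col in zip(*R)]                    # its transpose
```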

Implementation

At first, we computed trust for 10 devices, and then for 100. The need for an edge computing network to be trusted arises from the fact that it is a decentralized architecture; we assume that the Cloud is a trusted entity. Every device connected to the edge computing network is profiled. Our system works on a client-server basis, and in edge computing a device can act as both a client and a server. Client devices have the power to rate server devices, so after every service provided by a server device, that device is rated based on its QoS parameters. These QoS parameters are translated into ratings, which are saved in our database. The ratings are then exported to MATLAB as a CSV file, where we first multiply the rating matrix R by its transpose, as illustrated in Eq. (12).

This results in a square matrix, which is a necessary requirement for determining the eigenvectors of a matrix. When the SVD of a device's rating matrix is calculated, three matrices are obtained from the single input matrix: SVD is a method of decomposing a matrix into three other matrices, as represented in Eq. (11).

Example scenario

For the sake of example, we have taken a subset of our main experiment and included a scenario where the communications between 10 devices are recorded.

Rating matrix

The rating matrix R was generated from the quality of service parameters as shown in Eq. (6). In our rating matrix, we can observe that some values are 0, which implies that those devices have not communicated with each other.

This gives us a \(9 \times 10\) rating matrix for 10 devices, as shown in Table 10. To convert this into a symmetric \(10 \times 10\) matrix, we multiply the matrix R by its transpose \(R^{{\mathrm{T}}}\), since eigenvalues can only be computed for a square matrix.

$$\begin{aligned} R (10 \times 10) = R R^{{\mathrm{T}}}. \end{aligned}$$
(21)
Table 10 Rating matrix after calculation of final ratings

Singular value decomposition

Singular value decomposition [26] technique is used to generate three other matrices from R using Eq. (11).

Since U is an orthogonal matrix, \(U^{{\mathrm{T}}} U= I\); similarly, V is orthogonal, so \(V^{{\mathrm{T}}} V= I\). Here I is the identity matrix, whose diagonal entries are 1 and all other entries are 0.
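These orthogonality properties, and the eigenvector relation used in Eq. (22), are easy to verify numerically. The sketch below uses an arbitrary made-up rating matrix; U, s and V come from NumPy's SVD:

```python
import numpy as np

# Arbitrary illustrative rating matrix (4 raters x 3 devices)
R = np.array([[4.0, 2.5, 0.0],
              [3.0, 0.0, 4.5],
              [0.0, 3.5, 2.0],
              [4.5, 1.0, 3.0]])
U, s, Vt = np.linalg.svd(R, full_matrices=False)

assert np.allclose(U.T @ U, np.eye(3))           # U^T U = I
assert np.allclose(Vt @ Vt.T, np.eye(3))         # V^T V = I
# Columns of V are eigenvectors of R^T R with eigenvalues s^2
assert np.allclose(R.T @ R @ Vt.T, Vt.T * s**2)
```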

$$\begin{aligned} R R^{{\mathrm{T}}} = USV^{{\mathrm{T}}}\left( USV^{{\mathrm{T}}}\right) ^{{\mathrm{T}}} = US^2U^{{\mathrm{T}}}, \qquad R^{{\mathrm{T}}}R\,V = VS^2, \end{aligned}$$
(22)

where the columns of V are the eigenvectors and the diagonal of \(S^2\) holds the corresponding eigenvalues. Table 11 shows the experimental results from our implemented system.

Table 11 Resultant SVD experimental results U

S is a diagonal matrix, with entries only along the diagonal; it contains the square roots of all eigenvalues of \(RR^{{\mathrm{T}}}\), as shown in Table 12.

Table 12 Resultant SVD experimental results S

V is also an orthogonal matrix, containing the eigenvectors of \(RR^{{\mathrm{T}}}\), as described in Table 13. Table 14 shows the predicted trust after matrix reconstruction.

Table 13 Resultant SVD experimental results V
Table 14 Predicted trust after matrix reconstruction
Fig. 3 Trust calculation graph

Figure 3 presents the experimental results of trust calculation in our simulation system. These results were generated in the initial step, which included ten devices in the experiment.

As can be observed from the graph peaks, certain devices, i.e., devices 3, 5, 7 and 9, have comparatively higher trust values, while the other devices, i.e., devices 1, 2, 4, 6, 8 and 10, have lower trust values. These results support our study: devices with high ratings based on quality of service parameters turned out to be more trustworthy than low-rated devices, which scored less in the initial steps and yielded lower trust values (Table 15).

Table 15 Predicted trust P1 given by new device

Experimental results

In our experiment, we have taken a network of 100 devices, all of which communicate with each other in our simulation environment. Data from this simulation are extracted and the QoS parameters are calculated; these parameters are aggregated using multi-criteria decision analysis. Bar charts of the average ratings and average trust, together with scatter diagrams of the ratings and trust, are shown as follows:

The average rating graph is shown in Fig. 4. The trust was computed according to the criteria previously presented in this research; the average ratings give a preliminary view of where a device's overall trust will fall. Thus, Fig. 4 provides a general overview of the feedback, in the form of ratings, given to a device by the devices it has previously interacted with. Comparing the graphs in Figs. 4 and 6, we observe that devices exhibit both positive and negative correlation.

Fig. 4 Average rating graph

Fig. 5 Ratings scatter graph

Fig. 6 Average trust graph

Fig. 7 Trust scatter graph

Figure 5 represents the ratings scatter graph for the 100 devices in the network. This scatter graph shows the position of the ratings for each device. According to the trends in this figure, most ratings lie in the middle of the sample space, the lowest ratings are near 1.5, and a few devices have the highest rating of 5. Most of the ratings lie at an average distance from each other. The ratings give an estimate based on the QoS parameters, but they lack the factor of community input, which is an important aspect when calculating trust. Comparing Figs. 4 and 6 shows the general trend, while the true difference between ratings and trust can be observed by comparing the scatter graphs in Figs. 5 and 7. The x-axis of each scatter graph shows the devices, and the y-axis shows the ratings (Fig. 5) or the trust (Fig. 7). The points in Fig. 5 are spread widely, representing the individual ratings given to each device, whereas in Fig. 7 they cluster near the mean, reflecting the community factor affecting each device's trust value.

Referring to the final predicted trust, our simulation in Fig. 6 shows that the average trust of each device is above the mean value of 2.5. If a device's trust level falls below this mean, it may be engaged in active DoS attacks. Several approaches could be utilized to reduce the impact of such devices on the network and its data:

  • Such devices could be allocated low network resources, so that their immediate impact is removed from the network and other devices can continue communicating smoothly.

  • Such devices could be placed into isolation until they are checked for malfunctions or bad configuration, or patched for a vulnerability.

Conclusion

Edge computing as we know it today is an emerging technology in which the generation, distribution, storage, and computation of data are performed at the edge of the network. The major bandwidth concern of cloud computing is also resolved by edge computing, but new concerns such as privacy, security, latency, computation power at the edge, and offloading need to be addressed. This research targets the significant issue of the security and reliability of edge devices by proposing a trust management model to evaluate the credibility of edge nodes. The proposed model calculates trust based on the ratings provided by other devices. Each device maintains a rating table for the devices it communicates with. Similarly, edge servers or data centers also maintain a rating table that stores ratings from all devices. Trust is calculated from those ratings, which depend on quality of service parameters such as packet loss percentage, latency, jitter, throughput, and task failure ratio. Each parameter has criteria with a range of scores based on the importance of the resulting values, and weights are assigned to the devices accordingly. Trust management models using QoS parameters show improved results that can help identify malicious edge nodes in edge computing networks and can be used for industrial purposes.