Abstract

Throughput is a critical issue in blockchain technology, especially in blockchain sharding systems. Although sharding proposals can improve transaction throughput through parallel processing, each shard is in essence still a small blockchain. Because smart contract transactions are executed serially within a shard, performance does not improve significantly and there is still room for improvement. For performance optimization within a single shard, a smart contract concurrent execution strategy based on concurrency degree optimization is proposed and applied to each shard. First, the strategy characterizes the feature information of conflicting contracts by executing smart contracts, analyzes the factors that affect their concurrent execution, and clusters the contract transactions. Second, in shards with a high transaction frequency, a Variable Shadow Speculative Concurrency Control (SCC-VS) algorithm is proposed for smart contract scheduling; it considers the execution time, conflict rate, and available resources of contract transactions and relies on redundant computation to find a serializable schedule. Finally, experimental results show that the strategy increases the concurrency of smart contract execution by 39% on average and the transaction throughput of the whole system by 21% on average.

1. Introduction

Blockchain technology can be described as a distributed append-only ledger over a large peer-to-peer (P2P) network and has demonstrated great promise in several fields, including the Internet of Things (IoT), financial assets, the sharing economy, and copyright maintenance [1–3]. However, with the increasing transaction scale on blockchains, the performance defects of current blockchain platforms are gradually being exposed (e.g., low throughput and lack of concurrency) [4–6], and current platforms are increasingly unable to meet the needs of large-scale applications. As an effective means to improve system performance, sharding proposals have been applied to blockchain systems. Elastico is the earliest transaction sharding proposal for public chains. It divides the nodes in the network into multiple independent shards, each containing multiple nodes, so that different shards can process unrelated transactions in parallel and linearly improve the processing capacity of the blockchain system. Although the introduction of sharding proposals enables parallel processing between shards [7–9], each shard is in essence still a small blockchain system, and smart contracts (SCs) are executed serially [10–12]. There is no significant improvement in the internal performance of a shard, so SC execution may be limited by shard performance. If an address within one shard has a high transaction frequency, the shard will generate a large amount of transaction information, which leads to more data conflicts and causes shard congestion [13].

To improve the performance of a single shard, we propose an SC concurrent execution strategy. First, the information of SC characteristics in a conflict is recorded, the collected information is used as an important reference factor to solve contract conflicts, and the subsequent smart contract transactions (SCTs) are clustered. Second, to implement concurrent execution of the optimized processed transaction set, we propose a Variable Shadow Speculative Concurrency Control (SCC-VS) algorithm, which comprehensively considers the SC execution time Et, conflict rate Cr, and available resources R, relying on redundant computation to find a serializable scheduling, which effectively solves the performance degradation problem caused by the increased number of SCTs.

We summarize the contributions of this paper as follows:
(1) We propose a feature information collection technique for SCs. This method makes full use of the information resources of SCs, records real-time statistics of the feature information of SCs that conflict in the TSM-Module, and uses the collected feature information as an important reference factor for resolving subsequent SC conflicts.
(2) We design a clustering technique for SCTs. First, the SCTs are initially partitioned by traversing the collected feature information. Second, the execution time Et and conflict rate Cr of the SCTs are predicted, and based on the predicted values an aggregation function is used to divide them again. Finally, we obtain three sets: Set_δ, Set_λ, and Set_μ (produced by the Concurrency Degree Optimization Processing Module; their execution priority in the Transaction Scheduling Management Module runs from high to low). By changing the distribution of SCTs, their concurrency is optimized.
(3) We propose an SCC-VS algorithm for SC scheduling. The algorithm considers three dimensions of SCTs: execution time Et, conflict rate Cr, and available resources R. It relies on redundant computation to find a serializable schedule and executes the optimized SCTs concurrently. The transaction blocking and restart problems of existing methods are alleviated, and system resources are used effectively.
(4) We implement prototypes of the Concurrency Degree Optimization Processing Module (CDO-Module) and the Transaction Scheduling Management Module (TSM-Module) and apply them in a test environment. Extensive simulation experiments verify the performance improvement.

The paper is organized as follows: In Section 2, we review related work. In Section 3, we present a CDO-Module and explain its implementation in detail. In Section 4, we present a TSM-Module and analyze the theory. We present simulations and evaluations in Section 5 and conclude in Section 6.

2. Related Work

One important way to improve the performance of blockchain systems is to realize the concurrent execution of SCs. In [14], Luu et al. first proposed introducing a secure sharding protocol into a public chain platform, aiming to build a sharded network structure that supports parallel computing. Although that approach improves throughput to a certain extent, it uses a non-Turing-complete language to create SCs, so the flexibility of SCs is insufficient and restricted in some cases. Dickerson et al. proposed a two-stage concurrent execution protocol for SCs based on the Lock method [15], which aimed to improve the performance of SC execution. However, since the Lock method is a pessimistic concurrency control algorithm, it scales poorly and suffers serious blocking when the conflict rate between SCTs is high; experiments show that its efficiency is not high. Anjana et al. proposed another SC execution framework based on the timestamp ordering method [16], allowing SCTs to be executed concurrently in an optimistic manner; optimistic concurrency control works well under low-conflict loads because lock synchronization is avoided, but frequent data conflicts between SCs cause many transaction restarts. The multiversion concurrency control mechanism proposed by Zhang and Zhang [17] allows validators to verify the consistency and determinism of blocks by executing SCTs concurrently, which accelerates block verification, but the corresponding work of the miner was not discussed in depth.

In addition, the concurrency control algorithm has a direct effect on the execution efficiency of SCs. Existing Speculative Concurrency Control (SCC) algorithms are not well suited to the blockchain sharding environment. For example, an improved SCC-2S algorithm was proposed in [18] for uncertain real-time spatial transactions; it aimed to ensure data freshness and did not address the need to increase concurrency. The SCC-NS algorithm introduced in [19] corrects conflicts speculatively using redundant resources, but it incurs a huge system overhead. Reference [20] provided a detailed comparison and quantitative evaluation of major sharding mechanisms, along with insights into the features and restrictions of existing solutions; however, it gave no clear description of a sharding mechanism for SCTs and no analysis of the security of efficient concurrent execution of smart contracts in the context of blockchain sharding. Reference [21] proposed a secure and effective construction scheme for blockchain networks, which builds a directed acyclic graph (DAG) blockchain network through a network link protocol and applies sharding on top of the DAG blockchain to realize parallel transaction processing; however, it did not study the performance improvement within each shard. A comparison of the main contributions of existing work is shown in Table 1.

3. CDO-Module

For clarity, this paper refers to transactions on SCs as SCTs. An SCT's code is executed once by the miner and multiple times by the validators. For a blockchain system in a sharding context, the maximum concurrency of transaction execution is a very important performance index, where concurrency refers to the number of SCTs executed concurrently. High concurrency not only improves the utilization of system resources but also maximizes transaction throughput. Because existing SC execution strategies are not optimized for high-concurrency workloads, this work constructs the Concurrency Degree Optimization Processing Module (CDO-Module). This module contains two subunits: a Feature Information Acquisition Unit (FIA-Unit) and a Classification and Monitoring Unit (CAM-Unit). With the introduction of the sharding proposal, all SCTs in the network are mapped to different shards for processing. To reduce the performance decline caused by excessive SCTs in a single shard, the SCTs assigned to a shard are preprocessed in the CDO-Module.

3.1. FIA-Unit

To assist the efficient operation of the TSM-Module, this work sets up the FIA-Unit. This unit records, in real time, statistics of the feature information of SCs that conflict in the TSM-Module and uses the collected feature information as an important reference factor for resolving SC conflicts. This feature information includes the corresponding SC account addresses and the related member functions with a high conflict frequency. A Feature Information Statistics Table (FIS-Table) is generated from the mined feature information and maintained by the FIA-Unit. The FIS-Table records two types of data: Conflicting Contract Account Sets (C-CA Sets) and High-Conflict-Rate Member Function Sets (H-CRMF Sets).

Existing strategies adopt only one type of concurrency control method to resolve conflicts between SCs and do not consider how to make full use of SC information resources to further improve the concurrency of their execution [22]. An SC is in essence a reusable, immutable, automatically executed computer program running on the network that cannot execute itself. Its interaction modes are divided into external calls and internal calls [23]; that is, Externally Owned Accounts (EOA) call SCs and Contract Accounts (CA) call SCs. Correspondingly, the FIA-Unit's statistical analysis of feature information is divided into the analysis of C-CA Sets and of H-CRMF Sets. When the TSM-Module executes SCs concurrently, it records new conflicts and feeds them back to the FIA-Unit. The conflicting SC account addresses are recorded in the C-CA Sets of the FIS-Table, and the related SC functions with a higher conflict frequency are recorded in the H-CRMF Sets, ensuring that the statistics in the FIS-Table are continuously updated. Figure 1 shows the infrastructure model of each module and unit.
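As a rough illustration, the FIS-Table maintained by the FIA-Unit might be represented as in the following sketch; the class and field names are placeholders introduced here and are not taken from the paper's implementation.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative FIS-Table: account addresses of conflicting contracts (C-CA Sets)
// and, per account, the member functions observed to conflict frequently (H-CRMF Sets).
public class FisTable {
    // C-CA Sets: contract account addresses that have appeared in conflicts.
    private final Set<String> conflictingAccounts = ConcurrentHashMap.newKeySet();
    // H-CRMF Sets: for each account, member functions with a high conflict frequency.
    private final Map<String, Set<String>> highConflictFunctions = new ConcurrentHashMap<>();

    // Called by the TSM-Module when a new conflict is observed during concurrent execution.
    public void recordConflict(String accountAddress, String memberFunction) {
        conflictingAccounts.add(accountAddress);
        highConflictFunctions
                .computeIfAbsent(accountAddress, k -> ConcurrentHashMap.newKeySet())
                .add(memberFunction);
    }

    // Used by the CAM-Unit to check whether an SCT touches a contract with "feature information".
    public boolean hasFeatureInformation(String accountAddress) {
        return conflictingAccounts.contains(accountAddress);
    }

    public Set<String> conflictingFunctionsOf(String accountAddress) {
        return highConflictFunctions.getOrDefault(accountAddress, Set.of());
    }

    public static void main(String[] args) {
        FisTable table = new FisTable();
        table.recordConflict("0xContractA", "transfer");
        System.out.println(table.hasFeatureInformation("0xContractA")); // true
        System.out.println(table.conflictingFunctionsOf("0xContractA")); // [transfer]
    }
}
```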

3.2. CAM-Unit

Because the distribution of SCTs has a great impact on the performance of concurrent execution, this work uses the CAM-Unit. This unit divides SCTs into different sets according to relevant factors to optimize the concurrency. At the same time, the CAM-Unit limits the number of SCTs executed, which bounds the computational load, reduces the probability of conflicts, and keeps the concurrency speedup ratio Scon within the ideal range. For n nodes, Scon is

$$S_{con} = \frac{T_{ser}}{T_{con}} = \frac{W_{ser} + W_{con}}{W_{ser} + \dfrac{W_{con}}{n}} = \frac{1}{f + \dfrac{1-f}{n}},$$

where Tser represents the time for serially executing the SCTs, Tcon represents the time for concurrently executing them, Wser represents the load of the serial part, Wcon represents the load of the concurrent part, and f = Wser/(Wser + Wcon) is the proportion of the serial part.
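For instance, assuming a serial fraction of f = 0.2 and n = 8 nodes, the expression above gives

$$S_{con} = \frac{1}{0.2 + \frac{0.8}{8}} = \frac{1}{0.3} \approx 3.3,$$

so even with eight nodes the speedup stays well below 8, which is why the CAM-Unit limits the number of concurrently executed SCTs to keep Scon in the ideal range.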

Subsequent SCTs are preliminarily classified by traversing FIS-Table to determine whether each SC has “feature information.” For reclassification, the estimated execution time Et and conflict rate Cr of SCTs must be comprehensively considered. The complete grouping process is shown in Figure 2.

3.2.1. The Value of Execution Time (Et)

The sharding design scheme maps the many SCTs in the network to different shards by category. Therefore, to estimate Et for an SCT, we assume that "similar jobs have similar execution times" and use the Et of completed SCs to predict the Et of similar SCs. To estimate the Et of a contract transaction Jsc, the specific steps are as follows (a code sketch of this procedure follows the list):
(1) Because SCTs are accompanied by bandwidth consumption, storage consumption, calculation consumption, and so on, a template is first determined that treats these three consumption factors as attribute values: Bandwidth Consumption B (Mbps), Storage Consumption S (MB), and Calculation Consumption C (hash/s), giving the template {B, S, C}.
(2) Upper limits for the three consumption factors must be specified before an SCT Jsc is issued. According to the template {B, S, C}, SCTs similar to Jsc are selected to form a candidate set.
(3) Because the three attributes in the template {B, S, C} have different properties, dimensions, and orders of magnitude, the original data is first standardized to ensure reliable results: the min-max method linearly transforms each attribute into the interval [0, 1], eliminating the influence of different dimensions and simplifying the subsequent calculation. Each sequence B1, B2, ..., Bn, S1, S2, ..., Sn, C1, C2, ..., Cn is transformed as

$$x' = \frac{x - \min(x)}{\max(x) - \min(x)}.$$

(4) Within the candidate set, the numerical similarity between Jsc and each candidate is calculated with the measure below, and the M SCTs most similar to Jsc form the similar set. The Euclidean metric is usually used to measure distance: the larger the value, the farther the distance. In this paper its reciprocal is taken, so the farther the distance, the closer the value is to 0, indicating lower similarity between SCTs. The similarity based on the three numerical attribute values B, S, and C is defined as

$$Sim(J_{sc}, J_i) = \frac{1}{\sqrt{(B - B_i)^2 + (S - S_i)^2 + (C - C_i)^2}}.$$

(5) After obtaining the similar set of Jsc, the actual Et of the SCTs in it can be used to predict the Et of Jsc. The average method is adopted in this paper; that is, the mean of the actual Et values in the similar set is used as the predicted time of Jsc:

$$Et(J_{sc}) = \frac{1}{M} \sum_{i=1}^{M} R_i,$$

where Ri is the actual Et of the ith SC in the similar set.
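The prediction procedure can be sketched as follows, assuming min-max normalization over the candidate set and the reciprocal Euclidean distance as the similarity measure described above; the class name, the candidate selection, and the value of M are illustrative only.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative Et predictor: normalize {B, S, C}, rank candidates by reciprocal
// Euclidean distance to the target SCT, and average the actual Et of the top M.
public class EtPredictor {

    // One SCT described by its consumption template {B, S, C} and (if completed) its actual Et.
    record Sct(double bandwidth, double storage, double calculation, double actualEt) {}

    static double predictEt(Sct target, List<Sct> candidates, int m) {
        List<Sct> all = new ArrayList<>(candidates);
        all.add(target);

        // Min-max normalization of each attribute over the whole set, mapping to [0, 1].
        double[] b = normalize(all.stream().mapToDouble(Sct::bandwidth).toArray());
        double[] s = normalize(all.stream().mapToDouble(Sct::storage).toArray());
        double[] c = normalize(all.stream().mapToDouble(Sct::calculation).toArray());
        int t = all.size() - 1; // index of the target SCT

        // Similarity = reciprocal of the Euclidean distance in normalized {B, S, C} space.
        record Scored(Sct sct, double sim) {}
        List<Scored> scored = new ArrayList<>();
        for (int i = 0; i < candidates.size(); i++) {
            double dist = Math.sqrt(Math.pow(b[i] - b[t], 2)
                    + Math.pow(s[i] - s[t], 2)
                    + Math.pow(c[i] - c[t], 2));
            scored.add(new Scored(all.get(i), 1.0 / (dist + 1e-9)));
        }

        // Keep the M most similar SCTs and use the average of their actual Et as the prediction.
        return scored.stream()
                .sorted(Comparator.comparingDouble(Scored::sim).reversed())
                .limit(m)
                .mapToDouble(x -> x.sct().actualEt())
                .average()
                .orElse(0.0);
    }

    static double[] normalize(double[] xs) {
        double min = Double.MAX_VALUE, max = -Double.MAX_VALUE;
        for (double x : xs) { min = Math.min(min, x); max = Math.max(max, x); }
        double range = Math.max(max - min, 1e-9);
        double[] out = new double[xs.length];
        for (int i = 0; i < xs.length; i++) out[i] = (xs[i] - min) / range;
        return out;
    }

    public static void main(String[] args) {
        List<Sct> history = List.of(
                new Sct(10, 2.0, 500, 120),
                new Sct(12, 2.5, 520, 130),
                new Sct(40, 8.0, 2000, 400));
        // Prediction for a new SCT whose template resembles the first two completed SCTs.
        System.out.println(predictEt(new Sct(11, 2.2, 510, 0), history, 2));
    }
}
```

Calling predictEt as in main returns the mean actual Et of the two completed SCTs whose templates are closest to the target.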

3.2.2. The Value of Conflict Rate (Cr)

The SC conflict rate refers to the probability that an SC conflicts with any other SC when it is executed, and ideally it would be judged from the current conflict situation. Because SCs cannot be statically analyzed [24], it is impossible to know before execution whether a conflict will occur, so the probability of an SC conflict cannot be judged from the system status at a single moment. Considering that a high incidence of conflicts is mainly caused by a few popular SCs over a certain period of time, we assume that "the short-term conflict rate around the transaction's execution period has a greater impact on the predicted value." The Cr of an SCT is therefore predicted from the conflict rate of the recent past. Moreover, because the contract conflict rate has nonlinear characteristics, a linear regression equation is not suitable for calculating it. This paper therefore uses a weighted moving average method [25] with a feedback value to predict the Cr of an SCT. The specific steps are as follows (a code sketch follows the list):
(1) The weighted moving average method is used to calculate the original conflict rate. This method has simple logic and high prediction accuracy: it takes time as the standard and gives larger weights to data closer to the prediction time, which compensates for the moving average method's equal treatment of all data and reacts sensitively to recent trends. The original conflict rate is calculated as

$$\hat{U}_n = \frac{\sum_{i=1}^{m} \omega_i \, U_{n-i}}{\sum_{i=1}^{m} \omega_i},$$

where \(\hat{U}_n\) is the nth prediction result, \(U_{n-i}\) is the conflict rate detected in the (n−i)th time period, i indexes the reference values, and \(\omega_i\) is the weight of the ith reference value.
(2) The weighted moving average method is accurate for short-term prediction, but its weights must be set in advance and do not change once determined, so the accuracy of the original Cr receives no effective feedback, which limits the accuracy of conflict rate prediction. To further improve accuracy, a feedback value is calculated so that each prediction result is fed back into the next calculation:

$$F_n = \omega \cdot \frac{C^{*}_{n-m}}{U_{n-m}},$$

where \(F_n\) is the feedback value at time n, \(C^{*}_{n-m}\) is the final prediction made for time n − m, \(U_{n-m}\) is the conflict rate actually observed in that period, and \(\omega\) is a weight value; \(F_n > 1\) means the prediction was too large and \(F_n < 1\) means it was too small.
(3) The calculations of the initial prediction and of the feedback value \(F_n\) both use weight values, and selecting appropriate weights strongly influences the final prediction. Because the prediction of Cr has obvious time characteristics, the closer an observation is to the prediction point, the greater its effect on the result. An attenuation factor k is therefore used to determine the weights: with an initial weight of 1, the weights decay as

$$\omega_i = k^{\,i-1}, \quad \omega_1 = 1,$$

and the weight used in the feedback calculation is determined in the same way, also starting from 1.
(4) The feedback value is built on the weighted moving average, and the final predicted Cr equals the ratio of the weighted moving average value to the feedback value:

$$Cr = \frac{\hat{U}_n}{F_n}.$$
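Under the reading given above (geometrically decaying weights, a feedback value formed as the ratio of an earlier prediction to the conflict rate actually observed for that period, and a final prediction obtained by dividing the moving average by the feedback value), the calculation might be sketched as follows; the exact functional forms are assumptions of this sketch rather than the paper's verbatim equations.

```java
import java.util.List;

// Illustrative Cr predictor: weighted moving average with geometrically decaying
// weights plus a feedback correction. The concrete forms used here are assumptions.
public class CrPredictor {

    // Weighted moving average over the most recent observations.
    // observed.get(0) is the most recent conflict rate; weights are k^0, k^1, ...
    static double weightedMovingAverage(List<Double> observed, double k) {
        double weighted = 0.0, weightSum = 0.0, w = 1.0; // initial weight is 1
        for (double u : observed) {
            weighted += w * u;
            weightSum += w;
            w *= k; // attenuation: older observations get smaller weights
        }
        return weighted / weightSum;
    }

    // Feedback value: ratio of the prediction made for an earlier period to the
    // conflict rate actually observed in that period (>1 means we over-predicted).
    static double feedback(double earlierPrediction, double earlierObservation) {
        return earlierPrediction / Math.max(earlierObservation, 1e-9);
    }

    // Final prediction = weighted moving average / feedback value.
    static double predictCr(List<Double> observed, double k,
                            double earlierPrediction, double earlierObservation) {
        return weightedMovingAverage(observed, k)
                / feedback(earlierPrediction, earlierObservation);
    }

    public static void main(String[] args) {
        List<Double> recent = List.of(0.30, 0.25, 0.20, 0.15); // most recent first
        System.out.println(predictCr(recent, 0.8, 0.28, 0.30));
    }
}
```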

After calculating the estimated execution time Et and conflict rate Cr of an SCT, the two must be considered together and compared against a threshold. An aggregation function combines them into a single score P, in which α is a parameter and ω1 and ω2 are the weights of Et and Cr, respectively.

First, by traversing the FIS-Table, SCTs without "feature information" are recorded in Set_δ. Second, a threshold β must be specified for the decision on P: if P ≥ β, the SCT is recorded in Set_λ; otherwise, it is recorded in Set_μ. Finally, the subsequent SCTs are divided into three groups: Set_δ, Set_λ, and Set_μ. The SCTs in Set_δ have the lowest conflict probability and the shortest execution time, so this set is given the highest priority when SCTs are executed, followed by Set_λ; Set_μ has the highest conflict probability and the longest execution time, so it is given the lowest priority. The SCTs within each of Set_δ, Set_λ, and Set_μ are executed concurrently by the SCC-VS algorithm of the TSM-Module, while the three sets themselves are processed serially, in priority order (a sketch of this grouping is given below).
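A brief sketch of the grouping follows. Since the exact aggregation function is not reproduced here, the score P is computed in an assumed form in which lower predicted Et and Cr yield a larger P; α, the weights, and β are placeholder values.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative grouping of SCTs into Set_δ, Set_λ, and Set_μ. The form of the
// aggregation score P below is an assumption: it only needs to rank SCTs so that
// shorter predicted Et and lower predicted Cr give a larger P.
public class SctGrouper {

    record Sct(String id, boolean hasFeatureInformation, double predictedEt, double predictedCr) {}
    record Groups(List<Sct> setDelta, List<Sct> setLambda, List<Sct> setMu) {}

    static Groups group(List<Sct> scts, double alpha, double w1, double w2, double beta) {
        List<Sct> delta = new ArrayList<>(), lambda = new ArrayList<>(), mu = new ArrayList<>();
        for (Sct t : scts) {
            if (!t.hasFeatureInformation()) {
                delta.add(t);                      // no recorded conflicts: highest priority
                continue;
            }
            // Placeholder aggregation: higher P for cheaper, less conflict-prone SCTs.
            double p = alpha / (w1 * t.predictedEt() + w2 * t.predictedCr() + 1e-9);
            if (p >= beta) lambda.add(t);          // moderate cost/conflict: middle priority
            else mu.add(t);                        // expensive and conflict-prone: lowest priority
        }
        return new Groups(delta, lambda, mu);
    }

    public static void main(String[] args) {
        List<Sct> batch = List.of(
                new Sct("tx1", false, 80, 0.05),
                new Sct("tx2", true, 120, 0.20),
                new Sct("tx3", true, 400, 0.60));
        Groups g = group(batch, 1.0, 0.005, 1.0, 1.2);
        System.out.println(g.setDelta().size() + " / " + g.setLambda().size() + " / " + g.setMu().size());
    }
}
```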

4. TSM-Module

To achieve concurrent execution of the optimized SCTs, this work sets up the Transaction Scheduling Management Module (TSM-Module), which uses an improved SCC algorithm to execute SCs concurrently. The Speculative Concurrency Control algorithm is based on the optimistic method and relies on redundant computation to find a serializable schedule; it can reduce the blocking and restarting of SCTs while preprocessing conflicts. According to SCC-NS [26], the number of shadows generated is positively related to the degree of concurrency, but at the cost of a high amount of computation. Therefore, after balancing the two factors of concurrency and computational cost, a Variable Shadow Speculative Concurrency Control (SCC-VS) algorithm is proposed. SCC-VS dynamically calculates the number of shadows N(Tsc) required by a contract transaction Tsc from three aspects: its execution time Et(Tsc), its conflict rate Cr(Tsc), and the amount of idle resources R(Tsc) available for its execution. The calculation further involves a constant coefficient ϕ, the constant e, and Ro, the average amount of idle resources in the system.

Through analysis of the concurrency problem of SCs and of existing work, this paper adopts a two-stage concurrent execution framework for smart contracts. As shown in Figure 3, the framework accounts for the execution efficiency of the main node (the miner node in PoW, the leader node in BFT) while also guaranteeing the replay efficiency of the validation nodes. Usually, an SCT is executed twice in its full life cycle: the first time when the main node creates a block and the second time when the validation nodes verify the block. Specifically, the client first broadcasts the SCT to each node. In the first execution stage (the main execution stage), the main node collects a batch of SCTs, uses the concurrency control algorithm to execute them concurrently, then packages the SCTs together with the conflict record produced during execution into a block and broadcasts it to the validation nodes. After receiving the consensus block, a validation node enters the second execution stage (the verification stage), replays the same batch of SCTs using the conflict record transmitted by the main node, and deterministically computes the new state transfer, generating the same serializable schedule as the main node to verify the validity of the block.
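The two-stage flow can be summarized with a minimal block structure such as the following; the field names are illustrative, and the conflict record is modeled as an ordered list of dependency pairs that the validator replays.

```java
import java.util.List;
import java.util.Map;

// Illustrative shape of a block produced in the main execution stage and replayed
// in the verification stage. Field and type names are placeholders, not the paper's API.
public class TwoStageBlock {

    // A recorded conflict: transaction `after` must observe the effects of `before` on `dataItem`.
    record ConflictEdge(String before, String after, String dataItem) {}

    record Block(List<String> sctIds,               // the batch of SCTs packed by the main node
                 List<ConflictEdge> conflictRecord, // conflicts observed during concurrent execution
                 Map<String, String> finalState,    // resulting state (account -> value digest)
                 String previousBlockHash) {}

    // Verification stage (sketch): replay the same batch, ordering conflicting SCTs
    // according to the conflict record, then compare the recomputed state with the block's.
    static boolean verify(Block block, Map<String, String> recomputedState) {
        return block.finalState().equals(recomputedState);
    }

    public static void main(String[] args) {
        Block b = new Block(List.of("tx1", "tx2"),
                List.of(new ConflictEdge("tx1", "tx2", "balance[addr1]")),
                Map.of("addr1", "0xabc"), "0xprevhash");
        System.out.println(verify(b, Map.of("addr1", "0xabc"))); // true if states match
    }
}
```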

The SC concurrent execution strategy based on concurrency degree optimization proposed in this paper consists of two parts. The first part is based on CDO-Module to optimize the concurrency degree of SCTs. The second part is based on the SCC-VS algorithm to execute SCTs concurrently. The complete operating mechanism is shown in Figure 4.

First, in the blockchain sharding environment, each node obtains a set of SCTs from the P2P network, and each transaction is associated with an SC function. Each SC function consists of multiple steps, such as lookups, inserts, and deletes on shared data items. The concurrency of the SCTs is optimized with the help of the CDO-Module. In the first stage (the main stage), the miner node loads the optimized SCTs into an isolated sandbox environment, uses SCC-VS (see Algorithm 1) to identify conflicts, and records the conflicting relationships and the specific conflicting data items in real time, forming the conflict record. The concurrently executing miner then proposes a block composed of the contract transaction set, the conflict records, the final state, the hash value of the previous block, and other information, which is verified by the other validators in the P2P network. SCC-VS consists of five rules; its process is shown in Algorithm 1.

(A) Start Rule: When the execution of a new transaction is requested, create and execute an optimistic shadow;
(1)  Compute the number of shadows N( );
(2)  Pessimistic Shadows ( ) ← 0;
(3)  ReadSet ( ) ← φ;
(4)  WriteSet ( ) ← φ;
(B) Read Rule: Whenever a transaction wishes to read object X and a conflict may be detected, then
(1)  ReadSet ( ) ← {X};
(2)  if (Pessimistic Shadows ( ) < N( ) − 1) then {
(2.1)   Fork a new pessimistic shadow;
(2.2)    WaitFor ( ) ← {( ), X};
(2.3)     Pessimistic Shadows ( ) ← Pessimistic Shadows ( ) + 1};
(2.4)   else if (Pessimistic Shadows ( ) ≥ N( ) − 1) then {Abort ( )};
(C) Write Rule: Whenever a transaction wishes to write object X and a conflict may be detected, then
(1)  WriteSet ( ) ← {X};
(2)  if (Pessimistic Shadows ( ) < N( ) − 1) then {
(2.1)   Fork a new pessimistic shadow;
(2.1.1)    WaitFor ( ) ← {( ), X};
(2.1.2)     Pessimistic Shadows ( ) ← Pessimistic Shadows ( ) + 1};
(2.2)   else if (a conflict exists) then {
(2.2.1)     Abort the shadow and replace it with a new shadow;
(2.2.2)     WaitFor ( ) ← {( ), X};
(3)  else if (Pessimistic Shadows ( ) = N( ) − 1) then {Abort};
(D) Blocking Rule: Block a pessimistic shadow at the earliest point at which it wishes to read object X
(E) Commit Rule: Whenever it is decided to commit an optimistic shadow on behalf of a transaction, then
(1)  Abort the other pessimistic shadows;
(2)  Deal with everything that conflicts with the committed shadow;
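The rules above can be approximated by the following heavily simplified, single-threaded sketch of the shadow bookkeeping only (no actual speculative re-execution is modeled), with the shadow cap N(Tsc) passed in rather than computed from the factors discussed earlier.

```java
import java.util.HashSet;
import java.util.Set;

// Simplified bookkeeping for SCC-VS shadows: one optimistic shadow per transaction,
// plus up to N-1 pessimistic shadows forked when read/write conflicts are detected.
public class SccVsTransaction {
    final String id;
    final int maxShadows;                 // N(Tsc), assumed to be computed elsewhere
    final Set<String> readSet = new HashSet<>();
    final Set<String> writeSet = new HashSet<>();
    int pessimisticShadows = 0;           // Start Rule: begins at 0
    boolean aborted = false;

    SccVsTransaction(String id, int maxShadows) { this.id = id; this.maxShadows = maxShadows; }

    // Read Rule: record the object; on conflict, fork a pessimistic shadow if the cap allows.
    void onRead(String object, boolean conflictDetected) {
        readSet.add(object);
        if (conflictDetected) forkOrAbort();
    }

    // Write Rule: record the object; on conflict, fork a pessimistic shadow if the cap allows.
    void onWrite(String object, boolean conflictDetected) {
        writeSet.add(object);
        if (conflictDetected) forkOrAbort();
    }

    private void forkOrAbort() {
        if (pessimisticShadows < maxShadows - 1) {
            pessimisticShadows++;         // fork a pessimistic shadow (blocked until commit order is known)
        } else {
            aborted = true;               // shadow budget exhausted: abort the optimistic shadow
        }
    }

    // Commit Rule: commit the optimistic shadow and discard the pessimistic ones.
    void commitOptimistic() { pessimisticShadows = 0; }

    public static void main(String[] args) {
        SccVsTransaction t = new SccVsTransaction("tx1", 3);
        t.onRead("X", true);   // conflict on X: fork first pessimistic shadow
        t.onWrite("Y", true);  // conflict on Y: fork second pessimistic shadow
        t.onWrite("Z", true);  // cap reached: abort
        System.out.println(t.pessimisticShadows + " shadows, aborted=" + t.aborted);
    }
}
```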

In the second stage, the validation stage, the validator verifies the block proposed by the concurrent miner. The concurrent validator analyzes the conflict record in the block to identify the conflict relationships between SCTs. Because all conflicting relationships between SCTs have already been identified by the miner, the validator can execute the set of SCTs in a concurrent, deterministic manner with the help of the conflict records provided by the miner. After successfully executing the SCTs, the validator compares the calculated final state with that given by the concurrent miner. If the final states match, the block proposed by the concurrent miner is proved valid. At this point, the new conflict records must be fed back to the CDO-Module to update the FIS-Table it maintains. Finally, the block is added to the blockchain and the miner is rewarded accordingly.

TSM-Module uses SCC-VS, which will greatly reduce transaction blocking and restart problems. At the same time, it guarantees that the SCTs achieve higher concurrency at a lower computational cost when executed.

Moreover, we also conducted a detailed analysis on the security of the blockchain sharding model used in this paper, aiming at several common attack modes in the blockchain network: Distributed Denial-of-Service Attack, 51% Attack, Empty Block Attack, and Sybil Attack.

The first is the Distributed Denial-of-Service (DDoS) attack, a distributed, coordinated, large-scale form of the denial-of-service (DoS) attack. By flooding the network with a large number of useless requests, the attacker tries to overload the system so that users in the network cannot access network resources normally, paralyzing the system. DDoS attacks are usually launched to take control of an internal platform or to demand a ransom from the injured party. In the sharding model we use, a DDoS attack would have to cover all nodes in the blockchain network in order to crash the system. As more nodes join the blockchain network, the attack cost quickly becomes prohibitive for the attacker.

The second is the Sybil attack, in which the attacker uses a single node to forge multiple virtual identities in the P2P network in order to reduce the robustness of the network and interfere with its normal activities. In a blockchain sharding environment, an attacker would likewise need to create multiple accounts to carry out a Sybil attack. However, the sharding design scheme used in this paper restricts the validation nodes to a certain extent: validation nodes must pledge a certain number of tokens before entering a shard to verify transactions, which makes it very difficult for an attacker to create a large number of identities in a short time.

Then there is the 51% attack, the most famous type of attack on blockchains. For example, in the Bitcoin network, once an attacker controls more than 51% of the computing power of the whole network, the attacker can tamper with historical data and indirectly seize the right to keep accounts. In a blockchain sharding environment, this attack can be understood as more than 51% of the validation nodes in a shard colluding, that is, a conspiracy attack. However, two conditions must be satisfied for a conspiracy attack to occur within a shard:
(1) The number of malicious nodes in the shard must be greater than 2/3 of the total number of nodes in the shard
(2) The malicious nodes must collude to act maliciously together

If more than 51% but not more than 2/3 of the validation nodes collude, consensus cannot be reached, i.e., a consensus timeout occurs. The sharding design scheme used in this paper limits the number of consensus timeouts, which effectively reduces the probability of this type of attack. When consensus timeouts occur several times in a row, the transaction is abandoned and reassigned, so a 51% attack cannot be carried out.

Finally, there is the empty block attack, in which miners fill in the block header without verifying any transactions so as to solve the consensus puzzle as soon as possible, publish the block faster, and obtain the block reward during competitive mining. Although an empty block attack does not affect the validity of the blockchain, the frequent occurrence of empty blocks leads to a continuous accumulation of transaction requests: the transaction memory pool keeps growing and the average transaction confirmation time becomes longer.

This situation is similar to the 51% attack: we only need to appropriately expand the sharding scale and optimize the performance within a single shard to mitigate the problem. Moreover, from an economic standpoint, miners have little incentive to launch an empty block attack in the first place.

From the above analysis, it can be seen that the sharding design scheme used in this paper has a certain resistance capability to several common attack modes, which can ensure the normal operation of the blockchain sharding system using the strategy proposed in this paper.

5. Experiments

In this section, the performance of the proposed SC concurrent execution strategy based on concurrency degree optimization is verified experimentally. Since the existing SC models of blockchains are single-threaded (such as Ethereum's EVM) [27], the SC concurrent execution strategy proposed in this paper is difficult to implement in a real blockchain system. This experiment therefore performs all evaluations on a single server and uses the Java language to simulate real smart contract execution [28, 29]. The load generator implemented in this section considers the number of SCTs and accounts when generating the load for each set of experiments. Transaction types are distributed uniformly at random, and the data access pattern follows a Zipfian distribution to simulate SCT conflict scenarios; the larger the Zipfian parameter, the higher the conflict rate between SCTs. The server configuration used in the experiments is shown in Table 2.
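As a rough illustration of such a load generator, the following sketch draws data-access keys from a Zipfian distribution by inverse-CDF sampling; the key space size, exponent, and class name are invented for the example.

```java
import java.util.Random;

// Illustrative Zipfian key sampler for simulating skewed data access between SCTs:
// the larger the exponent, the more accesses concentrate on a few "hot" keys,
// which raises the conflict rate between concurrently executed SCTs.
public class ZipfianAccessGenerator {
    private final double[] cdf;   // cumulative probabilities over the key space
    private final Random rng;

    ZipfianAccessGenerator(int keySpace, double exponent, long seed) {
        double[] weights = new double[keySpace];
        double sum = 0.0;
        for (int k = 1; k <= keySpace; k++) {
            weights[k - 1] = 1.0 / Math.pow(k, exponent);
            sum += weights[k - 1];
        }
        cdf = new double[keySpace];
        double acc = 0.0;
        for (int i = 0; i < keySpace; i++) {
            acc += weights[i] / sum;
            cdf[i] = acc;
        }
        rng = new Random(seed);
    }

    // Inverse-CDF sampling: returns a key index in [0, keySpace).
    int nextKey() {
        double u = rng.nextDouble();
        for (int i = 0; i < cdf.length; i++) {
            if (u <= cdf[i]) return i;
        }
        return cdf.length - 1;
    }

    public static void main(String[] args) {
        ZipfianAccessGenerator gen = new ZipfianAccessGenerator(1000, 0.9, 42);
        int[] hits = new int[10];
        for (int i = 0; i < 100_000; i++) {
            int key = gen.nextKey();
            if (key < 10) hits[key]++;   // count accesses to the ten hottest keys
        }
        System.out.println(java.util.Arrays.toString(hits));
    }
}
```

Increasing the exponent concentrates accesses on fewer keys, which in the simulation raises the probability that two concurrently executed SCTs touch the same data item.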

Because this paper focuses on the concurrent execution of SCs, PoW and other related factors are simplified in the process. The experiments focus on the following aspects of performance: (1) how the speedup of each method changes as the number of SCTs increases; (2) how the speedup of each method changes as the conflict rate increases; (3) how the throughput of each method changes as the number of nodes increases; (4) how the throughput of an individual shard and of the whole system changes as the number of shards increases; (5) how the storage overhead of the nodes changes as the number of shards increases; (6) how the conflict records in the FIS-Table change as the number of SCTs increases; (7) the Cumulative Distribution Function (CDF) [30] of the estimated and actual execution times of SCs; and (8) the throughput results of the security experiment. All experimental results are averages over multiple executions to reduce error.

The experiment compares the smart contract concurrent execution strategy proposed in this paper with two traditional concurrency control algorithms and uses the results of serial execution as the baseline for measuring the average speedup of each method. The experimental results in Figure 5 show that when the transaction flow is low, the Lock algorithm brings little acceleration and can even cause a slowdown, because the additional overhead of conflict handling hurts concurrent performance. As the transaction flow on the platform keeps increasing, the Lock algorithm, after a period of acceleration, starts to slow down in a way similar to the BTO algorithm. By optimizing concurrency and reducing transaction blocking and restarting, the proposed strategy mitigates the performance degradation caused by increased transaction flow.

The experimental results in Figure 6 show that as the conflict rate increases, the speedup of all three methods trends downward. When the conflict rate approaches 68%, executing SCs with the BTO algorithm becomes slower than serial execution. By contrast, the Lock algorithm, as a pessimistic method, adapts better to situations with a higher conflict rate. For the proposed strategy, the increased conflict rate also increases the probability and complexity of cross-shard interaction [31], so its speedup declines as well, but it still performs slightly better than the other two methods overall.

According to the experimental results shown in Figure 7, the strategy proposed in this paper is compatible with sharding proposals and maintains the characteristic of linear scalability; that is, as the number of nodes and the network volume increase, processing performance can be improved by parallelizing the data flow. Under the traditional method, even when SCs are executed concurrently, the transaction speed still decreases as more nodes are added.

With the cooperation of the CDO-Module and the TSM-Module, the strategy proposed in this paper improves the execution efficiency of SCTs and the throughput of the whole system by optimizing the performance of each shard. We compare it with a traditional sharding blockchain, taking the Elastico public blockchain as an example. Elastico first proposed adopting the sharding model in a public blockchain system [32] and achieves a nearly linear expansion of block throughput. The experimental results in Figure 8 show that, under the traditional method, although the network-wide TPS increases as the number of shards increases, the TPS of a single shard remains low, only approximately 50, with no significant performance improvement. The strategy proposed in this paper guarantees the performance improvement of a single shard while the throughput of the whole system also reaches a high level.

For the overhead of sharded storage, we calculate the amount of data stored in each shard. Because cross-shard SCTs are stored by multiple shards, we issued 5%, 10%, 15%, and 20% cross-shard SCTs in this experiment. The experimental results in Figure 9 show that the storage overhead of each node decreases as the total number of shards increases. For the same number of shards, the more cross-shard SCTs there are, the greater the storage overhead. Furthermore, we note that storage optimization mechanisms could further reduce this overhead.

We use the SCC-VS algorithm to implement concurrent execution of SCs in the TSM-Module. At the same time, the miner proposes a new block consisting of the set of SCTs, the conflict record, the final state, the hash value of the previous block, and other information. The TSM-Module feeds the feature information in the conflict record back to the FIA-Unit so that subsequent SCTs can be preprocessed. Figure 10 shows the relationship between the number of SCTs and the conflict records of four types of SCs (a, b, c, and d) under different concurrency control algorithms. The results show that, regardless of the method adopted, the conflict records in the FIS-Table increase as the number of SCTs increases. However, the SCC-VS algorithm proposed in this paper performs better because it is applied within a single shard, with many SCTs diverted into different shards and the degree of concurrency between SCTs optimized.

Currently, because the number of SCTs in a block is small, storing the conflict records between SCTs in the block does not consume much space. Over time, as conflict records grow, more storage space will be consumed. It is therefore important either to keep the conflict record as compact as possible or to implement concurrent execution of SCs that does not depend on a conflict record.

In the CAM-Unit, the Et of each SC must be predicted. To verify the effectiveness of the method, Figure 11 gives the Cumulative Distribution Function (CDF) of the estimated and actual Et of four different SCs, describing the probability that the estimated and actual Et of each SC type fall within any given time interval. Across the four SCs, the actual runtime corresponding to the same probability density is slightly smaller than the Et we calculated. This is caused, on the one hand, by decreasing execution times and, on the other hand, by our general overestimation of the runtime of the four SCs in this experiment. It can also be seen in Figure 11 that the actual Et distributions of the four SCs are smooth, while the estimated Et distributions are ladder-shaped, which shows that the estimated Et calculated by this method is relatively coarse. Therefore, the prediction algorithm in this paper more easily achieves a good prediction effect (e.g., for type b and d contracts) on SCTs whose actual Et distribution is close to ladder-shaped.

To evaluate the resistance of a blockchain sharding system using the proposed concurrent execution strategy to malicious nodes, we set up 60 nodes, of which 15 are malicious and the other 45 are honest, as shown in Figure 12. An honest node confirms all transactions it receives, while a malicious node stops verifying transactions and is rejected as the master node each time it is elected. In this way, we test the security of the scheme and verify its resistance to malicious nodes over the long run.

Figure 13 shows the throughput results of the security experiment. The average transaction throughput of the blockchain system using traditional random sharding is about 412.3 TPS, while that of the blockchain sharding system using the strategy proposed in this paper is about 525.1 TPS, higher than the traditional scheme. The throughput dips occur because a malicious node stops validating transactions and is rejected as the master node each time it is elected; the other nodes in the shard then broadcast an emergency message, start the rollback procedure, and elect a new master node. During this process the shard stops working and transactions cannot be verified until the new master node is elected, resulting in a rapid, temporary drop in throughput. Overall, however, the average throughput of the strategy proposed in this paper remains better than that of the traditional scheme.

6. Conclusions

In this paper, we propose a smart contract concurrent execution strategy based on concurrency degree optimization. First, the CDO-Module collects the feature information of conflicting SCs and optimizes the concurrency degree of subsequent SCTs. Second, the TSM-Module uses the proposed SCC-VS algorithm to execute the optimized SCTs. The experimental results show that the strategy allows SCs within a single shard to execute with a higher degree of concurrency and further improves the performance within each shard, so that the whole blockchain sharding system can achieve higher transaction throughput.

Data Availability

The data used to support the findings of this study are included in this article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported in part by the National Key R&D Program of China (2019YFB1406002), in part by the National Natural Science Foundation of China (61903356), and in part by Key Scientific Research Projects of Liaoning Provincial Department of Education (LZD202002).