Abstract

Mobile edge computing (MEC) nodes are deployed close to users to address excessive latency and converging traffic flows. Nevertheless, the distributed deployment of MEC nodes and the offloading of computational tasks among several nodes consume additional energy. Accordingly, reducing the energy consumption of edge computing networks while satisfying latency and quality of service (QoS) demands has become an important challenge that hinders the application of MEC. This paper builds a local-edge-cloud edge computing network and proposes a multinode collaborative computing offloading algorithm. The algorithm can be applied to smart homes, supports the development of green networks, and enables local users of the Internet of Things (IoT) to decompose computational tasks and offload them to multiple MEC or cloud nodes. The simulation analysis reveals that the proposed local-edge-cloud computing offloading method not only reduces network energy consumption more effectively than traditional computing offloading methods but also allows more data to be processed under the same latency constraint.

1. Introduction

With the continuous development of Internet of Things (IoT) technology in recent years, IoT network equipment has developed perception and communication abilities, and the user end of the network can extend to information exchange and communication between everyday objects [1]. IoT technology has also been used in various aspects of industrial production and daily life: it has been applied to transportation and network performance optimization [2] and has been used in smart homes, smart industries, and smart cities, among others. Previous studies have largely focused on the smart home application scenario. The local user ends of IoT in smart homes can take the form of any everyday object. Therefore, IoT contains diversified user data, and intelligent appliances require rapid and effective processing of task data [3]. In this case, a fast, efficient, and safe task processing mode needs to be devised to meet the demands of users with large data sizes or high sensitivity to latency. Given that the traditional single-cloud model cannot meet such demands, the concept of mobile edge computing (MEC) has been proposed based on cloud computing [4]. MEC is a new computing model in which MEC nodes are widely distributed in the vicinity of the client to provide intelligent services for local users. Edge nodes can be installed on edge devices (e.g., vehicles and UAVs) to meet the connectivity demands of different users [5]. Combined with MEC, multinode cooperation on data tasks is realized by transmitting data between the local users of IoT and MEC nodes, wherein the local user data of IoT are offloaded to nearby MEC servers, thereby addressing the limited computing capability of these users and reducing their computing task pressure. However, MEC nodes have a limited computing capacity, thereby requiring cooperation among multiple MEC nodes to handle computing tasks with large data sizes.

To solve the network energy consumption problem under a large data size at the user ends of IoT, this study initially analyzes and selects MEC nodes in a local-edge-cloud edge computing network model while considering the distances between the MEC nodes and user ends, the channel characteristics, and the CPU energy consumption.

The main contributions can be summarized as follows:
(1) The local-edge-cloud edge computing network model proposed in this paper supports the local user ends of IoT in offloading a computing task in parallel to multiple MEC nodes or the cloud. This study takes both network computation and transmission into account across the three layers of local, edge, and cloud.
(2) Latency cannot be directly accumulated because data are transmitted in parallel. Instead, the time for receiving and processing data at the different nodes is analyzed to determine the network latency. An integer linear programming problem that targets the optimization of network energy consumption is formulated, and single-user task offloading is analyzed by using the branch-and-bound (BB) algorithm to minimize the overall network energy consumption.
(3) The simulation results show that the demand for MEC nodes increases with the size of the offloaded data at the local user ends of IoT. Moreover, the multinode collaborative model is significantly superior to the traditional computing offloading algorithm in terms of energy consumption and latency, especially for large offloaded data sizes.

The rest of this study is organized as follows. In Section 2, related work is introduced. Section 3 introduces the proposed model. Section 4 discusses in detail the construction of an objective function for the multinode computing offload model and the BB algorithm used in the optimization. Section 5 analyzes the simulation results. Section 6 concludes the paper.

2. Related Work

The local user ends of IoT can offload computing tasks to MEC nodes via global or partial offloading. In global offloading, the entire computing task is offloaded to an MEC node. Liu et al. [6] used a 1D searching algorithm to reduce the implementation latency to the maximum extent and gave comprehensive consideration to the queuing state in the application buffer zone and the available processing capacity. However, edge nodes have inadequate computing capacities and experience long transmission latency. To address this problem, partial offloading implements part of the computing task locally and offloads the remaining parts to the MEC for implementation. Further details on partial offloading can be found in [7]. In a partial offloading scheme, the distribution positions of the data tasks need to be determined; after the user's task is partitioned, the parts are successively transferred to the nodes for execution. In [8], Yang et al. proposed the concept of task zoning, which determines the offload modules and implementation methods, that is, whether the tasks are implemented locally or offloaded to MEC and cloud nodes. Meanwhile, Zhao et al. [9] transformed the partial offloading problem into a nonlinear constraint problem and adopted a linear programming approach to solve it and realize optimal processing. Given their diversity, network data of different sizes are generated. Accordingly, resource limits have become a key problem in the offload process and have been discussed in [9–11]. For instance, Zhao [10] analyzed resource limits from the perspectives of network capacity and data allocation, chose an appropriate position for data processing, and guaranteed the smooth implementation of additional data tasks. In [11], a data task was segmented by employing a partial offloading method and transmitted successively to MEC and cloud nodes for implementation, thereby overcoming resource limits.
To address the limitations in node quantity and processing ability, You and Huang [12] proposed an optimal resource allocation strategy for a time division multiple access system to process the queuing of tasks and ensure resource processing efficiency. To address the complex resource allocation problem of collaborative mobile edge computing networks, Ref. [13] proposed an intelligent resource allocation framework in which the allocation scheme is determined according to the edge computing server's computing capacity, channel quality, resource utilization, and latency constraints.

When users have a large number of computing tasks, a single MEC node cannot meet the demand of processing offloaded tasks from the user end even if the partial offloading method is applied. As a result, several nodes must be selected for the collaborative processing of offloaded tasks. Fan et al. [14] adopted a multinode collaboration method that allows nearby MEC nodes to share the computing pressure of the target node when the computing task at the user ends is too large for a single MEC node. They also designed an algorithm for solving the optimization problem by using an interior point method and a logarithmic barrier function to optimize the energy consumption of the multinode collaboration system. This multinode collaboration method is mainly used to address the inadequate computing capacity of single nodes. Based on a dynamic and self-configuring multiequipment mobile cloud system, Habak et al. [15] implemented relevant computing tasks and expanded the range of the cloud system by using surrounding idle mobile equipment as MEC servers, with the aim of solving the problem where the network load exceeds the computing capacity of nodes. In a multinode collaboration method, the computing task should be allocated to multiple nodes, but this involves the allocation and deployment of nodes. Reference [16] considers link selection in collaborative networks: based on the characteristics of the two branches in the system, buffer-aided relay combination technology is used to derive an accurate expression of the outage probability for the co-channel interference network and thus evaluate the transmission performance of the network. In [17], the authors selected the deployment positions of MEC nodes, such as LTE micro sites and gathering stations of multiwireless access technology communities. With the continuous popularization of MEC technology, multinode collaborative technology has been increasingly used in practice.

In the above studies, users offload the computing tasks completely or partially to one or several MEC nodes, optimize the network structure, increase the task processing capability of the network, and explore resource optimization in a multinode collaborative network structure. Nevertheless, MEC nodes are extensively distributed within the range of local user nodes, and several MEC nodes in a wireless network are selected to participate in computing. Involving more nodes in a network increases its overall energy consumption. Since the introduction of the green MEC philosophy, network energy consumption has become a key concern among researchers.

In the multinode task allocation model, the MEC nodes that implement the computing tasks are chosen reasonably to reduce network energy consumption. Zhang et al. [18] applied a single-user mobile edge computing offload (MECO) approach to the MEC network model, where network energy consumption is treated as the optimization goal, and the appropriate offload strategy is determined by jointly adjusting the number of CPU cycles and the network transmission rate. However, that study only considers the single-user MECO model. Meanwhile, the authors in [19] fully considered the energy consumption and latency of end users in the multiuser MECO distributed computing offload model and realized an optimal allocation of resources in the computing offload process by using game theory. Reference [20] constructs a pricing-based intelligent edge computing network: when a user offloads data, latency and price are taken as performance indicators, a stochastic game is used to determine the user signal processing scheme, and the offloading strategy is designed to reduce latency and price. In [21], to cope with energy shortage in a heterogeneous network, a shared link was established among multiple base stations (BSs) and was extended to the macro and micro domains for analysis. At the same time, owing to the complex distribution of base stations and users in a heterogeneous network, multilayer handover and power allocation need to be considered. Ref. [22] studies the handover and power allocation problem in a two-layer heterogeneous network composed of macro stations and millimeter-wave stations; a multiagent reinforcement learning algorithm based on proximal policy optimization is developed to realize the interaction among multiuser devices. Ng et al. [23] proposed an offload priority function by considering quantitative equality, the transmission channel, and local computing conditions.
By analyzing this offload priority function, the optimal network resource allocation was realized, and the overall network energy consumption was used as the measurement index.

In sum, many studies have examined multinode collaboration and data offloading. In these studies, users transmit data to multiple nodes in a step-by-step manner before execution. When the data size at the user ends is relatively large, the step-by-step transmission leads to significant latency, thereby violating the latency constraints of users and consuming a considerable amount of network energy. Against this background, the advantages of the model proposed in this paper become more prominent.

3. System Model

Figure 1 illustrates a local-edge-cloud edge computing network in which the local user ends of IoT are served by wireless eNodeBs. Each eNodeB is equipped with one MEC server (MEC node). The computing task from the local user ends of IoT can be implemented on site, partially offloaded to the MEC nodes, or partially transmitted to the cloud server through the routers at the eNodeBs. Before offloading tasks, the local user ends of IoT segment these tasks following certain rules, and the segments are assigned to the appropriate MEC nodes or cloud servers based on the latency, energy consumption, computing capacity of the MEC nodes, and other parameters. As noted in Ref. [6], the sequential transmission of segmented task blocks wastes a certain amount of latency. Based on the above, this paper makes improvements by transferring the segmented task blocks to the appropriate nodes to execute tasks synchronously, determining the optimal assignment location of the task at the user end, transmitting and processing the task at the same time, and processing more data under the same latency constraint. Without loss of generality, this study hypothesizes that the computing task of a local user at a given moment can be segmented into several task blocks. Task block 1 can be implemented at the local user end of IoT; task block 2 may be offloaded to and implemented at node $m_1$; task blocks 3, 4, and 5 at node $m_2$; and task blocks 6 and 7 at node $m_3$. Given that the computing capacity of the MEC nodes cannot meet the demands of the residual task blocks, these blocks are transmitted to the cloud for implementation. Parallel offloading of multiple task blocks is applied to reduce the network latency and the overall network energy consumption.

3.1. Network Energy Consumption

In studying the local-edge-cloud edge computing network model, the computing and transmission capacity of the network should be considered to minimize the network energy consumption because the data from the local user ends of IoT are offloaded simultaneously and implemented at multiple nodes. Therefore, "network energy consumption" in this paper includes the energy consumed in executing computing tasks at the local user ends, MEC nodes, and cloud nodes and the energy consumed in transmitting a computing task in parallel from the local user ends to the different nodes. The computing model of the local-edge-cloud edge computing network is defined as $(D_i, T_i)$, where $D_i$ is the task size of user $i$ ($i = 1, 2, \ldots, N$) and $T_i$ is the time spent by user $i$ in executing the task. The computing energy consumption of user $i$ can be expressed as

$E_i^{\mathrm{comp}} = D_i C e$,  (1)

where $C$ is the number of CPU cycles needed per bit of data to execute a computing task and $e$ is the energy consumed per CPU cycle.

When the computing task cannot be executed completely at the local user ends of IoT, it must be offloaded to appropriate nodes, which consumes a certain amount of transmission energy. Transmission energy consumption is related to both the transmission time and the transmission power of the task. The transmission energy can be formulated as

$E_i^{\mathrm{tran}} = t_i p_i$,  (2)

where $t_i$ is the transmission time of the computing task of user $i$ and $p_i$ is the transmission power between user $i$ and the offload nodes. The overall energy consumed by user $i$ to execute a task is the sum of the transmission and computing energy consumption:

$E_i = E_i^{\mathrm{comp}} + E_i^{\mathrm{tran}}$.  (3)
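As a quick sanity check on equations (1)–(3), the per-user energy model can be sketched in a few lines of Python; the function and parameter names here are illustrative placeholders rather than notation fixed by the paper.

```python
# Sketch of the per-user energy model: computing energy plus
# transmission energy. All names and values are illustrative.

def computing_energy(d_bits, cycles_per_bit, energy_per_cycle):
    """Eq. (1)-style computing energy for d_bits of task data."""
    return d_bits * cycles_per_bit * energy_per_cycle

def transmission_energy(tx_time_s, tx_power_w):
    """Eq. (2)-style transmission energy: time multiplied by power."""
    return tx_time_s * tx_power_w

def total_energy(d_bits, cycles_per_bit, energy_per_cycle,
                 tx_time_s, tx_power_w):
    """Eq. (3)-style total: computing plus transmission energy."""
    return (computing_energy(d_bits, cycles_per_bit, energy_per_cycle)
            + transmission_energy(tx_time_s, tx_power_w))
```

For example, a 1000-bit task at 500 cycles per bit and 1 nJ per cycle, transmitted for 10 ms at 0.1 W, costs 0.5 mJ of computing energy plus 1 mJ of transmission energy.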

3.2. Computing Capacity

The numbers of CPU cycles needed for user $i$ to process 1 bit of task data at the local user ends, MEC nodes, and cloud nodes are denoted by $C_l$, $C_m$, and $C_c$, respectively. Meanwhile, the energy consumed per CPU cycle when implementing the computing task of user $i$ at the local user ends, MEC nodes, and cloud nodes is denoted by $e_l$, $e_m$, and $e_c$, respectively. Under the multinode collaboration mode, the data are segmented at the local user ends of IoT, and the segmented data are transmitted to the MEC or cloud (MCC) nodes for computing. To easily observe the offload condition of the segmented tasks, one data unit (1 kbit) is defined, and the data at the local user ends of IoT are expressed in data units. The data of user $i$ are divided into $n_i$ data units. For all nodes, the allocation parameter $x_{i,j}$ is set, where $x_{i,0}$ denotes the number of data units computed locally by user $i$. The network has $M$ MEC nodes, where $j \in \{1, 2, \ldots, M\}$; $x_{i,j}$ and $x_{i,c}$ refer to the numbers of data units that local user $i$ offloads to MEC node $j$ and to the cloud nodes for task execution, respectively. With respect to the selection problem between the local user ends of IoT and the MEC nodes, the binary parameter $a_{k,j} = 1$ indicates that task block $k$ of local user $i$ is offloaded to and implemented at node $j$. In this model, the local user ends of IoT segment the computing task into several blocks and offload them to multiple MEC and cloud nodes. A data unit can only be offloaded to a single node ($\sum_{j} a_{k,j} = 1$), while one MEC node can receive several data units ($\sum_{k} a_{k,j}$ may exceed 1). When $j = 0$, the computing task is implemented at the local user ends of IoT, and when $j = M + 1$, the computing task is implemented at the cloud nodes.

Given that the data are segmented at the local user ends of IoT and transmitted to several nodes simultaneously, the data allocated to the different nodes should match the computing capacities of those nodes. The data of user $i$ are decomposed as

$D_i = D_i^{l} + D_i^{m} + D_i^{c}$.  (4)

Let $f_l$, $f_m$, and $f_c$ be the computing capacities of the local user ends, MEC nodes, and cloud nodes, respectively, that is, the number of CPU cycles that can be executed per second when implementing the computing task. In equation (4), $D_i^{l}$ denotes the size of the task implemented at the local user ends of IoT, $D_i^{m}$ refers to the size of the task implemented at the MEC nodes, and $D_i^{c}$ refers to the size of the task implemented at the cloud nodes.
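The allocation rule above — every data unit is placed exactly once and no node receives more units than its capacity allows — can be expressed as a small feasibility check. The unit-count representation below is an illustrative simplification, not the paper's exact formulation.

```python
def allocation_feasible(alloc_units, n_units, capacity_units):
    """Check that an allocation places all n_units exactly once and
    respects each node's capacity, both measured in data units."""
    return (sum(alloc_units) == n_units
            and all(a <= c for a, c in zip(alloc_units, capacity_units)))
```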

3.3. Computing Latency

Computing latency is determined by the number of nodes, the number of CPU cycles, and the node computing capacity. When the computing task is executed at the local user ends of IoT, the data computing latency of user $i$ can be expressed as

$t_i^{l} = \dfrac{D_i^{l} C_l}{f_l}$.  (5)

Given that the data are segmented at the local user ends of IoT and transmitted to several MEC nodes simultaneously for implementation, the computing latency is taken as the maximum computing latency across the nodes. The computing latency of MEC node $j$ can be formulated as

$t_{i,j}^{m} = \dfrac{D_{i,j} C_m}{f_m}$,  (6)

where $D_{i,j}$ is the size of the task offloaded to node $j$.

When the computing task cannot be fully implemented at the local user ends of IoT and the MEC nodes, the remainder is transmitted to the cloud servers. The computing latency at the cloud nodes can be formulated as

$t_i^{c} = \dfrac{D_i^{c} C_c}{f_c}$.  (7)
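Because the task blocks execute in parallel, the overall computing latency is the slowest node's latency rather than a sum, as a short sketch makes explicit (names and values are illustrative):

```python
def node_latency(bits, cycles_per_bit, cpu_hz):
    """Computing latency of one node: total CPU cycles over CPU speed."""
    return bits * cycles_per_bit / cpu_hz

def parallel_compute_latency(allocations):
    """allocations: iterable of (bits, cycles_per_bit, cpu_hz) tuples,
    one per node; parallel execution finishes with the slowest node."""
    return max(node_latency(b, c, f) for b, c, f in allocations)
```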

3.4. Transmission

The transmission links in the network comprise the wireless communication links between the MEC servers and the UE, the transmission VLAN among the MEC servers, and the transmission links between the MEC and cloud servers. In the network transmission process, the relationship between the network computing capacity and the transmission capacity should be considered. If the computing capacity is too high, the channel resources in the network cannot be fully allocated to the local user ends of IoT, thereby congesting the channels and increasing the network latency. Let $D_i$ (bit) be the data size that local user $i$ of IoT needs to process. Specifically, $D_i^{l}$ refers to the size of the computing task implemented at the local user ends of IoT, $D_i^{m}$ is the size of the computing task implemented at the MEC nodes, and $D_i^{c}$ is the size of the computing task implemented at the cloud nodes. When the computing task can be implemented at the local user ends of IoT and does not need to be transmitted, no transmission energy is consumed. Transmission energy is consumed only when the computing task is offloaded to the MEC and cloud nodes.

Let $t_0$ denote the transmission time for one data unit (1 kbit), where $t_0 = 1\,\mathrm{kbit}/r_i$ and $r_i$ refers to the data transmission rate from user $i$ to the chosen nodes. The total transmission time in the computing offload process is determined by the number of data units that the user offloads to node $j$. Suppose that the MEC servers receive data from the user end. These data are segmented at the local user ends of IoT, and the segments are transmitted simultaneously. However, each offloaded part of the computing task experiences its own transmission time, and given that each node has unique basic parameters, the size of the offloaded data also varies. The transmission time from the local user ends of IoT to the nodes is therefore taken as the transmission time from the local user ends to the node with the largest offloaded task. This node must satisfy

$\max_{j} \dfrac{D_{i,j}}{r_{i,j}} \le T_{\max}$,  (8)

where $T_{\max}$ represents the latency limit for meeting the QoS demands of users.
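The same slowest-link logic governs parallel uploads: transmission time is set by the node that receives the largest offloaded task relative to its link rate. A minimal sketch, with illustrative arguments:

```python
def transmission_latency(alloc_bits, rates_bps):
    """Parallel upload: transmission time is dominated by the slowest
    link among the nodes that actually receive data."""
    return max(b / r for b, r in zip(alloc_bits, rates_bps) if b > 0)
```

A QoS check then reduces to comparing this value against the user's latency limit.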

3.5. Transmission Power

In the transmission from local user $i$ to the chosen node $j$, the transmission rate can be expressed as

$r_{i,j} = B \log_2\!\left(1 + \dfrac{p_{i,j} h_{i,j}}{N_0}\right)$,  (9)

where $B$ is the channel bandwidth, $p_{i,j}$ is the transmission power between local user $i$ and node $j$, $h_{i,j}$ captures the channel characteristics between local user $i$ and node $j$, and $N_0$ is the noise power. The differences in the channel characteristics can be ascribed to the different distances of the nodes from local user $i$. The value of $h_{i,j}$ follows the large-scale attenuation characteristic and is related to the transmission distance. The path loss is expressed as $PL(d) = PL(d_0) + 10\alpha \log_{10}(d/d_0) + X_{\sigma}$, where $d$ is the transmission distance, $d_0$ is the reference distance, $\alpha$ is the path loss exponent, and $X_{\sigma}$ is a Gaussian random variable with zero mean and standard deviation $\sigma$. Meanwhile, $B_m$ and $B_c$ represent the bandwidths between the local users of IoT and the edge nodes and between the users and the cloud nodes, respectively. When $j \in \{1, \ldots, M\}$, $p_{i,j}$ and $h_{i,j}$ represent the transmission power and channel loss between user $i$ and MEC node $j$. When $j = M + 1$, these parameters represent the transmission power and channel loss between local user $i$ and the cloud nodes.

By inverting the transmission rate expression, the data transmission rate $r_{i,j}$ and the transmission power can each be expressed in terms of the other. The transmission power from local user $i$ to node $j$ can thus be expressed as

$p_{i,j} = \dfrac{N_0}{h_{i,j}}\left(2^{r_{i,j}/B} - 1\right)$.  (10)
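The rate and power relations above are the two directions of the Shannon capacity formula; the sketch below treats noise power as a single parameter and uses illustrative values.

```python
import math

def tx_rate(bandwidth_hz, power_w, channel_gain, noise_w):
    """Achievable rate of the user-to-node link (Shannon formula)."""
    return bandwidth_hz * math.log2(1 + power_w * channel_gain / noise_w)

def required_power(rate_bps, bandwidth_hz, channel_gain, noise_w):
    """Invert the rate formula: power needed to sustain rate_bps."""
    return (2 ** (rate_bps / bandwidth_hz) - 1) * noise_w / channel_gain
```

The two functions are inverses: the power returned by `required_power` reproduces the requested rate when fed back into `tx_rate`.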

In sum, considering both transmission and computing in the local-edge-cloud edge computing network, the network energy consumption can be computed as $E = E^{\mathrm{comp}} + E^{\mathrm{tran}}$. Computing energy consumption includes the computing energy consumed by the local user ends of IoT and by the collaboration between the MEC nodes and cloud servers. Meanwhile, transmission energy consumption includes the wireless transmission energy between the local user ends of IoT and the MEC nodes and that between the local user ends and the cloud servers. Network latency, which includes computing latency and transmission latency, is considered when computing the network energy consumption given that the energy consumption should be minimized under the premise of meeting the network latency requirements.

4. Multinode Collaborative Computing Offloading Algorithm

In the local-edge-cloud edge computing network model, one part of the computing task is implemented at the local user ends of IoT, whereas the other parts are offloaded to the appropriate nodes. The data of the user are segmented following certain rules and are offloaded simultaneously to several nodes. Given that MEC nodes are close to the local user ends of IoT, a short data transmission time is achieved. However, the offload positions should be chosen reasonably based on the user demand given the limited computing capacity of MEC nodes.

4.1. Establishment of an Objective Function

According to equation (3), the overall network energy consumption includes computing and transmission energy consumption. In the local-edge-cloud edge computing network model, certain tasks are distributed to all levels. In other words, network energy consumption includes the computing and transmission energy consumption of the local user ends of IoT, MEC nodes, and cloud nodes.

Implementing the computing task at the local user ends of IoT consumes only computing energy. The energy consumed can be formulated as

$E_i^{l} = D_i^{l} C_l e_l$.

When the computing task is offloaded to edge nodes, several MEC nodes surround the local user ends of IoT, and the appropriate MEC nodes should be selected. Let the selection parameter be $a_{i,j} \in \{0, 1\}$, which reflects whether MEC node $j$ is selected. The overall energy consumption of the MEC nodes includes both computing and transmission energy and can be expressed as

$E_i^{m} = \sum_{j=1}^{M} a_{i,j} \left( D_{i,j} C_m e_m + \dfrac{D_{i,j}}{r_{i,j}} p_{i,j} \right)$.

When the computing task is partially offloaded to the cloud servers, the overall energy consumption of the cloud nodes can be expressed as

$E_i^{c} = D_i^{c} C_c e_c + \dfrac{D_i^{c}}{r_{i,c}} p_{i,c}$.

The overall network energy consumption is then computed as the total energy consumed by the local user ends of IoT, MEC nodes, and cloud nodes:

$E = E_i^{l} + E_i^{m} + E_i^{c}$.

Given that the network model considers both the computing and the transmission of data from the local user ends of IoT, the network latency includes both computing and transmission latencies. The computing latencies of the local user ends of IoT, MEC nodes, and cloud nodes should all be considered in the local-edge-cloud edge computing network model. The computing latency of local user $i$ can be expressed as

$t_i^{l} = \dfrac{D_i^{l} C_l}{f_l}$.

Unlike in the mutual transmission computing offload model, the computing task is segmented at the local user ends of IoT and transmitted simultaneously to multiple nodes for processing. Therefore, the offload computing latency is taken as the maximum computing latency across the MEC and cloud nodes:

$t_i^{\mathrm{off}} = \max\left\{ \max_{j} \dfrac{D_{i,j} C_m}{f_m},\; \dfrac{D_i^{c} C_c}{f_c} \right\}$.

The overall computing latency of the network is then formulated as

$t^{\mathrm{comp}} = \max\left\{ t_i^{l},\; t_i^{\mathrm{off}} \right\}$.

Meanwhile, transmission latency mainly involves the wireless transmission links from the local user ends of IoT to the MEC nodes and the VLAN transmission network from the local user ends of IoT to the cloud nodes. Given that the data are segmented at the local user ends of IoT, the appropriate nodes should be selected for the simultaneous transmission of the segmented data. When parallel data transmission is applied, the overall transmission latency of the network can be expressed as

$t^{\mathrm{tran}} = \max\left\{ \max_{j} \dfrac{D_{i,j}}{r_{i,j}},\; \dfrac{D_i^{c}}{r_{i,c}} \right\}$.

The network latency is then computed as the sum of the computing latency and the transmission latency:

$T = t^{\mathrm{comp}} + t^{\mathrm{tran}}$.

The goal of this multinode collaborative computing offload model is to minimize the overall network energy consumption while meeting the time constraints. The optimization problem of the multinode collaborative computing offload network is

$\min E$  (21)

subject to

$T \le T_{\max}$,  (22)
$t^{\mathrm{tran}} \le T_{\max}$,  (23)
$\sum_{j} a_{k,j} = 1, \; \forall k$,  (24)
$\sum_{k} a_{k,j} \le n_j^{\max}, \; \forall j$,  (25)
$D_i^{l} C_l \le f_l T_{\max}$,  (26)
$D_{i,j} C_m \le f_m T_{\max}, \; \forall j$,  (27)
$D_i^{c} C_c \le f_c T_{\max}$,  (28)

where (22) and (23) are the limiting conditions on transmission time (with (22) indicating that the network latency is smaller than the latency limit of the user ends), (24) and (25) denote the data allocation after segmentation at the local user ends (with (24) indicating that one data unit can be offloaded to only one MEC node and (25) indicating the maximum number of unit tasks $n_j^{\max}$ that a single MEC node can process), and (26) to (28) denote the computing capacity limitations of the local user ends of IoT, MEC nodes, and cloud nodes.

To solve the above problem, the sizes of the computing task blocks offloaded to the different nodes in equation (21) are collected into the vector $x$. The task allocation of nodes under optimal energy consumption is evaluated by analyzing the value of $x$. Given that $x$ determines numbers of data units, its entries can only be integers. Therefore, the optimization problem becomes an integer programming problem.
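On a tiny instance, this integer program can be checked by exhaustive search over all unit allocations, which serves as a ground truth when validating faster solvers. The per-unit cost model below (one energy and one latency cost per node) is a deliberate simplification for illustration, not the paper's full formulation.

```python
import itertools

def brute_force(n_units, nodes, t_max):
    """Exhaustively split n_units across nodes (index 0 = local,
    middle = MEC nodes, last = cloud) to minimise total energy under
    a latency cap. nodes: dicts with per-unit 'energy' and 'latency'."""
    best = (float("inf"), None)
    for alloc in itertools.product(range(n_units + 1), repeat=len(nodes)):
        if sum(alloc) != n_units:
            continue
        # Parallel execution: latency is the slowest node, energy the sum.
        latency = max(a * nd["latency"] for a, nd in zip(alloc, nodes))
        if latency > t_max:
            continue
        energy = sum(a * nd["energy"] for a, nd in zip(alloc, nodes))
        if energy < best[0]:
            best = (energy, alloc)
    return best
```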

4.2. Optimization Based on the BB Algorithm

The resource allocation scheme for the MEC nodes is determined by using the BB algorithm, which searches the entire feasible solution space of the constrained optimization problem. During the execution of this algorithm, the feasible solution space is continuously divided into smaller subsets, and a lower or upper bound is calculated for each subset. For the integer programming problem, the BB algorithm solves the ordinary linear programming relaxation using the simplex method and branches on a noninteger decision variable by rounding it down and up to the two nearest integers. The corresponding conditions are then added to the original problem, the updated constrained problem is solved, and the upper or lower bound of the objective value is identified.

In using the BB algorithm to solve the energy consumption optimization problem, equation (21) is taken as the objective function with $x$ as the independent variable. This objective function can be viewed as a linear programming problem expressed in terms of its independent variable. The independent variable satisfies

$x = \left[x_{i,0}, x_{i,1}, \ldots, x_{i,M}, x_{i,c}\right]^{\mathrm{T}}$.

Equation (21) can then be expressed as

$E = c^{\mathrm{T}} x$,

where $c_k$ is the coefficient before $x_k$, and the coefficient vector of the independent variables in the objective function is denoted by $c$. The constraint condition for latency in equation (21) is then transformed into

$g^{\mathrm{T}} x \le T_{\max}$,

where $g_k$ is the coefficient before $x_k$ in the constraint condition equation (22). Constraints (5) to (7) in equation (21) can then be transformed into

$A x \le b$.

These equations transform the constraints in equation (21) into a standard form in the independent variable $x$. Let $A$ denote the constraint matrix formed by this constraint equation set, and let $b$ denote the right-hand-side vector of this constraint equation set. The value ranges of the independent variable $x$ are given by constraints (6) to (8) of the objective function and are denoted by $lb$ and $ub$.

The basic process of the BB algorithm is shown in Table 1.

Since the BB algorithm searches the solution space in a breadth-first manner, the original problem is divided into multiple branches that are searched simultaneously for the optimal solution, eliminating a large number of nodes that cannot contain the optimum.

A local-edge-cloud edge computing network has $N$ UEs and $M$ MEC nodes; the data of each UE are divided into $n$ task blocks, and the allocation strategy of the UE task blocks and the offloading node of each partitioned block must be determined. The time complexity of the UE task block allocation process is determined by the number of task blocks $n$. Since multiple task blocks are transmitted at the same time, the new allocation strategy requires no sorting, and the optimal solution can be searched directly for each data offloading node, with a complexity determined by the number of candidate nodes $M$. The sum of the two gives the overall computational complexity of the BB algorithm.
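The BB idea can be sketched for the same integer allocation problem: branch on how many units each node receives and prune any partial assignment whose optimistic completion (remaining units priced at the cheapest per-unit energy) cannot beat the incumbent. The paper solves an LP relaxation with the simplex method at each node; this pure-enumeration sketch, with an illustrative per-unit cost model, demonstrates only the bounding principle.

```python
def branch_and_bound(n_units, nodes, t_max):
    """Allocate n_units across nodes (dicts with per-unit 'energy' and
    'latency') to minimise energy under a per-node latency cap."""
    cheapest = min(nd["energy"] for nd in nodes)
    best = [float("inf"), None]  # incumbent [energy, allocation]

    def recurse(idx, remaining, energy, alloc):
        # Bound: even pricing all remaining units at the cheapest
        # per-unit energy cannot beat the incumbent -> prune.
        if energy + remaining * cheapest >= best[0]:
            return
        if idx == len(nodes):
            if remaining == 0:
                best[0], best[1] = energy, tuple(alloc)
            return
        nd = nodes[idx]
        # Latency cap limits how many units this node may receive.
        max_here = min(remaining, int(t_max // nd["latency"]))
        for a in range(max_here + 1):
            recurse(idx + 1, remaining - a,
                    energy + a * nd["energy"], alloc + [a])

    recurse(0, n_units, 0.0, [])
    return best[0], best[1]
```

On the same toy instance used earlier in this section's discussion, the result matches exhaustive search while visiting far fewer assignments.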

5. Simulation Results

The multinode computing offloading algorithm proposed in Section 4 is compared with the traditional single-cloud offloading and multinode mutual transmission computing offloading algorithms. These algorithms are compared under different data sizes with the overall network energy consumption as the measurement standard. For the multinode collaborative offload model, three MEC nodes are set, and $x_l$, $x_1$, $x_2$, $x_3$, and $x_c$ represent the numbers of user data units at the local user ends of IoT, the three MEC nodes, and the cloud nodes, respectively. The three MEC nodes have different CPU parameters, and their distances from the local user ends of IoT are denoted by $d_1$, $d_2$, and $d_3$, respectively, assuming that the network transmission bandwidth meets the user demand. The data processing situation at one local user end of IoT is analyzed first to compare the network energy consumption of the models.

The assumption is that the network bandwidth is large enough to meet user needs; disregarding the limitation of the transmission bandwidth, the effect of the data transmission rate on network energy consumption is considered. The data transmission rates in the three cases are shown in Table 2.

The basic parameters used in the simulation are listed in Table 3.

5.1. Energy Consumption

The computing data size is varied to analyze the network energy consumption of the three computing offload models. Figure 2 presents the results.

Figure 2 shows that the network energy consumption of the multinode collaborative computing offload model is lower than that of the other two models. Specifically, when the offload data size at the local user ends of IoT is smaller than 1500 kbit, the network energy consumption of the multinode collaborative computing offload model, which involves parallel data transmission, is equal to that of the multinode mutual transmission computing offload model. Otherwise, the network energy consumption of the multinode collaborative computing offload model is lower than that of the multinode mutual transmission computing offload model. The network optimization effect of the proposed model is similar to that of the multinode mutual transmission computing offload model when the offload data size is small. However, the proposed model shows some advantages in network latency that can be attributed to its parallel transmission of computing tasks. Meanwhile, when the offload data size of the network is large, the proposed model significantly outperforms the other two models in terms of network energy consumption and network latency.

The allocations of offload data among nodes within the range of 1000 kbit to 5000 kbit are shown in Figure 3. The number of nodes engaged in resource allocation gradually increases with the computing offload data size. When the data size at the local user ends of IoT is moderate, the data can be processed between the local user ends and the MEC servers without being offloaded to the cloud node; a larger processing load then draws in additional nodes. The proposed algorithm outperforms the other two models when the task data size at the local user ends of IoT is large and is thereby conducive to optimizing the network.
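The paper determines this allocation with a BB (branch-and-bound) algorithm. A minimal sketch of such a search is given below, assuming a hypothetical cost model in which each node has a per-unit energy cost and per-unit processing time, nodes run in parallel, and a latency cap limits each node's share; this is an illustration of the technique, not the paper's formulation:

```python
def branch_and_bound(total_units, nodes, latency_cap):
    """Branch-and-bound over integer data-unit allocations.

    nodes: list of (energy_per_unit, time_per_unit) tuples. Execution is
    parallel, so the latency cap bounds each node's share independently.
    Returns (best_energy, allocation). Hypothetical cost model.
    """
    best = [float("inf"), None]

    def recurse(i, remaining, alloc, energy):
        if energy >= best[0]:              # bound: cannot improve incumbent
            return
        if i == len(nodes):
            if remaining == 0:             # feasible leaf: all data placed
                best[0], best[1] = energy, tuple(alloc)
            return
        e_u, t_u = nodes[i]
        max_units = min(remaining, int(latency_cap // t_u))
        for units in range(max_units, -1, -1):   # branch on node i's share
            alloc.append(units)
            recurse(i + 1, remaining - units, alloc, energy + units * e_u)
            alloc.pop()

    recurse(0, total_units, [], 0.0)
    return best[0], best[1]

# 10 data units, three nodes, latency cap of 4 time units:
# the cheap-but-slow node is filled first, the expensive node stays idle.
print(branch_and_bound(10, [(1.0, 1.0), (1.5, 0.5), (3.0, 0.25)], 4.0))
```

The pruning step discards any partial allocation whose accumulated energy already matches the best complete solution, which is what makes BB tractable compared with exhaustive enumeration.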

When the network bandwidth is changed, the effect of the information transmission rate on network energy consumption should be considered.

Figure 4 shows that the lowest network energy consumption is achieved under case 3, with the lowest and highest transmission rates observed under cases 1 and 3, respectively. The overall network energy consumption is negatively correlated with the network transmission rate. Given that all computing tasks are transmitted simultaneously in the proposed multinode collaborative computing offload model, a higher transmission rate allows a larger data size to be transmitted simultaneously and a higher offload quantity at the local user ends of IoT. The simulation results show that the total data sizes under cases 1 to 3 are 5000, 7500, and 17500 kbit, respectively. In sum, the overall data size that the network can process increases with the network transmission rate, and at the same computing data size, the network energy consumption decreases as the transmission rate increases.
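The inverse relation between transmission rate and energy follows directly from a constant-power link model: transfer time, and hence transmitted energy, shrinks as the rate grows. A short sketch for a fixed 5000 kbit task, with hypothetical rates and a hypothetical 0.5 W transmit power (the actual case rates are given in Table 2):

```python
def tx_energy_mj(data_kbit, rate_gbps, tx_power_w=0.5):
    """Transmission energy (mJ) at constant transmit power."""
    return tx_power_w * data_kbit * 1e3 / (rate_gbps * 1e9) * 1e3

# Energy to transmit the same 5000 kbit task at three assumed link rates.
for rate in (0.5, 1.0, 2.0):
    print(f"{rate} Gbit/s -> {tx_energy_mj(5000, rate):.2f} mJ")
```

Doubling the rate halves the transmission energy for the same data size, which matches the negative correlation observed in Figure 4.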

When the data size for processing at the local user ends of IoT is very large, the overall data transmission rate in the network should be increased. Specifically, when the task data size in the computing network ranges from 10000 kbit to 50000 kbit, the data transmission rate should be increased to 2 Gbit/s. The task allocation among nodes is shown in Figure 5.

The data transmission rate in the network increases when the data size at the local user ends of IoT is very large. Figure 5 shows that the size of data offloaded to the node with the lowest CPU energy consumption increases significantly with the data size at the local user ends. When the computing task is very large, CPU energy consumption becomes a main influencing factor for network energy consumption. The total data size offloaded to the cloud node increases continuously with the offload data size, highlighting the superiority of the edge-cloud cooperation mechanism under a large data size.

6. Conclusions

To realize green communication in smart homes, a multinode collaborative computing offload model is proposed in this paper. In this model, the local user ends of IoT segment the computing task following certain rules, and the segmented data are then reasonably distributed and simultaneously transmitted to multiple nodes for execution. The traditional single-cloud computing offload model and the multinode mutual transmission computing offload model are analyzed on this basis. Treating the overall network energy consumption as the optimization goal and latency as the optimization constraint, the allocation of resources among MEC nodes is determined by a branch-and-bound (BB) algorithm. The proposed model is also compared with the two traditional models. Under a large offload task size, the proposed multinode collaborative computing offload model achieves the lowest network energy consumption and the best latency characteristics among all models. The CPU parameters of the MEC nodes greatly influence the network energy consumption, and under a large data size, the multi-MEC-node and edge-cloud collaborative model shows improved network characteristics. Both the network bandwidth and the information transmission rate also influence the data offload performance of the network to some extent. In the multinode collaborative computing offload model, the parallel transmission of segmented data tasks allows large computing tasks to be processed at a low overall network energy consumption and a high data transmission rate.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (No. 61771195), the Natural Science Foundation of Hebei Province (No. F2018502047), and the Fundamental Research Funds for the Central Universities (No. 2020MS098).