Cost-aware automatic scaling and workload-aware replica management for edge-cloud environment
Journal of Network and Computer Applications (IF 8.7). Pub Date: 2021-02-17. DOI: 10.1016/j.jnca.2021.103017
Chunlin Li, Jun Liu, Bo Lu, Youlong Luo

As mobile edge computing continues to develop, many diverse edge applications have emerged, such as the Internet of Vehicles, virtual reality games, and lightweight deep learning tasks. These applications are latency-sensitive and require a large number of network connections. As the data volume and the number of business requests processed per unit time keep growing, the load on the edge cloud increases to the point where it can no longer provide timely and effective services to users. To meet the strict requirements of delay-sensitive application services, on the one hand, the energy consumption of edge devices should be reduced and computation-intensive modules should be offloaded to remote servers for execution. On the other hand, an automatic scaling model is proposed to reduce the total cost of the rented instances. The dynamic replica management model reduces response time and energy consumption while balancing the workload across hosts. In the automatic scaling experiments, the cumulative total cost and power consumption of the proposed algorithm are lower than those of the CAAS and MLC algorithms. Two hours into the experiment, the CPU utilization of the proposed algorithm is higher than that of CAAS and MLC. In the dynamic replica placement experiments, the average response time and load balancing value of our data migration algorithm are lower than those of BA and LA. The available storage space under our replica placement method is more evenly balanced than under both DDD and WA.
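
As a rough illustration of the two mechanisms summarized in the abstract (not the algorithm actually proposed in the paper), the sketch below shows a threshold-based, cost-aware scaling decision and a workload-aware replica placement rule. All class names, thresholds, and prices are hypothetical assumptions for the sketch.

```python
# A minimal sketch (not the paper's actual algorithm) of the two ideas the
# abstract describes: (1) a cost-aware auto-scaler that rents or releases
# instances based on CPU utilization and a price budget, and (2) a
# workload-aware replica placer that puts a new replica on the least-loaded
# host with enough free storage. All names and numbers are illustrative.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Instance:
    cpu_util: float          # current CPU utilization in [0, 1]
    hourly_price: float      # rental price per hour


@dataclass
class Host:
    name: str
    load: float              # normalized workload in [0, 1]
    free_storage_gb: float
    replicas: List[str] = field(default_factory=list)


class CostAwareAutoScaler:
    """Scale the pool of rented instances around utilization thresholds,
    refusing to scale up when the projected hourly cost exceeds the budget."""

    def __init__(self, scale_up_util=0.8, scale_down_util=0.3, hourly_budget=10.0):
        self.scale_up_util = scale_up_util
        self.scale_down_util = scale_down_util
        self.hourly_budget = hourly_budget

    def decide(self, instances: List[Instance], new_instance_price: float) -> str:
        avg_util = sum(i.cpu_util for i in instances) / len(instances)
        current_cost = sum(i.hourly_price for i in instances)
        if avg_util > self.scale_up_util:
            # Only rent another instance if it still fits the cost budget.
            if current_cost + new_instance_price <= self.hourly_budget:
                return "scale_up"
            return "hold_budget_exceeded"
        if avg_util < self.scale_down_util and len(instances) > 1:
            return "scale_down"
        return "hold"


def place_replica(hosts: List[Host], replica_id: str, size_gb: float) -> Optional[Host]:
    """Workload-aware placement: among hosts with enough free storage,
    pick the one with the lowest current load to keep hosts balanced."""
    candidates = [h for h in hosts if h.free_storage_gb >= size_gb]
    if not candidates:
        return None
    target = min(candidates, key=lambda h: h.load)
    target.replicas.append(replica_id)
    target.free_storage_gb -= size_gb
    return target


if __name__ == "__main__":
    pool = [Instance(cpu_util=0.9, hourly_price=2.0),
            Instance(cpu_util=0.85, hourly_price=2.0)]
    scaler = CostAwareAutoScaler()
    print(scaler.decide(pool, new_instance_price=2.0))      # -> "scale_up"

    hosts = [Host("edge-1", load=0.7, free_storage_gb=50),
             Host("edge-2", load=0.2, free_storage_gb=80)]
    chosen = place_replica(hosts, "replica-42", size_gb=10)
    print(chosen.name if chosen else "no host available")   # -> "edge-2"
```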



Updated: 2021-02-23