Optimize Task Allocation in Cloud Environment Based on Big-Bang Big-Crunch
Wireless Personal Communications (IF 1.9), Pub Date: 2020-07-29, DOI: 10.1007/s11277-020-07651-1
Pradeep Singh Rawat, Priti Dimri, Soumen Kanrar, Gyanendra Pal Saroha

Efficient resource allocation is indispensable in the current service-oriented computing paradigm. The allocation of instances to hosts and of tasks to instances depends on the efficiency of the scheduling technique. In this work, we address the provisioning of tasks (cloudlets) onto virtual machines. A cost-aware Big-Bang Big-Crunch (BB-BC) model is proposed for efficient resource allocation. The proposed technique follows the principles of evolutionary optimization, and performance is measured in terms of makespan and resource cost. Our cost-aware BB-BC model provides an optimal solution under the IaaS (Infrastructure as a Service) model and supports dynamic, independent task allocation on virtual machines. The technique employs an evolutionary scheme whose objective function depends on the performance metrics cost and time. The input dataset defines the number of host nodes and the datacenter configuration. The evolution-based, cost-aware BB-BC method provides a globally optimal solution in a dynamic resource-provisioning environment, and our approach yields better simulation results than existing static, dynamic, and bio-inspired evolutionary provisioning techniques. Simulation results show that the cost-aware BB-BC method produces an adequate schedule of tasks on the respective virtual machines. Reliability is measured using the operational cost of the resources over the execution duration. Efficient resource utilization and the globally optimal solution depend on the fitness function. The simulation results illustrate that our cost-aware, cosmology-inspired soft-computing methodology provides better results than time-aware and cost-aware scheduling approaches. From the simulation results, it is observed that the proposed cost-aware BB-BC methodology improves the average finish time by 15.23% with 300 user requests and by 19.18% with a population size of 400, while the average resource cost improves by 30.46% with a population size of 400. An infrastructure cloud is considered for the performance measurement of the proposed cost-aware model, which is evaluated against static, dynamic, and meta-heuristic bio-inspired resource allocation techniques.
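The abstract describes an evolutionary BB-BC scheme that searches over task-to-VM assignments against a fitness function combining makespan and resource cost. The paper itself provides no code; the sketch below is a minimal, generic Big-Bang Big-Crunch allocator in Python under assumed data (task lengths, VM speeds, per-second VM costs) and an assumed weighted-sum fitness, not the authors' exact model or parameter settings.

```python
import numpy as np

# Hypothetical problem data (not from the paper): per-task lengths (MI),
# per-VM speeds (MIPS) and an assumed per-second usage cost.
rng = np.random.default_rng(42)
NUM_TASKS, NUM_VMS = 300, 20
task_len = rng.uniform(1e3, 5e4, NUM_TASKS)   # task lengths
vm_mips = rng.uniform(500, 2500, NUM_VMS)     # VM processing speeds
vm_cost = vm_mips * 1e-4                      # assumed cost per second of use
W_TIME, W_COST = 0.5, 0.5                     # assumed fitness weights

def fitness(assign):
    """Assumed cost-aware objective: weighted sum of makespan and total resource cost."""
    exec_time = task_len / vm_mips[assign]    # runtime of each task on its VM
    vm_busy = np.zeros(NUM_VMS)
    np.add.at(vm_busy, assign, exec_time)     # accumulate per-VM busy time
    makespan = vm_busy.max()
    total_cost = (vm_busy * vm_cost).sum()
    return W_TIME * makespan + W_COST * total_cost

def bb_bc(pop_size=400, iters=200, alpha=1.0):
    """Minimal Big-Bang Big-Crunch sketch for task-to-VM allocation."""
    # Big Bang: random initial population, one real value per task in [0, NUM_VMS)
    pop = rng.uniform(0, NUM_VMS, (pop_size, NUM_TASKS))
    best, best_fit = None, np.inf
    for k in range(1, iters + 1):
        assigns = np.clip(pop.astype(int), 0, NUM_VMS - 1)
        fits = np.array([fitness(a) for a in assigns])
        if fits.min() < best_fit:
            best_fit, best = fits.min(), assigns[fits.argmin()].copy()
        # Big Crunch: fitness-weighted centre of mass (lower fitness => larger weight)
        w = 1.0 / (fits + 1e-12)
        centre = (w[:, None] * pop).sum(axis=0) / w.sum()
        # New Big Bang: scatter candidates around the centre with a shrinking radius
        radius = alpha * NUM_VMS / k
        pop = np.clip(centre + rng.standard_normal((pop_size, NUM_TASKS)) * radius,
                      0, NUM_VMS - 1e-9)
    return best, best_fit

allocation, value = bb_bc()
print("best fitness:", round(value, 2))
```

The shrinking search radius around the centre of mass is the standard BB-BC convergence mechanism; the discrete VM index is obtained here simply by truncating each continuous coordinate, which is one common way to apply BB-BC to an assignment problem.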




Updated: 2020-07-29