A multi-objective reinforcement learning algorithm for deadline constrained scientific workflow scheduling in clouds
Frontiers of Computer Science (IF 3.4), Pub Date: 2021-05-27, DOI: 10.1007/s11704-020-9273-z
Yao Qin, Hua Wang, Shanwen Yi, Xiaole Li, Linbo Zhai

Recently, a growing number of scientific applications have been migrated into the cloud. To cope with the problems this brings, researchers have increasingly begun to consider multiple optimization goals in workflow scheduling. However, previous work overlooks details that are challenging but essential. Most existing multi-objective workflow scheduling algorithms neglect weight selection, which may degrade the quality of the resulting solutions. Moreover, we find that the well-known partial critical path (PCP) strategy, widely used to meet deadline constraints, cannot accurately reflect the scheduling situation at each time step. Workflow scheduling is an NP-hard problem, so self-optimizing algorithms are well suited to solving it.

This paper aims to solve a workflow scheduling problem with a deadline constraint. We design a deadline-constrained scientific workflow scheduling algorithm based on multi-objective reinforcement learning (RL), called DCMORL. DCMORL uses the Chebyshev scalarization function to scalarize its Q-values, which makes it effective at choosing weights for the objectives. We also propose an improved version of the PCP strategy, called MPCP, whose sub-deadlines are updated regularly during the scheduling phase so that they accurately reflect the situation at each time step. The optimization objectives are to minimize execution cost and energy consumption within a given deadline. Finally, we compare DCMORL with several representative scheduling algorithms on four scientific workflows; the results indicate that DCMORL outperforms all of them. To the best of our knowledge, this is the first application of RL to a deadline-constrained workflow scheduling problem.
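The abstract does not give the exact formulation, but in multi-objective Q-learning the Chebyshev scalarization typically maps a vector of per-objective Q-values to a single score via a weighted distance to a utopian reference point, and the agent acts greedily on that score. Below is a minimal sketch under that assumption, for two minimization objectives (cost and energy); all names (chebyshev_scalarize, q_table, utopian) are illustrative, not taken from the paper.

```python
import numpy as np

def chebyshev_scalarize(q_vector, weights, utopian):
    """Chebyshev scalarization: weighted L-infinity distance between a
    per-objective Q-vector and a utopian reference point z*.

    q_vector : (n_objectives,) Q-values for one (state, action) pair
    weights  : (n_objectives,) non-negative weights summing to 1
    utopian  : (n_objectives,) point slightly better than the best
               value observed so far for each objective
    """
    return np.max(weights * np.abs(q_vector - utopian))

def greedy_action(q_table, state, weights, utopian):
    """Pick the action whose scalarized Q-value is smallest (both
    objectives here, cost and energy, are to be minimized)."""
    scores = [chebyshev_scalarize(q_table[state, a], weights, utopian)
              for a in range(q_table.shape[1])]
    return int(np.argmin(scores))

# Toy usage: 10 states, 4 actions, 2 objectives (cost, energy).
q_table = np.random.rand(10, 4, 2)
utopian = q_table.min(axis=(0, 1)) - 1e-3  # just beyond the observed best
print(greedy_action(q_table, state=0,
                    weights=np.array([0.5, 0.5]), utopian=utopian))
```

One practical appeal of this scalarization over a plain weighted sum is that it can reach Pareto-optimal solutions in non-convex regions of the front, which makes the choice of weights less delicate.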
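Similarly, the abstract says only that MPCP's sub-deadlines are updated regularly during scheduling. One plausible reading, sketched below, is to redistribute the remaining slack of a partial critical path over its unscheduled tasks, in proportion to their estimated runtimes, each time the scheduler advances; the function and parameter names are hypothetical.

```python
def assign_sub_deadlines(path_tasks, est_runtime, now, path_deadline):
    """Distribute the remaining time budget of one partial critical
    path over its not-yet-scheduled tasks, proportionally to their
    estimated runtimes. Re-calling this as tasks finish keeps the
    sub-deadlines in step with the actual schedule.

    path_tasks    : tasks on the path, in execution order
    est_runtime   : dict mapping task -> estimated execution time
    now           : current time (earliest start of the first task)
    path_deadline : absolute deadline of the last task on the path
    """
    total = sum(est_runtime[t] for t in path_tasks)
    budget = path_deadline - now
    sub_deadline, cursor = {}, now
    for t in path_tasks:
        cursor += budget * est_runtime[t] / total
        sub_deadline[t] = cursor
    return sub_deadline

# Toy usage: three tasks sharing a 60-time-unit budget.
print(assign_sub_deadlines(["t1", "t2", "t3"],
                           {"t1": 10, "t2": 20, "t3": 30},
                           now=0, path_deadline=60))
```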



