Improving Data-Analytics Performance Via Autonomic Control of Concurrency and Resource Units
ACM Transactions on Autonomous and Adaptive Systems (IF 2.2), Pub Date: 2019-03-15, DOI: 10.1145/3309539
Gil Jae Lee, José A. B. Fortes

Many big-data processing jobs use data-analytics frameworks such as Apache Hadoop (currently also known as YARN). Such frameworks have tunable configuration parameters set by experienced system administrators and/or job developers. However, tuning parameters manually can be hard and time-consuming because it requires domain-specific knowledge and an understanding of complex inter-dependencies among parameters. Most of the frameworks seek efficient resource management by assigning resource units to jobs, the maximum number of units allowed in a system being part of the static configuration of the system. This static resource management has limited effectiveness in coping with job diversity and workload dynamics, even in the case of a single job. The work reported in this article seeks to improve performance (e.g., multi-job makespan and job completion time) without modifying either the framework or the applications, while avoiding problems of previous self-tuning approaches based on performance models or resource usage. These problems include (1) the need for time-consuming, typically offline, training and (2) unsuitability for multi-job/multi-tenant environments. This article proposes a hierarchical self-tuning approach using (1) a fuzzy-logic controller to dynamically adjust the maximum number of concurrent jobs and (2) additional controllers (one for each cluster node) to adjust the maximum number of resource units assigned to jobs on each node. The fuzzy-logic controller uses fuzzy rules based on a concave-downward relationship between aggregate CPU usage and the number of concurrent jobs. The other controllers use a heuristic algorithm to adjust the number of resource units on the basis of both CPU and disk IO usage by jobs. To manage the maximum number of available resource units in each node, the controllers also take resource usage by other processes (e.g., system processes) into account.
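The cluster-level control loop described above can be sketched in miniature. The following Python fragment is a hypothetical illustration, not the paper's actual rule base: the membership functions, thresholds, and step size are all assumptions chosen only to show the shape of a fuzzy-logic controller that raises the concurrent-job cap when aggregate CPU usage is low and lowers it when usage is high.

```python
def membership_low(cpu):
    """Degree (0..1) to which CPU usage (0..1) is 'low'; triangular, peaks at 0."""
    return max(0.0, 1.0 - cpu / 0.5)

def membership_high(cpu):
    """Degree (0..1) to which CPU usage is 'high'; triangular, peaks at 1."""
    return max(0.0, (cpu - 0.5) / 0.5)

def adjust_max_concurrency(current_max, cpu_usage, step=1, lo=1, hi=32):
    """Illustrative fuzzy rule base:
         IF cpu is low  THEN increase the max number of concurrent jobs
         IF cpu is high THEN decrease the max number of concurrent jobs
       Defuzzified as a weighted step, clamped to [lo, hi]."""
    low = membership_low(cpu_usage)
    high = membership_high(cpu_usage)
    delta = round(step * (low - high))
    return min(hi, max(lo, current_max + delta))
```

For example, with a cap of 4 jobs, low CPU usage (0.1) raises the cap to 5, high usage (0.95) lowers it to 3, and mid-range usage leaves it unchanged. The paper's node-level controllers would act analogously but per node, using a heuristic over both CPU and disk IO usage and subtracting resources consumed by other (e.g., system) processes.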
A prototype of our approach was implemented for Apache Hadoop on a cluster running at CloudLab. The proposed approach was demonstrated and evaluated with workloads composed of jobs with similar resource usage patterns as well as other realistic mixed-pattern workloads synthesized by SWIM, a statistical workload injector for MapReduce. The evaluation shows that the proposed approach yields up to a 48% reduction in job makespan relative to that obtained with Hadoop's default settings.
