Decomposition Based Cloud Resource Demand Prediction Using Extreme Learning Machines
Journal of Network and Systems Management (IF 4.1) Pub Date: 2020-07-31, DOI: 10.1007/s10922-020-09557-6
Jitendra Kumar , Ashutosh Kumar Singh

Cloud computing has drastically transformed the means of computing in the past few years. Apart from its numerous advantages, it suffers from a number of issues including resource under-utilization, load balancing, and power consumption. Workload prediction is widely explored as a way to address these issues, using models based on time series analysis, regression, and neural networks. Models based on time series analysis are unable to capture the dynamics of workload behavior, whereas neural network based models offer better accuracy at the cost of long training times. This paper presents a workload prediction model based on extreme learning machines (ELM), whose training time is very low and which forecasts the workload more accurately. The performance of the model is evaluated on two real-world cloud server workloads, i.e., the CPU and memory demand traces of a Google cluster, and compared with predictive models based on state-of-the-art techniques including Auto Regressive Integrated Moving Average (ARIMA), Support Vector Regression (SVR), Linear Regression (LR), Differential Evolution (DE), the Blackhole Algorithm (BhA), and Back Propagation (BP). It is observed that the proposed model outperforms the state-of-the-art techniques by reducing the mean prediction error by up to 100% and 99% on the CPU and memory request traces, respectively.
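The abstract attributes ELM's low training time to its learning scheme: hidden-layer weights are drawn at random and kept fixed, so only the output weights need to be learned, via a single least-squares solve. The sketch below illustrates that core idea on a toy workload trace; it is a minimal illustration in Python/NumPy, not the paper's method — the decomposition step, hyperparameters, and names (`elm_fit`, `elm_predict`, the window size) are assumptions for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=64):
    """Train an ELM regressor: random hidden layer, least-squares output."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random, fixed input weights
    b = rng.normal(size=n_hidden)                # random, fixed biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))       # sigmoid hidden activations
    beta = np.linalg.pinv(H) @ y                 # output weights in one solve
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy "resource demand" trace: predict the next value from a sliding window.
trace = np.sin(np.linspace(0, 20, 300)) + 0.05 * rng.normal(size=300)
win = 10
X = np.array([trace[i:i + win] for i in range(len(trace) - win)])
y = trace[win:]

W, b, beta = elm_fit(X[:250], y[:250])           # train on the first part
pred = elm_predict(X[250:], W, b, beta)          # forecast the held-out tail
mse = float(np.mean((pred - y[250:]) ** 2))
```

Because training reduces to one pseudoinverse computation, fitting is orders of magnitude faster than iteratively training a backpropagation network of the same size, which is the trade-off the abstract highlights.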

Updated: 2020-07-31