An intelligent approach for predicting resource usage by combining decomposition techniques with NFTS network
Cluster Computing (IF 4.4) Pub Date: 2020-05-02, DOI: 10.1007/s10586-020-03099-x
Seyedeh Yasaman Rashida, Masoud Sabaei, Mohammad Mehdi Ebadzadeh, Amir Masoud Rahmani

Time-sensitive virtual machines that run real-time control tasks are constrained by hard timing requirements. Optimal resource management and guaranteeing the hard timing requirements of virtual machines are critical goals, and predicting cloud resource usage and reserving resources play a crucial role in achieving both. We therefore propose a prediction approach that combines a two-phase decomposition method with a hybrid neural network to forecast future resource usage. This paper uses a clustering method based on the AnYa algorithm in an online manner to obtain the number of fuzzy rules and the initial values of the premise and consequent parameters. Since cloud resource usage varies widely from time to time and from server to server, extracting the best time-series model for predicting it depends not only on time but also on the resource-usage trend. For this, we present a recursive hybrid technique based on singular spectrum analysis and adaptively fast ensemble empirical mode decomposition to identify the hidden characteristics of the time-series data; this method extracts the seasonal and irregular components of the time series. According to the simulation results, the proposed model performs significantly better than the three comparison models on one-step to six-step CPU usage predictions, with average improvements of 33.83% in MAPE, 36.54% in MAE, and 36.70% in RMSE.
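The AnYa-based rule initialization works online: each incoming sample either joins an existing data cloud or founds a new one, and each cloud seeds one fuzzy rule. As a rough illustration only, here is a much-simplified distance-threshold stand-in for AnYa's density-based cloud formation (the function name, the `radius` threshold, and the one-dimensional samples are assumptions for the sketch, not the paper's algorithm):

```python
def online_cloud_clustering(samples, radius):
    """Simplified online clustering: each resulting cloud seeds one fuzzy rule.
    A sample farther than `radius` from every cloud mean starts a new cloud.
    (Stand-in for AnYa's recursive-density data clouds; the fixed threshold
    is an illustrative assumption.)"""
    clouds = []  # each cloud: {"mean": running mean, "count": samples absorbed}
    for x in samples:
        if clouds:
            nearest = min(clouds, key=lambda c: abs(x - c["mean"]))
            if abs(x - nearest["mean"]) <= radius:
                nearest["count"] += 1
                # Incremental mean update for the winning cloud.
                nearest["mean"] += (x - nearest["mean"]) / nearest["count"]
                continue
        clouds.append({"mean": x, "count": 1})
    return clouds

# Illustrative usage: CPU-usage samples (percent); radius is a tuning assumption.
rules = online_cloud_clustering([12.0, 13.1, 11.5, 48.0, 50.2, 12.4], 5.0)
print(len(rules))  # → 2
```

In the paper's setting, each cloud would also supply initial premise (membership) and consequent parameters for the NFTS network; here only the rule count and cloud means are shown.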
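The SSA half of the two-phase decomposition can be sketched as four steps: embed the series into a trajectory (Hankel) matrix, take its SVD, group the elementary components, and reconstruct each group by diagonal averaging. A minimal NumPy sketch, assuming a window length and a leading-components/residual grouping chosen purely for illustration (the paper's actual SSA configuration and the ensemble-EMD phase are not reproduced):

```python
import numpy as np

def ssa_decompose(series, window, groups):
    """Basic singular spectrum analysis: embed, SVD, group, reconstruct."""
    n = len(series)
    k = n - window + 1
    # Trajectory (Hankel) matrix: columns are lagged windows of the series.
    X = np.column_stack([series[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    components = []
    for idx in groups:
        # Sum of the selected rank-1 elementary matrices.
        Xg = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in idx)
        # Diagonal averaging (Hankelization) back to a 1-D series.
        comp = np.array([Xg[::-1].diagonal(j).mean()
                         for j in range(-window + 1, k)])
        components.append(comp)
    return components

# Illustrative split: first two singular components (trend/seasonal-like)
# versus the remainder (irregular-like); window length 24 is an assumption.
series = np.sin(np.linspace(0, 8 * np.pi, 120)) + np.linspace(0, 2, 120)
seasonal, irregular = ssa_decompose(series, 24, [list(range(2)), list(range(2, 24))])
```

Because the two groups together cover every singular component, their sum reconstructs the original series exactly, which is a handy sanity check for the implementation.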
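The reported gains are averages of three standard error measures over one- to six-step-ahead forecasts. For reference, the metrics can be computed as follows (pure Python; the sample values are illustrative, not taken from the paper):

```python
import math

def mape(actual, predicted):
    """Mean absolute percentage error, in percent (actual values must be nonzero)."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def mae(actual, predicted):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root mean squared error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

# Illustrative CPU-usage values (percent), not from the paper.
actual = [40.0, 55.0, 50.0, 60.0]
predicted = [42.0, 53.0, 51.0, 57.0]
print(round(mae(actual, predicted), 2))  # → 2.0
```

A "33.83% average performance promotion" in MAPE then means the proposed model's MAPE, averaged over the six horizons, is 33.83% lower than the comparison models'.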




Updated: 2020-05-02