
OMBM-ML: efficient memory bandwidth management for ensuring QoS and improving server utilization


Abstract

As cloud data centers grow rapidly, many applications are migrated to them for the cost benefits in maintenance and hardware resources. Latency-critical workloads, however, struggle to fully realize this cost-effectiveness. To meet their QoS targets strictly, such workloads must exhibit stable, predictable latencies; yet when they are co-located with other workloads to save cost, contention for shared hardware resources causes QoS violations. To guarantee QoS while improving hardware utilization, we propose a memory bandwidth management method with an effective machine-learning prediction model. The model, based on a REP (Reduced Error Pruning) decision tree, estimates the amount of memory bandwidth to allocate to the latency-critical workload. To construct the model, we first collect profiling data and then train the model on it. The trained model can estimate the memory bandwidth needed to meet the SLO of the latency-critical workload regardless of which batch-processing workloads are co-located. Our approach achieves up to 99% SLO assurance and improves server utilization by up to 6.8× on average.
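To make the described workflow concrete, the following is a minimal sketch of how such a REP-tree predictor could be trained and queried, assuming Weka's REPTree implementation (the standard reduced-error-pruning decision tree) and a hypothetical profiling dataset bandwidth_profile.arff whose rows hold co-location features and whose last attribute is the memory bandwidth that satisfied the SLO. The file name, feature layout, and units are illustrative assumptions, not details taken from the paper.

```java
import weka.classifiers.trees.REPTree;
import weka.core.Instance;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class BandwidthPredictor {
    public static void main(String[] args) throws Exception {
        // Load profiling data (hypothetical ARFF file): each row describes a
        // co-location scenario, and the last attribute is the memory
        // bandwidth allocation that met the latency-critical workload's SLO.
        DataSource source = new DataSource("bandwidth_profile.arff");
        Instances data = source.getDataSet();
        data.setClassIndex(data.numAttributes() - 1);

        // REPTree: a decision tree learner with reduced-error pruning.
        REPTree tree = new REPTree();
        tree.buildClassifier(data);

        // Query the model for one scenario; with a numeric class attribute,
        // classifyInstance returns the regression prediction.
        Instance scenario = data.instance(0);
        double predictedBandwidth = tree.classifyInstance(scenario);
        System.out.printf("Predicted bandwidth: %.1f GB/s%n", predictedBandwidth);
    }
}
```

Reduced error pruning grows a full tree and then removes subtrees that do not improve accuracy on a held-out validation fold, which keeps the model compact and its predictions fast, a useful property when bandwidth allocation decisions must be made online.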





Acknowledgements

This research was supported by (1) an Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (R0190-16-2012, High Performance Big Data Analytics Platform Performance Acceleration Technologies Development); (2) a National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF-2017R1A2B4004513, Optimizing GPGPU virtualization in multi-GPGPU environments through kernels' concurrent execution-aware scheduling); (3) a National Research Foundation (NRF) grant (NRF-2016M3C4A7952587, PF Class Heterogeneous High Performance Computer Development); and (4) BK21 FOUR Intelligence Computing (Dept. of Computer Science and Engineering, SNU) funded by the National Research Foundation of Korea (NRF) (Grant 4199990214639).

Author information


Corresponding author

Correspondence to Hyeonsang Eom.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Sung, H., Min, J., Koo, D. et al. OMBM-ML: efficient memory bandwidth management for ensuring QoS and improving server utilization. Cluster Comput 24, 181–193 (2021). https://doi.org/10.1007/s10586-020-03191-2


