
Design and implementation of an I/O isolation scheme for key-value store on multiple solid-state drives

Published in: Cluster Computing

Abstract

High-performance storage devices, such as Non-Volatile Memory express Solid-State Drives (NVMe SSDs), have been widely adopted in data centers. In particular, multiple storage devices provide higher I/O performance than a single device. However, even when multiple storage devices are adopted, performance can degrade under workloads with mixed read and write requests (e.g., key-value stores), because read requests can be blocked until the write requests have been processed. In this article, we propose an I/O isolation scheme to improve the performance of the key-value store on multiple SSDs. In our scheme, we classify the files of the key-value store and place them on separate storage devices according to the characteristics of each file, so that read and write operations are performed on different storage devices. In addition, we propose two device mapping methods, namely fixed and adaptive device mapping, to place each file on the proper device. We implement our scheme in RocksDB with multiple storage devices (six NVMe SSDs) and extend it to an open-channel SSD, which exposes its internal hardware architecture, to verify the effectiveness of read/write isolation within a single storage device. The experimental results demonstrate that our scheme improves performance by up to 29% and 26% on the open-channel SSD and multiple storage devices, respectively, compared with the existing scheme.
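The core idea of the abstract — classify key-value store files by their access characteristics, then map each file to a device via either a fixed or an adaptive policy — can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the file categories, the filename conventions, and the pending-write load metric used by the adaptive mapper are all assumptions for illustration.

```python
# Hypothetical sketch of read/write isolation across multiple devices.
# File categories and naming conventions are assumptions, not RocksDB's actual scheme.

FILE_TYPES = ("wal", "l0_sst", "l1plus_sst")

def classify(filename: str) -> str:
    """Guess a file's category from its name (hypothetical naming scheme)."""
    if filename.endswith(".log"):
        return "wal"           # write-ahead log: write-intensive
    if "level0" in filename:
        return "l0_sst"        # L0 SSTs: written by flushes
    return "l1plus_sst"        # L1+ SSTs: mostly read by queries

class FixedMapper:
    """Fixed device mapping: each file type is pinned to one device,
    so reads (L1+ SSTs) and writes (WAL, L0 SSTs) hit different SSDs."""
    def __init__(self, devices):
        self.table = dict(zip(FILE_TYPES, devices))

    def device_for(self, filename: str) -> str:
        return self.table[classify(filename)]

class AdaptiveMapper:
    """Adaptive device mapping: place each new file on the device with the
    fewest outstanding writes (the load metric here is an assumption)."""
    def __init__(self, devices):
        self.pending_writes = {d: 0 for d in devices}

    def device_for(self, filename: str) -> str:
        return min(self.pending_writes, key=self.pending_writes.get)
```

Under the fixed policy, a write-intensive WAL never shares a device with read-mostly L1+ SSTs; the adaptive policy instead reacts to the current write load, which can help when the write intensity of each file type shifts over time.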



Notes

  1. Dirty pages are pages whose data differs from the data stored in the flash chips; they contain the most recent data of the application. Therefore, the controller of the storage device must write the contents of dirty pages to the flash chips when the pages are evicted from the buffer cache.

  2. The terms L0 and L1+ refer to Level 0 and the levels above it, respectively. SST stands for Sorted String Table. The terminology is explained in detail in Sects. 2.1 and 2.2.

References

  1. Kim, H., Yeom, H.Y., Son, Y.: An I/O isolation scheme for key-value store on multiple solid-state drives. In: 2019 IEEE 4th International Workshops on Foundations and Applications of Self* Systems (FAS*W), pp. 170–175 (2019)

  2. Xu, Q., Siyamwala, H., Ghosh, M., Suri, T., Awasthi, M., Guz, Z., Shayesteh, A., Balakrishnan, V.: Performance analysis of NVMe SSDs and their implication on real world databases. In: Proceedings of the 8th ACM International Systems and Storage Conference, p. 6 (2015)

  3. Bates, S.: Accelerating data centers using NVMe and CUDA. Flash Memory Summit (2014)

  4. KeunSoo, J.: Scaling from datacenter to client. Flash Memory Summit (2014)

  5. Thummarukudy, R.: Designing a configurable NVM express controller/subsystem. Flash Memory Summit (2014)

  6. O’Neil, P., Cheng, E., Gawlick, D., O’Neil, E.: The log-structured merge-tree (LSM-tree). Acta Inform. 33(4), 351–385 (1996)


  7. LevelDB. http://leveldb.org/

  8. Apache HBase. http://hbase.apache.org

  9. RocksDB. https://rocksdb.org/

  10. Papaioannou, A., Magoutis, K.: Replica-group leadership change as a performance enhancing mechanism in NoSQL data stores. In: 2018 IEEE 38th International Conference on Distributed Computing Systems (ICDCS), pp. 1448–1453 (2018)

  11. Chen, F., Lee, R., Zhang, X.: Essential roles of exploiting internal parallelism of flash memory based solid state drives in high-speed data processing. In: 2011 IEEE 17th International Symposium on High Performance Computer Architecture, pp. 266–277 (2011)

  12. Kang, W.H., Lee, S.W., Moon, B., Kee, Y.S., Oh, M.: Durable write cache in flash memory SSD for relational and NoSQL databases. In: Proceedings of the 2014 ACM SIGMOD international conference on Management of data, pp. 529–540 (2014)

  13. González, J., Bjørling, M., Lee, S., Dong, C., Huang, Y.R.: Application-driven flash translation layers on open-channel SSDs. In: Proceedings of the 7th Non-Volatile Memories Workshop (NVMW), pp. 1–2 (2016)

  14. Yang, J., Pandurangan, R., Choi, C., Balakrishnan, V.: AutoStream: automatic stream management for multi-streamed SSDs. In: Proceedings of the 10th ACM International Systems and Storage Conference, p. 3 (2017)

  15. Rho, E., Joshi, K., Shin, S.U., Shetty, N.J., Hwang, J., Cho, S., Lee, D.D., Jeong, J.: FStream: managing flash streams in the file system. In: 16th USENIX Conference on File and Storage Technologies (FAST 18), pp. 257–264 (2018)

  16. Bjørling, M., González, J., Bonnet, P.: LightNVM: the Linux open-channel SSD subsystem. In: 15th USENIX Conference on File and Storage Technologies (FAST 17), pp. 359–374 (2017)

  17. Wang, P., Sun, G., Jiang, S., Ouyang, J., Lin, S., Zhang, C., Cong, J.: An efficient design and implementation of LSM-tree based key-value store on open-channel SSD. In: Proceedings of the Ninth European Conference on Computer Systems, p. 16 (2016)

  18. CNEX Labs. https://www.cnexlabs.com/

  19. User space I/O library for Open-Channel SSDs. http://lightnvm.io/liblightnvm/

  20. Samsung NVMe SSD 960 PRO. https://www.samsung.com/semiconductor/minisite/ssd/product/consumer/960pro/

  21. OpenChannelSSD/rocksdb repository. https://github.com/OpenChannelSSD/rocksdb

  22. RocksDB benchmarking tools. https://github.com/facebook/rocksdb/wiki/Benchmarking-tools

  23. Cooper, B.F., Silberstein, A., Tam, E., Ramakrishnan, R., Sears, R.: Benchmarking cloud serving systems with YCSB. In: Proceedings of the 1st ACM Symposium on Cloud Computing, pp. 143–154 (2010)

  24. Skourtis, D., Achlioptas, D., Watkins, N., Maltzahn, C., Brandt, S.: Flash on rails: consistent flash performance through redundancy. In: 2014 USENIX Annual Technical Conference (USENIX ATC 14), pp. 463–474 (2014)

  25. Lee, M., Kang, D.H., Lee, M., Eom, Y.I.: Improving read performance by isolating multiple queues in NVMe SSDs. In: Proceedings of the 11th International Conference on Ubiquitous Information Management and Communication, p. 36 (2017)

  26. Kang, J.U., Hyun, J., Maeng, H., Cho, S.: The multi-streamed solid-state drive. In: 6th USENIX Workshop on Hot Topics in Storage and File Systems (HotStorage 14) (2014)

  27. Bhimani, J., Mi, N., Yang, Z., Yang, J., Pandurangan, R., Choi, C., Balakrishnan, V.: FIOS: feature based I/O stream identification for improving endurance of multi-stream SSDs. In: 2018 IEEE 11th International Conference on Cloud Computing (CLOUD), pp. 17–24 (2018)

  28. Kim, J., Lee, K.: I/O resource isolation of public cloud serverless function runtimes for data-intensive applications. Clust. Comput., pp. 1–11 (2020)

  29. Li, D., Dong, M., Tang, Y., Ota, K.: A novel disk I/O scheduling framework of virtualized storage system. Clust. Comput. 22(1), 2395–2405 (2019)


  30. Pathak, A.R., Pandey, M., Rautaray, S.S.: Approaches of enhancing interoperations among high performance computing and big data analytics via augmentation. Clust. Comput., pp. 1–36 (2019)



Corresponding author

Correspondence to Yongseok Son.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

A preliminary version [1] of this article was presented at the 7th International Workshop on Autonomic Management of high performance Grid and Cloud Computing (AMGCC’19), Umeå, Sweden, Jun. 2019.


About this article


Cite this article

Kim, H., Yeom, H.Y. & Son, Y. Design and implementation of an I/O isolation scheme for key-value store on multiple solid-state drives. Cluster Comput 23, 2301–2313 (2020). https://doi.org/10.1007/s10586-020-03161-8

