
THRCache: DRAM-NVM Multi-level Cache with Thresholded Heterogeneous Random Choices

  • Conference paper
  • First Online:
Algorithms and Architectures for Parallel Processing (ICA3PP 2023)

Abstract

Caching is essential for accelerating data access, balancing storage cluster load, and improving quality of service. However, a single-node cache can become a bottleneck as the storage system scales up, so distributed caching has been proposed to provide caching services for large-scale storage clusters. Existing mechanisms based on cache partitioning or cache replication, however, may lead to load imbalance and high coherency overhead. We propose THRCache, a multi-level heterogeneous distributed cache mechanism that combines the speed of DRAM with the high capacity of NVM to keep more hot data readily accessible. THRCache allocates cache space to the different cache layers with independent hash functions, routes queries via thresholded heterogeneous random choices, and introduces a prefetching mechanism based on data-access correlation. We implement a prototype of THRCache and demonstrate through experiments that it achieves a higher cache hit rate and throughput than existing distributed cache architectures under different workloads.
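
To make the routing idea in the abstract concrete, the following Python sketch illustrates a thresholded heterogeneous random choice between a DRAM-layer candidate and an NVM-layer candidate selected by independent hash functions. The node names, load counters, and threshold value are illustrative assumptions for this sketch only, not THRCache's actual implementation or API.

import hashlib

# Hypothetical node lists and load counters for the two cache layers.
DRAM_NODES = ["dram-%d" % i for i in range(4)]   # fast, small-capacity layer
NVM_NODES = ["nvm-%d" % i for i in range(8)]     # slower, high-capacity layer
load = {n: 0 for n in DRAM_NODES + NVM_NODES}    # outstanding requests per node

def pick(key, nodes, salt):
    # Map a key to one node of a layer; the salt makes the two layers'
    # hash functions independent of each other.
    digest = hashlib.sha256((salt + key).encode()).digest()
    return nodes[int.from_bytes(digest[:8], "big") % len(nodes)]

def route(key, threshold=4):
    # Choose between the DRAM candidate and the NVM candidate.
    # Prefer the fast DRAM node unless its load exceeds the NVM
    # candidate's load by more than `threshold` outstanding requests.
    dram_node = pick(key, DRAM_NODES, "dram")
    nvm_node = pick(key, NVM_NODES, "nvm")
    if load[dram_node] - load[nvm_node] <= threshold:
        return dram_node
    return nvm_node

# Example: route a query and account for its load.
target = route("user:42")
load[target] += 1

Under these assumptions, the threshold biases traffic toward the faster DRAM layer while still spilling requests over to the high-capacity NVM layer when the DRAM candidate becomes a hotspot.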




Author information


Corresponding author

Correspondence to Zhiwen Xiao.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Tao, T., Xiao, Z., Wang, J., Shang, J., Wu, Z. (2024). THRCache: DRAM-NVM Multi-level Cache with Thresholded Heterogeneous Random Choices. In: Tari, Z., Li, K., Wu, H. (eds) Algorithms and Architectures for Parallel Processing. ICA3PP 2023. Lecture Notes in Computer Science, vol 14490. Springer, Singapore. https://doi.org/10.1007/978-981-97-0859-8_26


  • DOI: https://doi.org/10.1007/978-981-97-0859-8_26

  • Published:

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-0858-1

  • Online ISBN: 978-981-97-0859-8

  • eBook Packages: Computer Science; Computer Science (R0)
