
Dynamic Cache Partitioning for Enhancing Parallel I/O Performance in NVMe SSDs


Abstract:

Solid State Drive (SSD) cache, implemented as on-board shared DRAM memory, can significantly enhance I/O performance by caching frequently accessed data. Although SSD caching strategies for single I/O data flows have been extensively explored, studies on cache partitioning to optimize parallel I/O within an SSD are scarce. In this paper, we present a novel dynamic cache partitioning approach designed to improve the overall performance of multiple parallel I/O data flows by minimizing the performance degradation caused by cache pollution and resource contention. By dynamically adjusting the cache partition size of each data flow according to its cache sensitivity, our strategy seeks to determine the optimal cache partition sizes that maximize overall I/O throughput. We implemented the strategy in the SSD simulator MQSim and evaluated its performance using various synthetic and real-world workloads. Our experimental results indicate that our dynamic cache partitioning strategy achieves an overall throughput increase of up to 33.22% compared to shared cache methods and outperforms static cache partitioning strategies by up to 21.19%.
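
The abstract describes adjusting per-flow cache partition sizes in proportion to how sensitive each flow's throughput is to additional cache space. The sketch below is not the authors' implementation; it is a minimal illustration of a sensitivity-proportional partitioning heuristic, with the Flow class, the sensitivity metric, and all names chosen purely as assumptions for the example.

    # Hypothetical sketch of sensitivity-driven cache partitioning.
    # Not taken from the paper; names and the sensitivity metric are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Flow:
        name: str
        sensitivity: float  # estimated throughput gain per additional cache page

    def partition_cache(flows, total_pages, min_pages=16):
        """Give each flow a minimum share, then split the remaining pages
        in proportion to each flow's measured cache sensitivity."""
        alloc = {f.name: min_pages for f in flows}
        remaining = total_pages - min_pages * len(flows)
        total_sens = sum(f.sensitivity for f in flows) or 1.0
        for f in flows:
            alloc[f.name] += int(remaining * f.sensitivity / total_sens)
        return alloc

    if __name__ == "__main__":
        flows = [Flow("flow_A", 0.8), Flow("flow_B", 0.3), Flow("flow_C", 0.1)]
        print(partition_cache(flows, total_pages=4096))

In a dynamic scheme such as the one the paper evaluates in MQSim, the sensitivity estimates would be refreshed at runtime (for example, from recent hit-rate measurements) and the partition sizes recomputed periodically, rather than fixed once as in a static partitioning strategy.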
Date of Conference: 09-11 November 2024
Date Added to IEEE Xplore: 12 December 2024

Conference Location: Zhuhai, China
