Abstract:
Since cloud service providers adopt a sharing mode to improve the utilization of solid-state drives (SSDs) and reduce management costs, fairness is a critical design consideration and has drawn great interest in recent years. Many methods achieve fairness at the SSD device level, including cache-based resource allocation and queue rescheduling in the transaction scheduling unit (TSU). However, the poor locality of data in cloud environments renders these cache-level fairness methods ineffective. In addition, existing fairness approaches at the TSU level treat flash memory as a black box and ignore some of its characteristics, leaving room for performance improvement. In this work, we propose a novel collaborative SSD fairness scheme, named the coordinated SSD cache and TSU fairness scheme (CoFS), that achieves end-to-end full-path fairness at the device level. A reinforcement learning-assisted fairness management scheme is designed to coordinate the SSD cache and the TSU, considering both cache space and bandwidth resources, which are significant for fairness control. The key idea is to enable the front-end SSD cache to achieve workload-level fairness by recognizing workload patterns, while the back-end TSU achieves flash-level fairness by sensing the SSD's internal status; CoFS then coordinates the two to achieve SSD device-level fairness. In addition, we design a flexible reward-function mechanism in the cache to balance different optimization objectives and augment TSU queue scheduling to adapt to different types of SSDs. Experimental results show that CoFS improves overall fairness by 30.8% to 56.7% and performance by 42.1% to 98.7% across different scenarios.
Published in: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems ( Volume: 43, Issue: 12, December 2024)