{"title":"DRAM Cache Management with Request Granularity for NAND-based SSDs","authors":"Haodong Lin, Zhibing Sha, Jun Li, Zhigang Cai, Balazs Gerofi, Yuanquan Shi, Jianwei Liao","doi":"10.1145/3545008.3545081","DOIUrl":null,"url":null,"abstract":"Most flash-based solid-state drives (SSDs) employ an on-board Dynamic Random Access Memory (DRAM) to cache hot data at the SSD page granularity. This can significantly reduce the number of flush operations to the underlying arrays of SSDs given that there is sufficient locality in the applications’ I/O access pattern. We observe, however, that in most I/O workloads over SSDs the buffered data of small sized requests are more likely to be re-accessed than those of larger requests, which also require more DRAM space for caching their data. To improve the efficiency of the DRAM cache inside SSDs, this paper presents a request granularity-based cache management scheme, called Req-block. The proposed mechanism manages cached data according to the size of write requests and supports multi-level linked lists for sifting the cached data blocks (termed as request blocks), by taking both their size and hotness into account. Comprehensive evaluation shows that our proposal improves cache hits by up to 90.5%, and decreases I/O latency by 14.3% on average, compared to existing state-of-the-art SSD cache management schemes.","PeriodicalId":360504,"journal":{"name":"Proceedings of the 51st International Conference on Parallel Processing","volume":"37 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 51st International Conference on Parallel Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3545008.3545081","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Most flash-based solid-state drives (SSDs) employ an on-board Dynamic Random Access Memory (DRAM) to cache hot data at the SSD page granularity. This can significantly reduce the number of flush operations to the underlying flash arrays of SSDs, provided there is sufficient locality in the applications' I/O access pattern. We observe, however, that in most I/O workloads over SSDs the buffered data of small-sized requests are more likely to be re-accessed than those of larger requests, which also require more DRAM space for caching their data. To improve the efficiency of the DRAM cache inside SSDs, this paper presents a request granularity-based cache management scheme, called Req-block. The proposed mechanism manages cached data according to the size of write requests and supports multi-level linked lists for sifting the cached data blocks (termed request blocks), taking both their size and hotness into account. Comprehensive evaluation shows that our proposal improves cache hits by up to 90.5% and decreases I/O latency by 14.3% on average, compared to existing state-of-the-art SSD cache management schemes.
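The abstract does not spell out the data structures behind Req-block, but the core idea (caching whole write requests and sifting them through per-size-class lists ordered by hotness) can be illustrated with a minimal sketch. Everything below beyond that idea is an assumption for illustration: the size-class boundaries, the class-local LRU ordering, and the eviction order (largest, coldest blocks first) are hypothetical choices, not the paper's actual policy.

```python
from collections import OrderedDict

# Assumed request-size classes, in SSD pages; the real boundaries are not given here.
SIZE_CLASS_BOUNDS = [4, 16, 64]


def size_class(num_pages: int) -> int:
    """Map a request size (in pages) to a size-class index; lower index = smaller request."""
    for idx, bound in enumerate(SIZE_CLASS_BOUNDS):
        if num_pages <= bound:
            return idx
    return len(SIZE_CLASS_BOUNDS)


class RequestBlockCache:
    """Sketch of a cache that manages whole write requests ("request blocks").

    Each size class keeps its own LRU-ordered list; a re-accessed block moves to
    the MRU end of its list (hotness). Eviction scans from the largest class
    first, on the assumption that small, hot blocks yield the most hits per
    byte of DRAM.
    """

    def __init__(self, capacity_pages: int):
        self.capacity_pages = capacity_pages
        self.used_pages = 0
        # One OrderedDict per size class: start_lba -> num_pages, ordered LRU -> MRU.
        self.levels = [OrderedDict() for _ in range(len(SIZE_CLASS_BOUNDS) + 1)]

    def access(self, start_lba: int, num_pages: int) -> bool:
        """Handle a write request; return True on a cache hit."""
        cls = size_class(num_pages)
        level = self.levels[cls]
        if start_lba in level:
            level.move_to_end(start_lba)      # refresh hotness within its class
            return True
        self._make_room(num_pages)
        level[start_lba] = num_pages          # insert the whole request block
        self.used_pages += num_pages
        return False

    def _make_room(self, needed_pages: int) -> None:
        """Evict cold blocks, preferring the largest size class, until the new block fits."""
        while self.used_pages + needed_pages > self.capacity_pages:
            for level in reversed(self.levels):               # largest requests first
                if level:
                    _, evicted_pages = level.popitem(last=False)  # coldest block in class
                    self.used_pages -= evicted_pages
                    break
            else:
                break  # nothing left to evict
```

The sketch only tracks block metadata and hit/miss behavior; a real on-device cache would also manage the buffered payload and the flush path to the flash arrays.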