{"title":"LAC:面向高性能固态硬盘的工作负载强度感知缓存方案","authors":"Hui Sun;Haoqiang Tong;Yinliang Yue;Xiao Qin","doi":"10.1109/TC.2024.3385290","DOIUrl":null,"url":null,"abstract":"Inside an NAND Flash-based solid-state disk (SSD), utilizing DRAM-based write-back caching is a practical approach to bolstering the SSD performance. Existing caching schemes overlook the problem of high user I/Os intensity due to the dramatic increment of I/Os accesses. The hefty I/O intensity causes access conflict of I/O requests inside an SSD: a large number of requests are blocked to impair response time. Conventional passive update caching schemes merely replace pages upon access misses in event of full cache. Tail latency occurs facing a colossal I/O intensity. Active write-back caching schemes utilize idle time among requests coupled with free internal bandwidth to flush dirty data into flash memory in advance, lowering response time. Frequent active write-back operations, however, cause access conflict of requests – a culprit that expands write amplification (WA) and degrades SSD lifetime. We address the above issues by proposing a \n<italic>work<b>L</b></i>\noad intensity-aware and \n<bold><i>A</i></b>\nctive parallel \n<bold><i>Caching</i></b>\n scheme - LAC - that is powered by collaborative-load awareness. LAC fends off user I/Os’ access conflict under high-I/O-intensity workloads. If the I/O intensity is low – intervals between consecutive I/O requests are large – and the target die is free, LAC actively and concurrently writes dirty data of adjacent addresses back to the die, cultivating clean data generated by the active write-back. Replacing clean data in priority can reduce response time and prevent flash transactions from being blocked. We devise a data protection method to write back cold data based on various criteria in the cache replacement and active write-backs. Thus, LAC reduces WA incurred by actively writing back hot data and extends SSD lifetime. We compare LAC against the six caching schemes (LRU, CFLRU, GCaR-LRU, MQSim, VS-Batch, and Co-Active) in the modern MQSim simulator. The results unveil that LAC trims response time and erase count by up to 78.5% and 47.8%, with an average of 64.4% and 16.6%, respectively.","PeriodicalId":13087,"journal":{"name":"IEEE Transactions on Computers","volume":"73 7","pages":"1738-1752"},"PeriodicalIF":3.6000,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"LAC: A Workload Intensity-Aware Caching Scheme for High-Performance SSDs\",\"authors\":\"Hui Sun;Haoqiang Tong;Yinliang Yue;Xiao Qin\",\"doi\":\"10.1109/TC.2024.3385290\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Inside an NAND Flash-based solid-state disk (SSD), utilizing DRAM-based write-back caching is a practical approach to bolstering the SSD performance. Existing caching schemes overlook the problem of high user I/Os intensity due to the dramatic increment of I/Os accesses. The hefty I/O intensity causes access conflict of I/O requests inside an SSD: a large number of requests are blocked to impair response time. Conventional passive update caching schemes merely replace pages upon access misses in event of full cache. Tail latency occurs facing a colossal I/O intensity. Active write-back caching schemes utilize idle time among requests coupled with free internal bandwidth to flush dirty data into flash memory in advance, lowering response time. 
Frequent active write-back operations, however, cause access conflict of requests – a culprit that expands write amplification (WA) and degrades SSD lifetime. We address the above issues by proposing a \\n<italic>work<b>L</b></i>\\noad intensity-aware and \\n<bold><i>A</i></b>\\nctive parallel \\n<bold><i>Caching</i></b>\\n scheme - LAC - that is powered by collaborative-load awareness. LAC fends off user I/Os’ access conflict under high-I/O-intensity workloads. If the I/O intensity is low – intervals between consecutive I/O requests are large – and the target die is free, LAC actively and concurrently writes dirty data of adjacent addresses back to the die, cultivating clean data generated by the active write-back. Replacing clean data in priority can reduce response time and prevent flash transactions from being blocked. We devise a data protection method to write back cold data based on various criteria in the cache replacement and active write-backs. Thus, LAC reduces WA incurred by actively writing back hot data and extends SSD lifetime. We compare LAC against the six caching schemes (LRU, CFLRU, GCaR-LRU, MQSim, VS-Batch, and Co-Active) in the modern MQSim simulator. The results unveil that LAC trims response time and erase count by up to 78.5% and 47.8%, with an average of 64.4% and 16.6%, respectively.\",\"PeriodicalId\":13087,\"journal\":{\"name\":\"IEEE Transactions on Computers\",\"volume\":\"73 7\",\"pages\":\"1738-1752\"},\"PeriodicalIF\":3.6000,\"publicationDate\":\"2024-04-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Computers\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10492468/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Computers","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10492468/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
LAC: A Workload Intensity-Aware Caching Scheme for High-Performance SSDs
Abstract: Inside a NAND flash-based solid-state disk (SSD), DRAM-based write-back caching is a practical way to bolster performance. Existing caching schemes overlook the problem of high user I/O intensity caused by the dramatic growth in I/O accesses. Heavy I/O intensity triggers access conflicts among I/O requests inside the SSD: large numbers of requests are blocked, impairing response time. Conventional passive-update caching schemes replace pages only upon access misses when the cache is full, so tail latency arises under intense I/O. Active write-back caching schemes exploit idle time between requests, together with free internal bandwidth, to flush dirty data to flash memory in advance, lowering response time. Frequent active write-backs, however, themselves cause request access conflicts, a culprit that inflates write amplification (WA) and shortens SSD lifetime. We address these issues by proposing a workLoad intensity-aware and Active parallel Caching scheme, LAC, powered by collaborative load awareness. LAC fends off access conflicts among user I/Os under high-intensity workloads. When I/O intensity is low (intervals between consecutive I/O requests are large) and the target die is free, LAC actively and concurrently writes dirty data at adjacent addresses back to that die, accumulating clean data produced by the active write-back. Preferentially evicting clean data reduces response time and prevents flash transactions from being blocked. We also devise a data-protection method that writes back only cold data, applying different criteria during cache replacement and active write-back; LAC thereby avoids the WA incurred by actively writing back hot data and extends SSD lifetime. We compare LAC against six caching schemes (LRU, CFLRU, GCaR-LRU, MQSim, VS-Batch, and Co-Active) in the modern MQSim simulator. The results show that LAC reduces response time and erase count by up to 78.5% and 47.8%, and by 64.4% and 16.6% on average, respectively.
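The abstract describes LAC's two cooperating policies (intensity-aware active write-back during idle periods, and clean-first victim selection on replacement) but gives no pseudocode. The sketch below is an illustrative reconstruction based only on the abstract, not the authors' implementation: the class and method names, the flash-interface calls (die_is_free, program), and the thresholds are all hypothetical, and the handling of "adjacent addresses" and hot/cold separation is simplified.

```python
# Illustrative sketch of LAC-style caching, assuming a hypothetical flash
# back-end object that exposes die_is_free(lpn) and program(lpn).
from collections import OrderedDict

IDLE_THRESHOLD_US = 500    # assumed gap that signals low I/O intensity
ACTIVE_FLUSH_BATCH = 4     # assumed number of adjacent dirty pages flushed per idle period
HOT_ACCESS_COUNT = 2       # assumed threshold separating hot from cold pages

class CacheLine:
    def __init__(self, lpn):
        self.lpn = lpn         # logical page number
        self.dirty = True      # written by the host, not yet programmed to flash
        self.accesses = 1      # re-write count, used to keep hot data cached

class LACCache:
    def __init__(self, capacity, flash):
        self.capacity = capacity
        self.lines = OrderedDict()   # LRU order: oldest entry first
        self.flash = flash           # hypothetical flash back-end

    def on_host_write(self, lpn):
        """Insert or update a page in the write-back cache."""
        if lpn in self.lines:
            line = self.lines.pop(lpn)
            line.dirty = True
            line.accesses += 1
        else:
            if len(self.lines) >= self.capacity:
                self._evict()
            line = CacheLine(lpn)
        self.lines[lpn] = line       # move to MRU position

    def on_idle(self, gap_us):
        """Active write-back: only when the host is quiet and the target die is free."""
        if gap_us < IDLE_THRESHOLD_US:
            return
        # Pick cold dirty pages (rarely re-written) so active flushes are not wasted WA.
        cold_dirty = [l for l in self.lines.values()
                      if l.dirty and l.accesses < HOT_ACCESS_COUNT]
        cold_dirty.sort(key=lambda l: l.lpn)          # group adjacent addresses
        for line in cold_dirty[:ACTIVE_FLUSH_BATCH]:
            if self.flash.die_is_free(line.lpn):
                self.flash.program(line.lpn)
                line.dirty = False                    # now clean: cheap to evict later

    def _evict(self):
        # Prefer a clean victim so a miss never waits on a flash program.
        victim = next((lpn for lpn, ln in self.lines.items() if not ln.dirty), None)
        if victim is not None:
            del self.lines[victim]
            return
        # No clean page available: fall back to flushing the LRU dirty page.
        lpn, _ = self.lines.popitem(last=False)
        self.flash.program(lpn)
```

A driver loop would call on_host_write() for each incoming write and on_idle() with the measured inter-arrival gap. The paper's scheme additionally exploits die- and channel-level parallelism and applies separate cold-data criteria for replacement versus active write-back, which this single-die sketch only approximates.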
Journal Introduction:
The IEEE Transactions on Computers is a monthly publication with a wide distribution to researchers, developers, technical managers, and educators in the computer field. It publishes papers on research in areas of current interest to the readers. These areas include, but are not limited to, the following: a) computer organizations and architectures; b) operating systems, software systems, and communication protocols; c) real-time systems and embedded systems; d) digital devices, computer components, and interconnection networks; e) specification, design, prototyping, and testing methods and tools; f) performance, fault tolerance, reliability, security, and testability; g) case studies and experimental and theoretical evaluations; and h) new and important applications and trends.