Improving I/O Performance and Fairness in NVMe SSDs With Pooling Portions of Cache Partitions

IF 2.9 | Region 3 (Computer Science) | Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE
Jiaojiao Wu;Li Cai;Zhigang Cai;Fengxiang Zhang;Jianwei Liao
{"title":"提高缓存分区池化NVMe ssd的I/O性能和公平性","authors":"Jiaojiao Wu;Li Cai;Zhigang Cai;Fengxiang Zhang;Jianwei Liao","doi":"10.1109/TCAD.2025.3553778","DOIUrl":null,"url":null,"abstract":"Nonvolatile memory express (NVMe) solid-state drives (SSDs) have become mainstream storage devices in today’s computing systems, due to their high throughput and ultralow latency. It has been observed that the impact of interference among all concurrently running streams (i.e., I/O workloads) on their overall responsiveness differs significantly in multistream SSDs, resulting in unfairness. This article proposes a cache division management scheme built on top of the evenly partition scheme for NVMe SSDs, to enhance I/O responsiveness without consciously sacrificing fairness. To this end, we first build a mathematical model to directly cut portions from the Local cache partitions allocated to concurrently running streams, considering their run-time performance measures. Then, our approach pools these portions together for the use of all streams. As a result, each stream has its corresponding Local cache space for ensuring fairness, meanwhile the pooled Global cache space is shared by all streams for enhancing I/O responsiveness. Trace-driven simulation experiments demonstrate that our proposal reduces the overall I/O latency by up to <monospace>24.4</monospace>%, and improve the measure of fairness by <inline-formula> <tex-math>$\\mathtt{2.5}\\times $ </tex-math></inline-formula> on average, in contrast to existing cache management schemes for NVMe SSDs.","PeriodicalId":13251,"journal":{"name":"IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems","volume":"44 10","pages":"3710-3723"},"PeriodicalIF":2.9000,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Improving I/O Performance and Fairness in NVMe SSDs With Pooling Portions of Cache Partitions\",\"authors\":\"Jiaojiao Wu;Li Cai;Zhigang Cai;Fengxiang Zhang;Jianwei Liao\",\"doi\":\"10.1109/TCAD.2025.3553778\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Nonvolatile memory express (NVMe) solid-state drives (SSDs) have become mainstream storage devices in today’s computing systems, due to their high throughput and ultralow latency. It has been observed that the impact of interference among all concurrently running streams (i.e., I/O workloads) on their overall responsiveness differs significantly in multistream SSDs, resulting in unfairness. This article proposes a cache division management scheme built on top of the evenly partition scheme for NVMe SSDs, to enhance I/O responsiveness without consciously sacrificing fairness. To this end, we first build a mathematical model to directly cut portions from the Local cache partitions allocated to concurrently running streams, considering their run-time performance measures. Then, our approach pools these portions together for the use of all streams. As a result, each stream has its corresponding Local cache space for ensuring fairness, meanwhile the pooled Global cache space is shared by all streams for enhancing I/O responsiveness. 
Trace-driven simulation experiments demonstrate that our proposal reduces the overall I/O latency by up to <monospace>24.4</monospace>%, and improve the measure of fairness by <inline-formula> <tex-math>$\\\\mathtt{2.5}\\\\times $ </tex-math></inline-formula> on average, in contrast to existing cache management schemes for NVMe SSDs.\",\"PeriodicalId\":13251,\"journal\":{\"name\":\"IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems\",\"volume\":\"44 10\",\"pages\":\"3710-3723\"},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2025-03-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10937118/\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10937118/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

Nonvolatile memory express (NVMe) solid-state drives (SSDs) have become mainstream storage devices in today’s computing systems, due to their high throughput and ultralow latency. It has been observed that the impact of interference among all concurrently running streams (i.e., I/O workloads) on their overall responsiveness differs significantly in multistream SSDs, resulting in unfairness. This article proposes a cache division management scheme built on top of the even partition scheme for NVMe SSDs, to enhance I/O responsiveness without sacrificing fairness. To this end, we first build a mathematical model to directly cut portions from the Local cache partitions allocated to concurrently running streams, considering their run-time performance measures. Then, our approach pools these portions together for use by all streams. As a result, each stream retains its own Local cache space to ensure fairness, while the pooled Global cache space is shared by all streams to enhance I/O responsiveness. Trace-driven simulation experiments demonstrate that our proposal reduces the overall I/O latency by up to 24.4%, and improves the measure of fairness by $\mathtt{2.5}\times$ on average, compared with existing cache management schemes for NVMe SSDs.
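The minimal Python sketch below illustrates the pooling idea described in the abstract: each stream keeps a guaranteed Local partition, and a portion cut from every partition is collected into a shared Global pool. It is not the authors' implementation; the cut-ratio heuristic, the use of hit ratio as the run-time measure, and all names (StreamStats, pool_cache_portions, max_cut_ratio) are illustrative assumptions, whereas the paper derives the cut portions from a mathematical model of run-time performance.

```python
# Illustrative sketch only (not the paper's code): cut a portion of each
# stream's Local cache partition and pool the cut pages into a Global cache
# shared by all streams. The weighting below is a placeholder heuristic.

from dataclasses import dataclass

@dataclass
class StreamStats:
    stream_id: int
    local_quota: int    # cache pages given to this stream by the even partition
    hit_ratio: float    # run-time measure observed for this stream (0..1)

def pool_cache_portions(streams, max_cut_ratio=0.5):
    """Return (remaining Local quotas per stream, size of the Global pool).

    Assumption for illustration: streams with a higher hit ratio donate a
    larger portion of their Local partition to the Global pool.
    """
    local_quotas = {}
    global_pool = 0
    for s in streams:
        cut_ratio = max_cut_ratio * s.hit_ratio            # placeholder model
        cut_pages = int(s.local_quota * cut_ratio)
        local_quotas[s.stream_id] = s.local_quota - cut_pages  # fairness guarantee
        global_pool += cut_pages                           # shared by all streams
    return local_quotas, global_pool

if __name__ == "__main__":
    streams = [
        StreamStats(0, local_quota=1024, hit_ratio=0.8),
        StreamStats(1, local_quota=1024, hit_ratio=0.3),
    ]
    local, pooled = pool_cache_portions(streams)
    print(local, pooled)   # {0: 615, 1: 871} and 562 pages pooled globally
```

The split mirrors the fairness/performance trade-off stated above: the per-stream Local quotas bound how much interference any one stream can cause, while the pooled Global pages can absorb bursts from whichever stream currently benefits most.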
Source Journal
CiteScore: 5.60
Self-citation rate: 13.80%
Articles published: 500
Review time: 7 months
Journal introduction: The purpose of this Transactions is to publish papers of interest to individuals in the area of computer-aided design of integrated circuits and systems composed of analog, digital, mixed-signal, optical, or microwave components. The aids include methods, models, algorithms, and man-machine interfaces for system-level, physical and logical design including: planning, synthesis, partitioning, modeling, simulation, layout, verification, testing, hardware-software co-design and documentation of integrated circuit and system designs of all complexities. Design tools and techniques for evaluating and designing integrated circuits and systems for metrics such as performance, power, reliability, testability, and security are a focus.