SPDK Vhost-NVMe: Accelerating I/Os in Virtual Machines on NVMe SSDs via User Space Vhost Target

Ziye Yang, Changpeng Liu, Yanbo Zhou, Xiaodong Liu, Gang Cao
{"title":"SPDK Vhost-NVMe: Accelerating I/Os in Virtual Machines on NVMe SSDs via User Space Vhost Target","authors":"Ziye Yang, Changpeng Liu, Yanbo Zhou, Xiaodong Liu, Gang Cao","doi":"10.1109/SC2.2018.00016","DOIUrl":null,"url":null,"abstract":"Nowadays, more and more NVMe SSDs (PCIe SSDs accessed by NVMe protocol) are deployed and virtualized by cloud providers to improve the I/O experience in virtual machines rent by tenants. Though the IOPS and latency for read and write on NVMe SSDs are greatly improved, it seems that the existing software cannot efficiently explore the abilities of those NVMe SSDs, and it is even worse on virtualized platform. There is long I/O stack for applications to access NVMe SSDs in guest VMs, and the overhead of which can be divided into three parts, i.e., (1) I/O execution on emulated NVMe device in guest operating system (OS); (2) Context switch (e.g., VM_Exit) and data movement overhead between guest OS and host OS; (3) I/O execution overhead in host OS on physical NVMe SSDs. To address the long I/O stack issue, we propose SPDK-vhost-NVMe, an I/O service target relying on user space NVMe drivers, which can collaborate with hypervisor to accelerate NVMe I/Os inside VMs. Generally our approach eliminates the unnecessary VM_Exit overhead and also shrinks the I/O execution stack in host OS. Leveraged by SPDK-vhost-NVMe, the performance of storage I/Os in guest OS can be improved. Compared with QEMU native NVMe emulation solution, the best solution SPDK-vhost NVMe has 6X improvement in IOPS and 70% reduction in latency for some read workloads generated by FIO. Also spdk-vhost-NVMe has 5X performance improvement with some db_benchmark test cases (e.g., random read) on RocksDB. Even compared with other optimized SPDK vhost-scsi and vhost-blk solutions, SPDK-vhost-NVMe is also competitive in per core performance aspect.","PeriodicalId":340244,"journal":{"name":"2018 IEEE 8th International Symposium on Cloud and Service Computing (SC2)","volume":"116 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"17","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE 8th International Symposium on Cloud and Service Computing (SC2)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SC2.2018.00016","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 17

Abstract

Nowadays, more and more NVMe SSDs (PCIe SSDs accessed via the NVMe protocol) are deployed and virtualized by cloud providers to improve the I/O experience in virtual machines rented by tenants. Although the IOPS and latency of reads and writes on NVMe SSDs have improved greatly, existing software cannot efficiently exploit the capabilities of these NVMe SSDs, and the situation is even worse on virtualized platforms. Applications in guest VMs traverse a long I/O stack to access NVMe SSDs, whose overhead can be divided into three parts: (1) I/O execution on the emulated NVMe device in the guest operating system (OS); (2) context-switch (e.g., VM_Exit) and data-movement overhead between the guest OS and the host OS; and (3) I/O execution overhead in the host OS on the physical NVMe SSDs. To address the long I/O stack issue, we propose SPDK-vhost-NVMe, an I/O service target built on user-space NVMe drivers that collaborates with the hypervisor to accelerate NVMe I/Os inside VMs. Our approach eliminates unnecessary VM_Exit overhead and also shrinks the I/O execution stack in the host OS, improving storage I/O performance in the guest OS. Compared with the native QEMU NVMe emulation solution, SPDK-vhost-NVMe achieves a 6X improvement in IOPS and a 70% reduction in latency for some read workloads generated by FIO. It also delivers a 5X performance improvement on some db_benchmark test cases (e.g., random read) on RocksDB. Even compared with the other optimized SPDK vhost-scsi and vhost-blk solutions, SPDK-vhost-NVMe remains competitive in per-core performance.
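The central mechanism the abstract describes is replacing trap-and-emulate doorbell writes (which cause VM_Exits) with a user-space target that polls shared-memory submission queues. Below is a minimal, self-contained C sketch of that polling pattern under stated assumptions: the structure layout, ring size, and function names (struct sq_entry, struct shared_sq, sq_submit, sq_poll) are illustrative, not the SPDK or vhost-user-NVMe API.

```c
/*
 * Sketch of the polled submission-queue pattern: the "guest" posts
 * NVMe-like commands into a shared-memory ring and advances a tail
 * index; the "vhost target" discovers new entries by polling, so no
 * doorbell MMIO (and thus no VM_Exit) is required.
 *
 * All names are illustrative assumptions, not the SPDK API.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SQ_DEPTH 16u                 /* ring size */

struct sq_entry {                    /* simplified NVMe-style command */
    uint8_t  opcode;                 /* e.g., 0x02 = read, 0x01 = write */
    uint16_t cid;                    /* command identifier */
    uint64_t lba;                    /* starting logical block address */
    uint32_t nlb;                    /* number of logical blocks */
};

struct shared_sq {                   /* would live in guest/host shared memory */
    volatile uint32_t tail;          /* written by guest (producer) */
    volatile uint32_t head;          /* written by target (consumer) */
    struct sq_entry   ring[SQ_DEPTH];
};

/* Guest side: enqueue a command with plain memory writes; no trap to the host. */
static int sq_submit(struct shared_sq *sq, const struct sq_entry *cmd)
{
    uint32_t next = (sq->tail + 1) % SQ_DEPTH;
    if (next == sq->head)
        return -1;                   /* queue full */
    sq->ring[sq->tail] = *cmd;
    __sync_synchronize();            /* publish the entry before moving the tail */
    sq->tail = next;
    return 0;
}

/* Host side: poll for new entries instead of waiting for a doorbell exit. */
static unsigned sq_poll(struct shared_sq *sq)
{
    unsigned completed = 0;
    while (sq->head != sq->tail) {
        struct sq_entry *cmd = &sq->ring[sq->head];
        /* A real target would translate guest addresses and issue the I/O
         * through a user-space NVMe driver here. */
        printf("target: cid=%u opcode=0x%02x lba=%llu nlb=%u\n",
               cmd->cid, (unsigned)cmd->opcode,
               (unsigned long long)cmd->lba, cmd->nlb);
        sq->head = (sq->head + 1) % SQ_DEPTH;
        completed++;
    }
    return completed;
}

int main(void)
{
    struct shared_sq sq;
    memset(&sq, 0, sizeof(sq));

    struct sq_entry read_cmd  = { .opcode = 0x02, .cid = 1, .lba = 0,  .nlb = 8 };
    struct sq_entry write_cmd = { .opcode = 0x01, .cid = 2, .lba = 64, .nlb = 8 };

    sq_submit(&sq, &read_cmd);       /* "guest" posts two commands */
    sq_submit(&sq, &write_cmd);

    printf("processed %u commands\n", sq_poll(&sq));   /* "target" polls them */
    return 0;
}
```

In the real system the queues are shared with the target over the vhost-user protocol and the poll loop runs continuously on a dedicated host core, which is why the abstract compares solutions on per-core performance.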