Ziye Yang, Changpeng Liu, Yanbo Zhou, Xiaodong Liu, Gang Cao
{"title":"SPDK Vhost-NVMe:通过用户空间Vhost目标加速NVMe ssd上虚拟机的I/ o","authors":"Ziye Yang, Changpeng Liu, Yanbo Zhou, Xiaodong Liu, Gang Cao","doi":"10.1109/SC2.2018.00016","DOIUrl":null,"url":null,"abstract":"Nowadays, more and more NVMe SSDs (PCIe SSDs accessed by NVMe protocol) are deployed and virtualized by cloud providers to improve the I/O experience in virtual machines rent by tenants. Though the IOPS and latency for read and write on NVMe SSDs are greatly improved, it seems that the existing software cannot efficiently explore the abilities of those NVMe SSDs, and it is even worse on virtualized platform. There is long I/O stack for applications to access NVMe SSDs in guest VMs, and the overhead of which can be divided into three parts, i.e., (1) I/O execution on emulated NVMe device in guest operating system (OS); (2) Context switch (e.g., VM_Exit) and data movement overhead between guest OS and host OS; (3) I/O execution overhead in host OS on physical NVMe SSDs. To address the long I/O stack issue, we propose SPDK-vhost-NVMe, an I/O service target relying on user space NVMe drivers, which can collaborate with hypervisor to accelerate NVMe I/Os inside VMs. Generally our approach eliminates the unnecessary VM_Exit overhead and also shrinks the I/O execution stack in host OS. Leveraged by SPDK-vhost-NVMe, the performance of storage I/Os in guest OS can be improved. Compared with QEMU native NVMe emulation solution, the best solution SPDK-vhost NVMe has 6X improvement in IOPS and 70% reduction in latency for some read workloads generated by FIO. Also spdk-vhost-NVMe has 5X performance improvement with some db_benchmark test cases (e.g., random read) on RocksDB. Even compared with other optimized SPDK vhost-scsi and vhost-blk solutions, SPDK-vhost-NVMe is also competitive in per core performance aspect.","PeriodicalId":340244,"journal":{"name":"2018 IEEE 8th International Symposium on Cloud and Service Computing (SC2)","volume":"116 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"17","resultStr":"{\"title\":\"SPDK Vhost-NVMe: Accelerating I/Os in Virtual Machines on NVMe SSDs via User Space Vhost Target\",\"authors\":\"Ziye Yang, Changpeng Liu, Yanbo Zhou, Xiaodong Liu, Gang Cao\",\"doi\":\"10.1109/SC2.2018.00016\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Nowadays, more and more NVMe SSDs (PCIe SSDs accessed by NVMe protocol) are deployed and virtualized by cloud providers to improve the I/O experience in virtual machines rent by tenants. Though the IOPS and latency for read and write on NVMe SSDs are greatly improved, it seems that the existing software cannot efficiently explore the abilities of those NVMe SSDs, and it is even worse on virtualized platform. There is long I/O stack for applications to access NVMe SSDs in guest VMs, and the overhead of which can be divided into three parts, i.e., (1) I/O execution on emulated NVMe device in guest operating system (OS); (2) Context switch (e.g., VM_Exit) and data movement overhead between guest OS and host OS; (3) I/O execution overhead in host OS on physical NVMe SSDs. To address the long I/O stack issue, we propose SPDK-vhost-NVMe, an I/O service target relying on user space NVMe drivers, which can collaborate with hypervisor to accelerate NVMe I/Os inside VMs. Generally our approach eliminates the unnecessary VM_Exit overhead and also shrinks the I/O execution stack in host OS. 
Leveraged by SPDK-vhost-NVMe, the performance of storage I/Os in guest OS can be improved. Compared with QEMU native NVMe emulation solution, the best solution SPDK-vhost NVMe has 6X improvement in IOPS and 70% reduction in latency for some read workloads generated by FIO. Also spdk-vhost-NVMe has 5X performance improvement with some db_benchmark test cases (e.g., random read) on RocksDB. Even compared with other optimized SPDK vhost-scsi and vhost-blk solutions, SPDK-vhost-NVMe is also competitive in per core performance aspect.\",\"PeriodicalId\":340244,\"journal\":{\"name\":\"2018 IEEE 8th International Symposium on Cloud and Service Computing (SC2)\",\"volume\":\"116 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"17\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 IEEE 8th International Symposium on Cloud and Service Computing (SC2)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SC2.2018.00016\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE 8th International Symposium on Cloud and Service Computing (SC2)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SC2.2018.00016","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
SPDK Vhost-NVMe: Accelerating I/Os in Virtual Machines on NVMe SSDs via User Space Vhost Target
Nowadays, more and more NVMe SSDs (PCIe SSDs accessed through the NVMe protocol) are deployed and virtualized by cloud providers to improve the I/O experience in virtual machines rented by tenants. Although IOPS and latency for reads and writes on NVMe SSDs have improved greatly, existing software cannot efficiently exploit the capabilities of these devices, and the situation is even worse on virtualized platforms. Applications in guest VMs traverse a long I/O stack to reach NVMe SSDs, and its overhead can be divided into three parts: (1) I/O execution on the emulated NVMe device in the guest operating system (OS); (2) context-switch (e.g., VM_Exit) and data-movement overhead between the guest OS and the host OS; and (3) I/O execution overhead in the host OS on the physical NVMe SSDs. To address this long I/O stack, we propose SPDK-vhost-NVMe, an I/O service target built on user-space NVMe drivers that collaborates with the hypervisor to accelerate NVMe I/Os inside VMs. Our approach eliminates unnecessary VM_Exit overhead and also shrinks the I/O execution stack in the host OS. With SPDK-vhost-NVMe, the performance of storage I/Os in the guest OS is improved: compared with QEMU's native NVMe emulation, SPDK-vhost-NVMe achieves up to 6X higher IOPS and a 70% reduction in latency for some read workloads generated by FIO, and up to 5X higher performance on some db_bench test cases (e.g., random read) on RocksDB. Even compared with the optimized SPDK vhost-scsi and vhost-blk solutions, SPDK-vhost-NVMe remains competitive in per-core performance.
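To illustrate the mechanism the abstract describes, the following is a minimal C sketch (not SPDK's actual code) of how a user-space vhost target can avoid VM_Exits: instead of trapping the guest's doorbell writes, the host-side target polls the NVMe submission queue that lives in shared guest memory and forwards commands to the physical SSD through a user-space driver. All names here (shm_sq, shm_cq, backend_submit, poll_once) are illustrative assumptions, not the paper's or SPDK's API.

/*
 * Hypothetical sketch of a vhost-NVMe poll loop.
 * The guest driver advances *sq->tail in shared memory; the host-side
 * target notices the change by polling, so no doorbell trap (VM_Exit)
 * is needed on the I/O submission path.
 */
#include <stdint.h>

struct nvme_cmd { uint8_t opc; uint16_t cid; uint64_t prp1; uint64_t slba; uint16_t nlb; };
struct nvme_cpl { uint16_t cid; uint16_t status; };

struct shm_sq {                      /* submission queue mapped from guest RAM */
    struct nvme_cmd    *entries;
    volatile uint32_t  *tail;        /* written by the guest NVMe driver */
    uint32_t            head;        /* consumed by the vhost target */
    uint32_t            size;
};

struct shm_cq {                      /* completion queue mapped from guest RAM */
    struct nvme_cpl    *entries;
    uint32_t            tail;
    uint32_t            size;
};

/* Assumed hook into a user-space NVMe driver for the physical SSD. */
int backend_submit(const struct nvme_cmd *cmd, struct nvme_cpl *out);

/* One iteration of the poll loop: drain newly submitted guest commands. */
static void poll_once(struct shm_sq *sq, struct shm_cq *cq)
{
    while (sq->head != *sq->tail) {              /* guest advanced the tail */
        const struct nvme_cmd *cmd = &sq->entries[sq->head];
        struct nvme_cpl cpl = { .cid = cmd->cid, .status = 0 };

        backend_submit(cmd, &cpl);               /* I/O via the user-space driver */

        cq->entries[cq->tail] = cpl;             /* post completion for the guest */
        cq->tail = (cq->tail + 1) % cq->size;
        sq->head = (sq->head + 1) % sq->size;
    }
}

In this sketch the only shared state is the queue memory itself, which is why the approach removes both the VM_Exit on submission and the host kernel block layer from the data path; the real SPDK target adds interrupt/eventfd signaling for completions and runs the loop on dedicated poller threads.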