FlexBSO: Flexible Block Storage Offload for Datacenters

Vojtech Aschenbrenner, John Shawger, Sadman Sakib
{"title":"FlexBSO: Flexible Block Storage Offload for Datacenters","authors":"Vojtech Aschenbrenner, John Shawger, Sadman Sakib","doi":"arxiv-2409.02381","DOIUrl":null,"url":null,"abstract":"Efficient virtualization of CPU and memory is standardized and mature.\nCapabilities such as Intel VT-x [3] have been added by manufacturers for\nefficient hypervisor support. In contrast, virtualization of a block device and\nits presentation to the virtual machines on the host can be done in multiple\nways. Indeed, hyperscalers develop in-house solutions to improve performance\nand cost-efficiency of their storage solutions for datacenters. Unfortunately,\nthese storage solutions are based on specialized hardware and software which\nare not publicly available. The traditional solution is to expose virtual block\ndevice to the VM through a paravirtualized driver like virtio [2]. virtio\nprovides significantly better performance than real block device driver\nemulation because of host OS and guest OS cooperation. The IO requests are then\nfulfilled by the host OS either with a local block device such as an SSD drive\nor with some form of disaggregated storage over the network like NVMe-oF or\niSCSI. There are three main problems to the traditional solution. 1) Cost. IO\noperations consume host CPU cycles due to host OS involvement. These CPU cycles\nare doing useless work from the application point of view. 2) Inflexibility.\nAny change of the virtualized storage stack requires host OS and/or guest OS\ncooperation and cannot be done silently in production. 3) Performance. IO\noperations are causing recurring VM EXITs to do the transition from non-root\nmode to root mode on the host CPU. This results into excessive IO performance\nimpact. We propose FlexBSO, a hardware-assisted solution, which solves all the\nmentioned issues. Our prototype is based on the publicly available Bluefield-2\nSmartNIC with NVIDIA SNAP support, hence can be deployed without any obstacles.","PeriodicalId":501333,"journal":{"name":"arXiv - CS - Operating Systems","volume":"19 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Operating Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.02381","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Efficient virtualization of CPU and memory is standardized and mature. Capabilities such as Intel VT-x [3] have been added by manufacturers for efficient hypervisor support. In contrast, virtualization of a block device and its presentation to the virtual machines on the host can be done in multiple ways. Indeed, hyperscalers develop in-house solutions to improve the performance and cost-efficiency of their datacenter storage. Unfortunately, these storage solutions are based on specialized hardware and software that are not publicly available. The traditional solution is to expose a virtual block device to the VM through a paravirtualized driver such as virtio [2]. virtio provides significantly better performance than emulation of a real block device because the host OS and guest OS cooperate. The IO requests are then fulfilled by the host OS, either with a local block device such as an SSD or with some form of disaggregated storage over the network, such as NVMe-oF or iSCSI. There are three main problems with the traditional solution. 1) Cost: IO operations consume host CPU cycles due to host OS involvement, and from the application's point of view these cycles are wasted. 2) Inflexibility: any change to the virtualized storage stack requires host OS and/or guest OS cooperation and cannot be rolled out silently in production. 3) Performance: IO operations cause recurring VM exits to transition the host CPU from non-root mode to root mode, which severely impacts IO performance. We propose FlexBSO, a hardware-assisted solution that addresses all of these issues. Our prototype is based on the publicly available Bluefield-2 SmartNIC with NVIDIA SNAP support and can therefore be deployed without obstacles.
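To make the contrast in the abstract concrete, the sketch below shows two ways a guest can be given a data disk: the traditional virtio-blk path, where the host OS services every request, and a hardware-offloaded path in which a SmartNIC-emulated NVMe device is handed to the guest via VFIO passthrough so the host data path is bypassed. This is an illustrative sketch, not the paper's implementation; the QEMU flags are standard, but the image paths and the PCI address of the SNAP-emulated device are placeholders.

```python
#!/usr/bin/env python3
"""Illustrative sketch (not from the paper): two ways a VM can see a block device.

Path A is the traditional virtio-blk setup described in the abstract, where the
host OS fulfills every IO request. Path B approximates a hardware-offloaded
path: a SmartNIC (e.g. a Bluefield-2 running NVIDIA SNAP) presents an emulated
NVMe device on the PCIe bus, which is passed to the guest with VFIO so the
host CPU stays out of the data path. Paths and PCI addresses are placeholders.
"""

import shlex

GUEST_IMAGE = "guest.qcow2"        # hypothetical guest boot image
DATA_IMAGE = "data.raw"            # hypothetical data disk served by the host OS
SNAP_NVME_BDF = "0000:b1:00.2"     # hypothetical PCI address of the SmartNIC-emulated NVMe device

COMMON = [
    "qemu-system-x86_64",
    "-enable-kvm",
    "-m", "4G",
    "-drive", f"file={GUEST_IMAGE},if=virtio",   # boot disk
]

# Path A: paravirtualized virtio-blk; each request is handled by the host OS.
virtio_path = COMMON + [
    "-drive", f"file={DATA_IMAGE},format=raw,if=none,id=data0",
    "-device", "virtio-blk-pci,drive=data0",
]

# Path B: VFIO passthrough of the NVMe device emulated by the SmartNIC;
# the guest uses its stock NVMe driver and the host data path is bypassed.
offload_path = COMMON + [
    "-device", f"vfio-pci,host={SNAP_NVME_BDF}",
]

if __name__ == "__main__":
    print("Traditional virtio path:\n  " + shlex.join(virtio_path))
    print("Hardware-offloaded path:\n  " + shlex.join(offload_path))
```

A key property of the offloaded path is that the guest and host need no special drivers or coordination: the device looks like ordinary NVMe hardware, which is what allows the storage stack behind it to change without guest or host involvement.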