Simmer: Rate proportional scheduling to reduce packet drops in vGPU based NF chains

A. Chaurasia, Anshuj Garg, B. Raman, Uday Kurkure, Hari Sivaraman, Lan Vu, S. Veeraswamy
{"title":"Simmer:基于vGPU的NF链的速率比例调度,减少丢包","authors":"A. Chaurasia, Anshuj Garg, B. Raman, Uday Kurkure, Hari Sivaraman, Lan Vu, S. Veeraswamy","doi":"10.1145/3545008.3545068","DOIUrl":null,"url":null,"abstract":"Network Function Virtualization (NFV) paradigm offers flexibility, cost benefits, and ease of deployment by decoupling network function from hardware middleboxes. The service function chains (SFC) deployed using the NFV platform require efficient sharing of resources among various network functions in the chain. Graphics Processing Units (GPUs) have been used to improve various network functions’ performance. However, sharing a single GPU among multiple virtualized network functions (virtual machines) in a service function chain has been challenging due to their proprietary hardware and software stack. Earlier GPU architectures had a limitation: a single physical GPU can only be allocated to one virtual machine (VM) and cannot be shared among multiple VMs. The newer GPUs are virtualization-aware (hardware-assisted virtualization) and allow multiple virtual machines to share a single physical GPU. Although virtualization-aware, these GPUs still lack support for custom scheduling policies and do not expose the preemption control to users. When network functions (hosted within virtual machines) with different processing requirements share the same GPU, virtualization-aware GPUs’ default round-robin scheduling mechanism proves to be inefficient, resulting in packet drops and lower throughput. This paper presents Simmer, an efficient mechanism for scheduling a network function service chain on virtualization-aware GPUs. Our scheduling solution considers the processing requirement of NFs in a GPU-based SFC, thus improving overall throughput by up to 29% and reducing the packet drop to zero compared to vanilla setup.","PeriodicalId":360504,"journal":{"name":"Proceedings of the 51st International Conference on Parallel Processing","volume":"291 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Simmer: Rate proportional scheduling to reduce packet drops in vGPU based NF chains\",\"authors\":\"A. Chaurasia, Anshuj Garg, B. Raman, Uday Kurkure, Hari Sivaraman, Lan Vu, S. Veeraswamy\",\"doi\":\"10.1145/3545008.3545068\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Network Function Virtualization (NFV) paradigm offers flexibility, cost benefits, and ease of deployment by decoupling network function from hardware middleboxes. The service function chains (SFC) deployed using the NFV platform require efficient sharing of resources among various network functions in the chain. Graphics Processing Units (GPUs) have been used to improve various network functions’ performance. However, sharing a single GPU among multiple virtualized network functions (virtual machines) in a service function chain has been challenging due to their proprietary hardware and software stack. Earlier GPU architectures had a limitation: a single physical GPU can only be allocated to one virtual machine (VM) and cannot be shared among multiple VMs. The newer GPUs are virtualization-aware (hardware-assisted virtualization) and allow multiple virtual machines to share a single physical GPU. Although virtualization-aware, these GPUs still lack support for custom scheduling policies and do not expose the preemption control to users. 
When network functions (hosted within virtual machines) with different processing requirements share the same GPU, virtualization-aware GPUs’ default round-robin scheduling mechanism proves to be inefficient, resulting in packet drops and lower throughput. This paper presents Simmer, an efficient mechanism for scheduling a network function service chain on virtualization-aware GPUs. Our scheduling solution considers the processing requirement of NFs in a GPU-based SFC, thus improving overall throughput by up to 29% and reducing the packet drop to zero compared to vanilla setup.\",\"PeriodicalId\":360504,\"journal\":{\"name\":\"Proceedings of the 51st International Conference on Parallel Processing\",\"volume\":\"291 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-08-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 51st International Conference on Parallel Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3545008.3545068\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 51st International Conference on Parallel Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3545008.3545068","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

The Network Function Virtualization (NFV) paradigm offers flexibility, cost benefits, and ease of deployment by decoupling network functions from hardware middleboxes. Service function chains (SFCs) deployed on an NFV platform require efficient sharing of resources among the various network functions in the chain. Graphics Processing Units (GPUs) have been used to improve the performance of various network functions. However, sharing a single GPU among multiple virtualized network functions (virtual machines) in a service function chain has been challenging due to their proprietary hardware and software stack. Earlier GPU architectures had a limitation: a single physical GPU could only be allocated to one virtual machine (VM) and could not be shared among multiple VMs. Newer GPUs are virtualization-aware (hardware-assisted virtualization) and allow multiple virtual machines to share a single physical GPU. Although virtualization-aware, these GPUs still lack support for custom scheduling policies and do not expose preemption control to users. When network functions (hosted within virtual machines) with different processing requirements share the same GPU, the default round-robin scheduling mechanism of virtualization-aware GPUs proves inefficient, resulting in packet drops and lower throughput. This paper presents Simmer, an efficient mechanism for scheduling a network function service chain on virtualization-aware GPUs. Our scheduling solution considers the processing requirements of the NFs in a GPU-based SFC, improving overall throughput by up to 29% and reducing packet drops to zero compared to a vanilla setup.
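To make the contrast between default round-robin scheduling and rate-proportional scheduling concrete, the sketch below computes GPU time-slice shares for a small service function chain under both policies. It is a minimal illustration of the idea only, not the paper's implementation; the NF names and per-batch processing costs are hypothetical.

```python
# Minimal sketch of the intuition behind rate-proportional vGPU scheduling
# (not Simmer's actual mechanism): each NF in the chain gets a GPU time-slice
# share proportional to its processing demand, instead of the equal share that
# default round-robin scheduling gives. All names and numbers are illustrative.

from dataclasses import dataclass

@dataclass
class NF:
    name: str
    us_per_batch: float  # hypothetical GPU time needed per packet batch (microseconds)

def round_robin_shares(chain):
    # Default vGPU behaviour: every VM/NF gets an equal slice of GPU time.
    return {nf.name: 1.0 / len(chain) for nf in chain}

def rate_proportional_shares(chain):
    # Rate-proportional idea: weight each NF by its processing requirement so
    # that heavier NFs are not starved (starvation is what causes packet drops).
    total = sum(nf.us_per_batch for nf in chain)
    return {nf.name: nf.us_per_batch / total for nf in chain}

if __name__ == "__main__":
    # Hypothetical three-NF service function chain with uneven GPU demand.
    chain = [NF("firewall", 50.0), NF("ids", 200.0), NF("nat", 50.0)]
    print("round-robin       :", round_robin_shares(chain))
    print("rate-proportional :", rate_proportional_shares(chain))
```

Under equal round-robin shares, the heavier NF (here the hypothetical IDS) receives the same GPU time as the lighter NFs and becomes the bottleneck at which packets queue and drop; weighting shares by processing demand removes that imbalance, which is the effect the paper's rate-proportional scheduling targets.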