Learning-based Incast Performance Inference in Software-Defined Data Centers

Kokouvi Bénoît Nougnanke, Y. Labit, M. Bruyère, Simone Ferlin Oliveira, U. Aïvodji
{"title":"Learning-based Incast Performance Inference in Software-Defined Data Centers","authors":"Kokouvi Bénoît Nougnanke, Y. Labit, M. Bruyère, Simone Ferlin Oliveira, U. Aïvodji","doi":"10.1109/ICIN51074.2021.9385546","DOIUrl":null,"url":null,"abstract":"Incast traffic is a many-to-one communication pattern used in many applications, including distributed storage, web-search with partition/aggregation design pattern, and MapReduce, commonly in data centers. It is generally composed of short-lived flows that may be queued behind large flows’ packets in congested switches where performance degradation is observed. Smart buffering at the switch level is sensed to mitigate this issue by automatically and dynamically adapting to traffic conditions changes in the highly dynamic data center environment. But for this dynamic and smart butter management to become effectively beneficial for all the traffic, and especially for incast the most critical one, incast performance models that provide insights on how various factors affect it are needed. The literature lacks these types of models. The existing ones are analytical models, which are either tightly coupled with a particular protocol version or specific to certain empirical data. Motivated by this observation, we propose a machine-learning-based incast performance inference. With this prediction capability, smart buffering scheme or other QoS optimization algorithms could anticipate and efficiently optimize system parameters adjustment to achieve optimal performance. Since applying machine learning to networks managed in a distributed fashion is hard, the prediction mechanism will be deployed on an SDN control plane. We could then take advantage of SDN’s centralized global view, its telemetry capabilities, and its management flexibility.","PeriodicalId":347933,"journal":{"name":"2021 24th Conference on Innovation in Clouds, Internet and Networks and Workshops (ICIN)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 24th Conference on Innovation in Clouds, Internet and Networks and Workshops (ICIN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICIN51074.2021.9385546","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

Incast traffic is a many-to-one communication pattern used in many applications, including distributed storage, web search with the partition/aggregate design pattern, and MapReduce, and it is common in data centers. It is generally composed of short-lived flows that may be queued behind large flows’ packets in congested switches, where performance degradation is observed. Smart buffering at the switch level is expected to mitigate this issue by automatically and dynamically adapting to changing traffic conditions in the highly dynamic data center environment. But for this dynamic and smart buffer management to effectively benefit all traffic, and especially incast, the most critical kind, incast performance models that provide insight into how various factors affect it are needed. The literature lacks these types of models: the existing ones are analytical models that are either tightly coupled to a particular protocol version or specific to certain empirical data. Motivated by this observation, we propose machine-learning-based incast performance inference. With this prediction capability, smart buffering schemes or other QoS optimization algorithms could anticipate and efficiently adjust system parameters to achieve optimal performance. Since applying machine learning to networks managed in a distributed fashion is hard, the prediction mechanism is deployed on an SDN control plane. We can then take advantage of SDN’s centralized global view, its telemetry capabilities, and its management flexibility.
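To make the idea of learning-based incast performance inference concrete, the sketch below shows what such a predictor could look like on the controller side: a supervised regression model mapping telemetry-derived features to an incast performance metric (here, completion time). This is a minimal illustration, not the authors' implementation; the feature set (number of incast senders, per-flow size, switch buffer size, background load), the choice of a random forest, and the synthetic training labels are all assumptions made for the example.

```python
# Hedged sketch: regression-based incast performance inference from SDN telemetry
# features. Feature names, model choice, and the synthetic dataset are illustrative
# assumptions; in the paper's setting, labels would come from measurements.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical telemetry features collected at the control plane.
senders = rng.integers(2, 128, size=n)          # number of incast senders
flow_size_kb = rng.uniform(16, 256, size=n)     # per-flow size (KB)
buffer_kb = rng.uniform(64, 1024, size=n)       # bottleneck switch buffer (KB)
bg_load = rng.uniform(0.0, 0.9, size=n)         # background large-flow load
X = np.column_stack([senders, flow_size_kb, buffer_kb, bg_load])

# Synthetic stand-in for the measured incast completion time (ms): grows with
# aggregate incast volume and background load, with a penalty when the burst
# exceeds the buffer. Real labels would be measured, not generated.
y = (senders * flow_size_kb) / (100.0 * (1.0 - bg_load)) \
    + 50.0 * np.maximum(0.0, senders * flow_size_kb - buffer_kb) / buffer_kb \
    + rng.normal(0.0, 1.0, size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print(f"MAE on held-out samples: {mean_absolute_error(y_test, pred):.2f} ms")
```

A buffering or QoS optimization algorithm could then query such a model with candidate parameter settings (e.g., different buffer allocations) and pick the one with the best predicted incast performance, which is the anticipation capability the abstract describes.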