Topology Design for Data Center Networks Using Deep Reinforcement Learning

Haoran Qi, Zhan Shu, Xiaomin Chen
{"title":"Topology Design for Data Center Networks Using Deep Reinforcement Learning","authors":"Haoran Qi, Zhan Shu, Xiaomin Chen","doi":"10.1109/ICOIN56518.2023.10048955","DOIUrl":null,"url":null,"abstract":"This paper is concerned with the topology design of data center networks (DCNs) for low latency and fewer links using deep reinforcement learning (DRL). Starting from a K-vertex-connected graph, we propose an interactive framework with single-objective and multi-objective DRL agents to learn DCN topologies for given node traffic matrices by choosing link matrices to represent the states and actions as well as using the average shortest path length together with action penalty terms as reward feedback. Comparisons with commonly used DCN topologies are given to show the effectiveness and merits of our method. The results reveal that our learned topologies could achieve lower delay compared with common DCN topologies. Moreover, we believe that the method can be extended to other topology metrics, e.g., throughput, by simply modifying the reward functions.","PeriodicalId":285763,"journal":{"name":"2023 International Conference on Information Networking (ICOIN)","volume":"123 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 International Conference on Information Networking (ICOIN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICOIN56518.2023.10048955","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

This paper is concerned with the topology design of data center networks (DCNs) for low latency and fewer links using deep reinforcement learning (DRL). Starting from a K-vertex-connected graph, we propose an interactive framework with single-objective and multi-objective DRL agents to learn DCN topologies for given node traffic matrices by choosing link matrices to represent the states and actions as well as using the average shortest path length together with action penalty terms as reward feedback. Comparisons with commonly used DCN topologies are given to show the effectiveness and merits of our method. The results reveal that our learned topologies could achieve lower delay compared with common DCN topologies. Moreover, we believe that the method can be extended to other topology metrics, e.g., throughput, by simply modifying the reward functions.
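To make the reward described in the abstract concrete, below is a minimal sketch (not taken from the paper) of how such a reward could be computed for a candidate link matrix: the negative average shortest path length minus a penalty proportional to the number of links. The penalty weight `lam`, the handling of disconnected graphs, and the use of the networkx library are illustrative assumptions only.

```python
import networkx as nx
import numpy as np

def topology_reward(link_matrix: np.ndarray, lam: float = 0.1) -> float:
    """Reward for a candidate DCN topology.

    link_matrix: symmetric 0/1 adjacency matrix describing the topology (state/action).
    lam: hypothetical weight on the link-count (action penalty) term.
    Returns the negative average shortest path length minus the penalty.
    """
    G = nx.from_numpy_array(link_matrix)
    if not nx.is_connected(G):
        # Assumption: heavily penalize topologies that break K-vertex connectivity.
        return -1e3
    aspl = nx.average_shortest_path_length(G)
    return -aspl - lam * G.number_of_edges()

# Example: a 4-node ring topology
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
print(topology_reward(A))
```

A DRL agent maximizing this reward is pushed toward topologies with short average paths while being discouraged from adding unnecessary links, which matches the low-latency, fewer-links objective stated in the abstract.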