Parallel Network Slicing for Multi-SP Services
Rongxin Han, Deliang Chen, Song Guo, Xiaoyuan Fu, Jingyu Wang, Q. Qi, J. Liao
Proceedings of the 51st International Conference on Parallel Processing, published 2022-08-29
DOI: 10.1145/3545008.3545070 (https://doi.org/10.1145/3545008.3545070)
Citations: 1
Abstract
Network slicing is rapidly becoming prevalent in the edge cloud, which provides computing, network, and storage resources for various services. When multiple service providers (SPs) respond to their tenants in parallel, independent decisions made on the dynamic, shared edge cloud may lead to resource conflicts. The resource conflict problem can be formulated as a multi-objective constrained optimization model; however, it is challenging to solve due to the complexity of resource interactions caused by co-existing multi-SP policies. We therefore propose CommDRL, a scheme based on multi-agent deep reinforcement learning (MADRL) and multi-agent communication, to tackle this challenge. CommDRL coordinates network resources between SPs with low overhead. Moreover, we design neurons hotplugging learning in CommDRL to handle the dynamic edge cloud, achieving scalability without the high cost of model retraining. Experiments demonstrate that CommDRL successfully obtains deployment policies and easily adapts to various network scales. It improves accepted requests by 7.4%, reduces resource conflicts by 14.5%, and shortens model convergence time by 83.3%.
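To make the coordination idea concrete, the sketch below is a minimal, hypothetical illustration (not the paper's CommDRL implementation): two SP agents each propose an edge node for their slice, broadcast a lightweight intent message, and a simple resolver grants or defers requests that would exceed node capacity. The class names, greedy placement rule, and capacity values are assumptions introduced purely for illustration; in CommDRL the per-SP policies are learned with MADRL and coordinated via inter-agent communication.

```python
# Hypothetical sketch: parallel SP slice placement with lightweight
# intent messages and conflict-aware admission. Not the paper's code.
import numpy as np

NUM_NODES = 4
node_capacity = np.array([8.0, 6.0, 4.0, 2.0])   # CPU units per edge node (assumed)
committed_load = np.zeros(NUM_NODES)             # shared edge-cloud state


class SPAgent:
    """One service provider deciding where to place its slice request."""

    def __init__(self, name, demand):
        self.name = name
        self.demand = demand                      # CPU units requested

    def propose(self):
        # Greedy stand-in for the learned MADRL policy: pick the freest node.
        free = node_capacity - committed_load
        return int(np.argmax(free))

    def message(self, node):
        # Lightweight communication payload: intended node and demand only.
        return {"sp": self.name, "node": node, "demand": self.demand}


def resolve(messages):
    """Grant proposals in order; defer any request that would overload a node."""
    granted, deferred = [], []
    for msg in messages:
        node = msg["node"]
        if committed_load[node] + msg["demand"] <= node_capacity[node]:
            committed_load[node] += msg["demand"]
            granted.append(msg)
        else:
            deferred.append(msg)                  # re-proposed in the next step
    return granted, deferred


agents = [SPAgent("SP-A", demand=5.0), SPAgent("SP-B", demand=5.0)]
msgs = [a.message(a.propose()) for a in agents]
granted, deferred = resolve(msgs)
print("granted:", granted)    # SP-A fits on node 0
print("deferred:", deferred)  # SP-B would exceed node 0's capacity and is deferred
```

With both agents preferring the same node, the second request is deferred instead of silently overcommitting the node, which is the kind of conflict the exchanged messages let the SPs avoid before deployment.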