{"title":"Robust Deep Learning for Wireless Network Optimization","authors":"Shuai Zhang, Bo Yin, Suyang Wang, Y. Cheng","doi":"10.1109/ICC40277.2020.9149445","DOIUrl":null,"url":null,"abstract":"Wireless optimization involves repeatedly solving difficult optimization problems, and data-driven deep learning techniques have great promise to alleviate this issue through its pattern matching capability: past optimal solutions can be used as the training data in a supervised learning paradigm so that the neural network can generate an approximate solution using a fraction of the computational cost, due to its high representing power and parallel implementation. However, making this approach practical in networking scenarios requires careful, domain-specific consideration, currently lacking in similar works. In this paper, we use deep learning in a wireless network scheduling and routing to predict if subsets of the network links are going to be used, so that the effective problem scale is reduced. A real-world concern is the varying data importance: training samples are not equally important due to class imbalance or different label quality. To compensate for this fact, we develop an adaptive sample weighting scheme which dynamically weights the batch samples in the training process. In addition, we design a novel loss function that uses additional network-layer feature information to improve the solution quality. We also discuss a post-processing step that gives a good threshold value to balance the trade-off between prediction quality and problem scale reduction. 
By numerical simulations, we demonstrate that these measures improve both the prediction quality and scale reduction when training from data of varied importance.","PeriodicalId":106560,"journal":{"name":"ICC 2020 - 2020 IEEE International Conference on Communications (ICC)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ICC 2020 - 2020 IEEE International Conference on Communications (ICC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICC40277.2020.9149445","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Wireless optimization involves repeatedly solving difficult optimization problems, and data-driven deep learning techniques hold great promise for alleviating this burden through their pattern-matching capability: past optimal solutions can serve as training data in a supervised learning paradigm, so that a neural network, with its high representational power and parallel implementation, can generate an approximate solution at a fraction of the computational cost. However, making this approach practical in networking scenarios requires careful, domain-specific consideration that is currently lacking in similar works. In this paper, we apply deep learning to a wireless network scheduling and routing problem to predict whether subsets of the network links will be used, thereby reducing the effective problem scale. A real-world concern is varying data importance: training samples are not equally important, owing to class imbalance or differing label quality. To compensate, we develop an adaptive sample weighting scheme that dynamically weights the samples in each batch during training. In addition, we design a novel loss function that incorporates additional network-layer feature information to improve the solution quality. We also discuss a post-processing step that selects a threshold value balancing the trade-off between prediction quality and problem-scale reduction. Through numerical simulations, we demonstrate that these measures improve both prediction quality and scale reduction when training on data of varied importance.
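The abstract does not spell out the weighting scheme or the threshold rule, so the following is only a minimal illustrative sketch of the two ideas it names: per-batch sample weighting that counteracts class imbalance (here assumed to be inverse-frequency weighting, which may differ from the paper's actual scheme), and a post-processing sweep that picks a link-usage threshold subject to a hypothetical miss-rate tolerance. All function names and the `max_miss_rate` parameter are invented for illustration.

```python
import numpy as np

def adaptive_sample_weights(labels, base_weight=1.0):
    """One simple form of adaptive per-batch weighting: weight each
    class inversely to its frequency in the current batch, so rare
    "link used" samples are not drowned out by the majority class.
    (Assumed scheme; the paper's exact rule is not in the abstract.)"""
    labels = np.asarray(labels, dtype=float)
    pos_frac = labels.mean()
    w_pos = base_weight / max(pos_frac, 1e-6)
    w_neg = base_weight / max(1.0 - pos_frac, 1e-6)
    return np.where(labels == 1, w_pos, w_neg)

def weighted_bce(probs, labels, weights):
    """Per-sample weighted binary cross-entropy over a batch."""
    probs = np.clip(np.asarray(probs, dtype=float), 1e-7, 1 - 1e-7)
    labels = np.asarray(labels, dtype=float)
    losses = -(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))
    return float(np.sum(weights * losses) / np.sum(weights))

def choose_threshold(probs, labels, max_miss_rate=0.05):
    """Post-processing sketch: scan candidate thresholds and keep the
    largest one whose false-negative rate on truly used links stays
    within tolerance -- a higher threshold prunes more links (larger
    scale reduction) at the risk of missing needed ones. The tolerance
    criterion here is a hypothetical stand-in for the paper's rule."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels)
    best = 0.0
    for t in np.linspace(0.0, 1.0, 101):
        pred_used = probs >= t
        pos = labels == 1
        miss = float(np.mean(~pred_used[pos])) if pos.any() else 0.0
        if miss <= max_miss_rate:
            best = t
    return best
```

In this sketch the weights are recomputed from each batch's own label statistics, which is what makes the weighting "dynamic": a batch with few positive (used-link) samples up-weights them more aggressively than a balanced batch would.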