IEEE Transactions on Machine Learning in Communications and Networking: Latest Articles

IEEE Communications Society Board of Governors
IEEE Transactions on Machine Learning in Communications and Networking · Pub Date: 2024-02-23 · DOI: 10.1109/TMLCN.2024.3366609
Pages: C3-C3
Citations: 0
Outage Performance and Novel Loss Function for an ML-Assisted Resource Allocation: An Exact Analytical Framework
IEEE Transactions on Machine Learning in Communications and Networking · Pub Date: 2024-02-22 · DOI: 10.1109/TMLCN.2024.3369007
Authors: Nidhi Simmons; David E. Simmons; Michel Daoud Yacoub
Pages: 335-350
Abstract: In this paper, we present Machine Learning (ML) solutions to address the reliability challenges likely to be encountered in advanced wireless systems (5G, 6G, and beyond). Specifically, we introduce a novel loss function to minimize the outage probability of an ML-based resource allocation system. A single-user multi-resource greedy allocation strategy constitutes our application scenario, for which an ML binary classification predictor assists in selecting a resource satisfying the established outage criterion. While other resource allocation policies may be suitable, they are not the focus of our study. Instead, our primary emphasis is on theoretically developing this loss function and leveraging it to train an ML model to address the outage probability challenge. With no access to future channel state information, this predictor foresees each resource's likely future outage status. When the predictor encounters a resource it believes will be satisfactory, it allocates it to the user. The predictor aims to ensure that a user avoids resources likely to undergo an outage. Our main result establishes exact and asymptotic expressions for this system's outage probability. These expressions reveal that focusing solely on the optimization of the per-resource outage probability conditioned on the ML predictor recommending resource allocation (a strategy that, at face value, looks to be the most appropriate) may produce inadequate predictors that reject every resource. They also reveal that focusing on standard metrics, like precision, false-positive rate, or recall, may not produce optimal predictors. With our result, we formulate a theoretically optimal, differentiable loss function to train our predictor. We then compare predictors trained using this and traditional loss functions, namely binary cross-entropy (BCE), mean squared error (MSE), and mean absolute error (MAE). In all scenarios, predictors trained using our novel loss function provide superior outage probability performance. Moreover, in some cases, our loss function outperforms predictors trained with BCE, MAE, and MSE by multiple orders of magnitude. Additionally, when applied to another ML-based resource allocation scheme (a modified greedy algorithm), our proposed loss function maintains its efficacy.
Citations: 0
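The paper derives its loss function analytically; as a rough illustration of the underlying idea only, a toy differentiable surrogate for the system outage probability of a greedy single-user allocator might look like the following (the acceptance-probability formulation and all names are assumptions, not the authors' construction):

```python
import numpy as np

def outage_surrogate_loss(p, y):
    """Toy differentiable surrogate for the system outage probability of a
    single-user greedy allocator (a sketch, not the paper's exact loss).

    p[k]: predictor's probability of accepting resource k
    y[k]: 1 if resource k would actually avoid an outage, else 0

    The system is in outage if every resource is rejected, or if the first
    accepted resource turns out to be in outage.
    """
    p = np.asarray(p, dtype=float)
    y = np.asarray(y, dtype=float)
    # Probability that the greedy scan rejects every resource.
    reject_all = np.prod(1.0 - p)
    # prefix[k] = prod_{j<k} (1 - p[j]): all earlier resources were rejected.
    prefix = np.cumprod(np.concatenate(([1.0], 1.0 - p)))[:-1]
    # Probability that the first accepted resource is a bad one (y[k] = 0).
    bad_accept = np.sum(prefix * p * (1.0 - y))
    return reject_all + bad_accept
```

Unlike per-resource BCE, such a system-level objective assigns maximal loss to the "reject everything" predictor, consistent with the failure mode the abstract describes.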
On Learning Generalized Wireless MAC Communication Protocols via a Feasible Multi-Agent Reinforcement Learning Framework
IEEE Transactions on Machine Learning in Communications and Networking · Pub Date: 2024-02-20 · DOI: 10.1109/TMLCN.2024.3368367
Authors: Luciano Miuccio; Salvatore Riolo; Sumudu Samarakoon; Mehdi Bennis; Daniela Panno
Pages: 298-317
Abstract: Automatically learning medium access control (MAC) communication protocols via multi-agent reinforcement learning (MARL) has received considerable attention as a way to cater to the extremely diverse real-world scenarios expected in 6G wireless networks. Several state-of-the-art solutions adopt the centralized training with decentralized execution (CTDE) learning method, where agents learn optimal MAC protocols by exploiting the information exchanged with a central unit. Despite the promising results achieved in these works, two notable challenges are neglected. First, these works were designed to be trained in computer simulations, assuming an omniscient environment and neglecting communication overhead issues, thus making the implementation impractical in real-world scenarios. Second, the learned protocols fail to generalize outside of the scenario they were trained on. In this paper, we propose a new feasible learning framework that enables practical implementations of training procedures, thus allowing learned MAC protocols to be tailor-made for the scenario where they will be executed. Moreover, to address the second challenge, we leverage the concept of state abstraction and incorporate it into the MARL framework for better generalization. As a result, the policies are learned in an abstracted observation space that contains only useful information extracted from the original high-dimensional and redundant observation space. Simulation results show that our feasible learning framework exhibits performance comparable to that of the infeasible solutions. In addition, the learning frameworks adopting observation abstraction offer better generalization capabilities in terms of the number of UEs, number of data packets to transmit, and channel conditions.
Citations: 0
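State abstraction, as invoked here, maps a high-dimensional, redundant observation onto a compact representation before policy learning. A minimal sketch (the feature choice and binning scheme are illustrative assumptions, not the paper's abstraction):

```python
def abstract_state(queue_len, snr_db, max_queue=10, snr_edges=(0.0, 5.0, 10.0, 15.0)):
    # Illustrative abstraction: cap the buffer occupancy and quantize the
    # channel quality into a few bins, discarding the rest of the raw
    # observation. Policies learned over (q, s) pairs can transfer across
    # scenarios whose raw observations differ but abstract similarly.
    q = min(int(queue_len), max_queue)
    s = sum(1 for edge in snr_edges if snr_db >= edge)
    return (q, s)
```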
Getting the Best Out of Both Worlds: Algorithms for Hierarchical Inference at the Edge
IEEE Transactions on Machine Learning in Communications and Networking · Pub Date: 2024-02-14 · DOI: 10.1109/TMLCN.2024.3366501
Authors: Vishnu Narayanan Moothedath; Jaya Prakash Champati; James Gross
Pages: 280-297
Abstract: We consider a resource-constrained Edge Device (ED), such as an IoT sensor or a microcontroller unit, embedded with a small-size ML model (S-ML) for a generic classification application and an Edge Server (ES) that hosts a large-size ML model (L-ML). Since the inference accuracy of S-ML is lower than that of the L-ML, offloading all the data samples to the ES results in high inference accuracy, but it defeats the purpose of embedding S-ML on the ED and forgoes the benefits of reduced latency, bandwidth savings, and energy efficiency of doing local inference. In order to get the best out of both worlds, i.e., the benefits of doing inference on the ED and the benefits of doing inference on the ES, we explore the idea of Hierarchical Inference (HI), wherein S-ML inference is only accepted when it is correct; otherwise, the data sample is offloaded for L-ML inference. However, the ideal implementation of HI is infeasible as the correctness of the S-ML inference is not known to the ED. We thus propose an online meta-learning framework that the ED can use to predict the correctness of the S-ML inference. In particular, we propose to use the probability corresponding to the maximum probability class output by S-ML for a data sample and decide whether to offload it or not. The resulting online learning problem turns out to be a Prediction with Expert Advice (PEA) problem with continuous expert space. For a full feedback scenario, where the ED receives feedback on the correctness of the S-ML once it accepts the inference, we propose the HIL-F algorithm and prove a sublinear regret bound $\sqrt{n\ln(1/\lambda_{\text{min}})/2}$ without any assumption on the smoothness of the loss function, where $n$ is the number of data samples and $\lambda_{\text{min}}$ is the minimum difference between any two distinct maximum probability values across the data samples. For a no-local feedback scenario, where the ED does not receive the ground truth for the classification, we propose the HIL-N algorithm and prove that it has an $O\left(n^{2/3}\ln^{1/3}(1/\lambda_{\text{min}})\right)$ regret bound. We evaluate and benchmark the performance of the proposed algorithms for an image classification application using four datasets, namely Imagenette, Imagewoof, MNIST, and CIFAR-10.
Citations: 0
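The core offloading rule uses the S-ML's top-class probability as a correctness proxy. A minimal sketch with a fixed threshold (the learned, regret-bounded threshold of HIL-F/HIL-N is replaced here by a hypothetical constant):

```python
import numpy as np

def hi_decide(s_ml_probs, threshold=0.8):
    # HI offloading rule (sketch): accept the on-device S-ML prediction when
    # its top-class probability clears the threshold; otherwise offload the
    # sample to the edge server's L-ML model. The fixed threshold stands in
    # for the per-sample decision the HIL-F / HIL-N algorithms learn online.
    conf = float(np.max(s_ml_probs))
    if conf >= threshold:
        return "accept", int(np.argmax(s_ml_probs))
    return "offload", None
```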
Stealthy Adversarial Attacks on Machine Learning-Based Classifiers of Wireless Signals
IEEE Transactions on Machine Learning in Communications and Networking · Pub Date: 2024-02-13 · DOI: 10.1109/TMLCN.2024.3366161
Authors: Wenhan Zhang; Marwan Krunz; Gregory Ditzler
Pages: 261-279
Abstract: Machine learning (ML) has been successfully applied to classification tasks in many domains, including computer vision, cybersecurity, and communications. Although highly accurate classifiers have been developed, research shows that these classifiers are, in general, vulnerable to adversarial machine learning (AML) attacks. In one type of AML attack, the adversary trains a surrogate classifier (called the attacker's classifier) to produce intelligently crafted low-power "perturbations" that degrade the accuracy of the targeted (defender's) classifier. In this paper, we focus on radio frequency (RF) signal classifiers and study their vulnerabilities to AML attacks. Specifically, we consider several exemplary protocol and modulation classifiers, designed using convolutional neural networks (CNNs) and recurrent neural networks (RNNs). We first show the high accuracy of such classifiers under additive white Gaussian noise (AWGN). We then study their performance under three types of low-power AML perturbations (FGSM, PGD, and DeepFool), considering different amounts of information at the attacker. On one extreme (the so-called "white-box" attack), the attacker has complete knowledge of the defender's classifier and its training data. As expected, our results reveal that in this case, the AML attack significantly degrades the defender's classification accuracy. We gradually reduce the attacker's knowledge and study five attack scenarios that represent different amounts of information at the attacker. Surprisingly, even when the attacker has limited or no knowledge of the defender's classifier and its power is relatively low, the attack is still significant. We also study various practical issues related to the wireless environment, including channel impairments and misalignment between attacker and transmitter signals. Furthermore, we study the effectiveness of intermittent AML attacks. Even under such imperfections, a low-power AML attack can still significantly reduce the defender's classification accuracy for both protocol and modulation classifiers. Lastly, we propose a two-step adversarial training mechanism to defend against AML attacks and contrast its performance against other state-of-the-art defense strategies. The proposed defense approach increases the classification accuracy by up to 50%, even in scenarios where the attacker has perfect knowledge of the defender and exhibits a relatively large power budget.
Citations: 0
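Of the three perturbation types studied, FGSM is the simplest: a single signed-gradient step. A sketch against a toy linear stand-in for the defender's classifier (the linear model, weights, and epsilon are illustrative assumptions):

```python
import numpy as np

def fgsm_perturb(x, grad_loss_wrt_x, epsilon=0.05):
    # FGSM (sketch): one step of size epsilon in the sign direction of the
    # attacker's loss gradient, yielding a bounded low-power perturbation.
    return x + epsilon * np.sign(grad_loss_wrt_x)

# Toy linear "classifier" standing in for the defender's CNN/RNN:
# the true-class score is w @ x, and the attacker's loss is -score,
# so the gradient of the loss with respect to the input is -w.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
x_adv = fgsm_perturb(x, grad_loss_wrt_x=-w)
```

The perturbation is bounded elementwise by epsilon, which is what keeps the attack "low-power" relative to the legitimate signal.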
Buyers Collusion in Incentivized Forwarding Networks: A Multi-Agent Reinforcement Learning Study
IEEE Transactions on Machine Learning in Communications and Networking · Pub Date: 2024-02-12 · DOI: 10.1109/TMLCN.2024.3365420
Authors: Mostafa Ibrahim; Sabit Ekin; Ali Imran
Pages: 240-260
Abstract: We examine the issue of monetarily incentivized forwarding in a multi-hop mesh network architecture from an economic perspective. It is anticipated that credit-incentivized forwarding and relaying will be a simple method of exchanging transmission power and spectrum for connectivity. However, gateways and forwarding nodes, as in any free market, may create an oligopolistic market for the users they serve. In this study, a coalition scheme between buyers aims to counter price control by gateways or nodes closer to gateways. In a Stackelberg competition game, buyer agents (users) and sellers (gateways) make decisions using reinforcement learning (RL), with decentralized Deep Q-Networks used to buy and sell forwarding resources. We allow communication links between the buyers with a limited messaging space, without defining a collusion mechanism. The idea is to demonstrate that, through messaging and RL, tacit collusion can emerge between agents in a decentralized setup. The multi-agent reinforcement learning (MARL) system is presented and analyzed from a machine-learning perspective. Moreover, MARL dynamics are discussed via mean-field analysis to better understand causes of divergence and to make implementation recommendations for such systems. Finally, simulation results demonstrate the emergence of coordination among the users.
Citations: 0
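The agents here are decentralized Deep Q-Networks; the tabular Q-learning update that a DQN approximates can serve as a compact stand-in (the pricing states, actions, and step sizes below are hypothetical, not from the paper):

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    # Tabular Q-learning update (a stand-in for the paper's DQN agents):
    # move Q(s, a) toward the bootstrapped target r + gamma * max_a' Q(s', a').
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
    return Q[state][action]

# Hypothetical seller with two pricing states and a single "sell" action.
Q = {"low_price": {"sell": 0.0}, "high_price": {"sell": 1.0}}
updated = q_update(Q, "low_price", "sell", reward=1.0, next_state="high_price")
```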
GAN-Based Evasion Attack in Filtered Multicarrier Waveforms Systems
IEEE Transactions on Machine Learning in Communications and Networking · Pub Date: 2024-02-02 · DOI: 10.1109/TMLCN.2024.3361834
Authors: Kawtar Zerhouni; Gurjot Singh Gaba; Mustapha Hedabou; Taras Maksymyuk; Andrei Gurtov; El Mehdi Amhoud
Pages: 210-220
Abstract: Generative adversarial networks (GANs), a category of deep learning models, have become a cybersecurity concern for wireless communication systems. These networks enable potential attackers to deceive receivers that rely on convolutional neural networks (CNNs) by transmitting deceptive wireless signals that are statistically indistinguishable from genuine ones. While GANs have been used before for digitally modulated single-carrier waveforms, this study explores their applicability to model filtered multi-carrier waveforms, such as orthogonal frequency-division multiplexing (OFDM), filtered orthogonal FDM (F-OFDM), generalized FDM (GFDM), filter bank multi-carrier (FBMC), and universal filtered MC (UFMC). In this research, an evasion attack is conducted using GAN-generated counterfeit filtered multi-carrier signals to trick the target receiver. The results show a remarkable 99.7% probability of the receiver misclassifying these GAN-based fabricated signals as authentic ones. This highlights the urgent need to investigate preventive measures against this vulnerability.
Citations: 0
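The headline 99.7% figure is a misclassification rate at the receiver. How such a rate would be measured can be sketched as follows (the receiver interface and labels are assumptions for illustration):

```python
import numpy as np

def evasion_success_rate(receiver, fake_signals):
    # Fraction of GAN-generated waveforms that the receiver classifies as
    # authentic (label 1); the paper reports this reaching 99.7% for its
    # filtered multi-carrier counterfeits.
    preds = np.array([receiver(s) for s in fake_signals])
    return float(np.mean(preds == 1))

# Hypothetical usage with dummy receivers that always accept or always reject.
signals = [np.zeros(4) for _ in range(10)]
fooled = evasion_success_rate(lambda s: 1, signals)
robust = evasion_success_rate(lambda s: 0, signals)
```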
Unleashing the Potential of Knowledge Distillation for IoT Traffic Classification
IEEE Transactions on Machine Learning in Communications and Networking · Pub Date: 2024-01-31 · DOI: 10.1109/TMLCN.2024.3360915
Authors: Mahmoud Abbasi; Amin Shahraki; Javier Prieto; Angélica González Arrieta; Juan M. Corchado
Pages: 221-239
Abstract: The Internet of Things (IoT) has revolutionized our lives by generating large amounts of data; however, this data needs to be collected, processed, and analyzed in real time. Network Traffic Classification (NTC) in IoT is a crucial step for optimizing network performance, enhancing security, and improving user experience. Various methods have been introduced for NTC, and Machine Learning (ML) solutions have recently received considerable attention in this field. However, traditional ML methods struggle with the complexity and heterogeneity of IoT traffic, as well as the limited resources of IoT devices. Deep learning shows promise but is computationally intensive for resource-constrained IoT devices. Knowledge distillation addresses this by compressing complex models into smaller ones suitable for IoT devices. In this paper, we examine the use of knowledge distillation for IoT traffic classification. Through experiments, we show that the student model achieves a balance between accuracy and efficiency. It exhibits accuracy similar to that of the larger teacher model while maintaining a smaller size. This makes it a suitable alternative for resource-constrained scenarios like mobile or IoT traffic classification. We find that the knowledge distillation technique effectively transfers knowledge from the teacher model to the student model, even with reduced training data. The results also demonstrate the robustness of the approach, as the student model performs well even with the removal of certain classes. Additionally, we highlight the trade-off between model capacity and computational cost, suggesting that increasing model size beyond a certain point may not be beneficial. The findings emphasize the value of soft labels in training student models with limited data resources.
Citations: 0
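The soft labels the abstract emphasizes come from the standard distillation objective: cross-entropy between the teacher's and student's temperature-softened class distributions. A minimal sketch (the temperature and the omission of the hard-label term are simplifications, not the paper's exact setup):

```python
import numpy as np

def softened(logits, T):
    # Temperature-softened softmax: larger T spreads probability mass,
    # exposing the teacher's "dark knowledge" about non-target classes.
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Soft-label distillation term (sketch): cross-entropy between the
    # teacher's and student's softened distributions. In practice this is
    # mixed with the ordinary hard-label loss on ground-truth classes.
    p_t = softened(teacher_logits, T)
    p_s = softened(student_logits, T)
    return float(-np.sum(p_t * np.log(p_s + 1e-12)))
```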
Joint Active and Passive Beamforming for IRS-Assisted Monostatic Backscatter Systems: An Unsupervised Learning Approach
IEEE Transactions on Machine Learning in Communications and Networking · Pub Date: 2024-01-17 · DOI: 10.1109/TMLCN.2024.3355317
Authors: Sahar Idrees; Salman Durrani; Zhiwei Xu; Xiaolun Jia; Xiangyun Zhou
Pages: 1389-1403
Abstract: Backscatter Communication (BackCom) has been envisioned as a key enabler for ubiquitous connectivity in the Internet of Things (IoT). However, the inherent issues of limited range and low achievable bit rate are prominent barriers to the widespread deployment of BackCom. In this work, we address these challenges by considering a monostatic BackCom system assisted by an intelligent reflecting surface (IRS) and controlled seamlessly by a data-driven deep learning (DL)-based approach. We propose a deep residual convolutional neural network (DRCNN), BackIRS-Net, that exploits the unique coupling between the IRS phase shifts and the beamforming at the reader to jointly optimize these quantities in order to maximize the effective signal-to-noise ratio (SNR) of the backscatter signal received at the reader. We show that the performance of a trained BackIRS-Net is close to that of the conventional optimization-based approach while requiring much less computational complexity and time, which indicates the utility of this scheme for real-time deployment. Our results show that an IRS of moderate size can significantly improve backscatter SNR, resulting in range extension by a factor of 4 for monostatic BackCom, an important improvement in the context of BackCom-based IoT systems.
Citations: 0
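For intuition on why IRS phase shifts boost the effective SNR, the classical single-antenna baseline has a closed form: rotate each cascaded reflect path to add coherently with the direct path. This is a textbook baseline, not BackIRS-Net itself, and the variable names are assumptions:

```python
import numpy as np

def aligned_phases(h_direct, h_rx, h_tx):
    # Closed-form optimal IRS phases for a single-antenna link (sketch):
    # rotate each cascaded path g_i = h_rx[i] * h_tx[i] so that it adds
    # coherently (in phase) with the direct path.
    g = h_rx * h_tx
    return np.angle(h_direct) - np.angle(g)

def effective_gain(h_direct, h_rx, h_tx, theta):
    # Effective channel power gain for a given IRS phase configuration;
    # the received SNR is proportional to this quantity.
    g = h_rx * h_tx
    return np.abs(h_direct + np.sum(g * np.exp(1j * theta))) ** 2

# Hypothetical random channels for an 8-element IRS.
rng = np.random.default_rng(0)
h_d = rng.normal() + 1j * rng.normal()
h_rx = rng.normal(size=8) + 1j * rng.normal(size=8)
h_tx = rng.normal(size=8) + 1j * rng.normal(size=8)
theta_star = aligned_phases(h_d, h_rx, h_tx)
```

In the monostatic backscatter setting the signal traverses the IRS twice and couples with the reader's beamformer, which is why the paper resorts to a learned joint design rather than this closed form.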
Learning Resource Allocation Policy: Vertex-GNN or Edge-GNN?
IEEE Transactions on Machine Learning in Communications and Networking · Pub Date: 2024-01-16 · DOI: 10.1109/TMLCN.2024.3354872
Authors: Yao Peng; Jia Guo; Chenyang Yang
Pages: 190-209
Abstract: Graph neural networks (GNNs) update the hidden representations of vertices (called Vertex-GNNs) or the hidden representations of edges (called Edge-GNNs) by processing and pooling the information of neighboring vertices and edges and combining them to exploit topology information. When learning resource allocation policies, GNNs cannot perform well if their expressive power is weak, i.e., if they cannot differentiate all input features such as channel matrices. In this paper, we analyze the expressive power of Vertex-GNNs and Edge-GNNs for learning three representative wireless policies: link scheduling, power control, and precoding policies. We find that the expressive power of the GNNs depends on the linearity and output dimensions of the processing and combination functions. When linear processors are used, the Vertex-GNNs cannot differentiate all channel matrices due to the loss of channel information, while the Edge-GNNs can. When learning the precoding policy, even the Vertex-GNNs with non-linear processors may lack strong expressive power due to dimension compression. We proceed to provide necessary conditions for the GNNs to learn the precoding policy well. Simulation results validate the analyses and show that the Edge-GNNs can achieve the same performance as the Vertex-GNNs with much lower training and inference time.
Citations: 0
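The loss-of-information argument for linear Vertex-GNNs can be seen on a tiny example: two distinct channel matrices whose row and column sums coincide produce identical vertex aggregates, while per-edge states keep them apart. The aggregation functions below are a minimal caricature of the two architectures, not the paper's exact networks:

```python
import numpy as np

# Two distinct 2x2 channel matrices with identical row sums and column sums.
H1 = np.array([[1.0, 2.0], [2.0, 1.0]])
H2 = np.array([[2.0, 1.0], [1.0, 2.0]])

def vertex_embeddings(H):
    # Linear Vertex-GNN aggregation (sketch): each transmitter vertex sums
    # its outgoing channel coefficients, each receiver its incoming ones.
    # Summation discards which coefficient belongs to which link.
    return np.concatenate([H.sum(axis=1), H.sum(axis=0)])

def edge_embeddings(H):
    # An Edge-GNN keeps one hidden state per edge, so no per-link
    # information is pooled away.
    return H.ravel()
```

Here `vertex_embeddings(H1)` equals `vertex_embeddings(H2)` even though the channels differ, so no downstream layers can separate them; the edge representations remain distinct.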