{"title":"Information Bottleneck-Based Domain Adaptation for Hybrid Deep Learning in Scalable Network Slicing","authors":"Tianlun Hu;Qi Liao;Qiang Liu;Georg Carle","doi":"10.1109/TMLCN.2024.3485520","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3485520","url":null,"abstract":"Network slicing enables operators to efficiently support diverse applications on a shared infrastructure. However, the evolving complexity of networks, compounded by inter-cell interference, necessitates agile and adaptable resource management. While deep learning offers solutions for coping with complexity, its adaptability to dynamic configurations remains limited. In this paper, we propose a novel hybrid deep learning algorithm called IDLA (integrated deep learning with the Lagrangian method). This integrated approach aims to enhance the scalability, flexibility, and robustness of slicing resource allocation solutions by harnessing the high approximation capability of deep learning and the strong generalization of classical non-linear optimization methods. Then, we introduce a variational information bottleneck (VIB)-assisted domain adaptation (DA) approach to enhance integrated deep learning and Lagrangian method (IDLA)’s adaptability across diverse network environments and conditions. We propose pre-training a variational information bottleneck (VIB)-based Quality of Service (QoS) estimator, using slice-specific inputs shared across all source domain slices. Each target domain slice can deploy this estimator to predict its QoS and optimize slice resource allocation using the IDLA algorithm. This VIB-based estimator is continuously fine-tuned with a mixture of samples from both the source and target domains until convergence. Evaluating on a multi-cell network with time-varying slice configurations, the VIB-enhanced IDLA algorithm outperforms baselines such as heuristic and deep reinforcement learning-based solutions, achieving twice the convergence speed and 16.52% higher asymptotic performance after slicing configuration changes. Transferability assessment demonstrates a 25.66% improvement in estimation accuracy with VIB, especially in scenarios with significant domain gaps, highlighting its robustness and effectiveness across diverse domains.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"1642-1660"},"PeriodicalIF":0.0,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10734592","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142579172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Polarization-Aware Channel State Prediction Using Phasor Quaternion Neural Networks","authors":"Anzhe Ye;Haotian Chen;Ryo Natsuaki;Akira Hirose","doi":"10.1109/TMLCN.2024.3485521","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3485521","url":null,"abstract":"The performance of a wireless communication system depends to a large extent on the wireless channel. Due to the multipath fading environment during the radio wave propagation, channel prediction plays a vital role to enable adaptive transmission for wireless communication systems. Predicting various channel characteristics by using neural networks can help address more complex communication environments. However, achieving this goal typically requires the simultaneous use of multiple distinct neural models, which is undoubtedly unaffordable for mobile communications. Therefore, it is necessary to enable a simpler structure to simultaneously predict multiple channel characteristics. In this paper, we propose a fading channel prediction method using phasor quaternion neural networks (PQNNs) to predict the polarization states, with phase information involved to enhance the channel compensation ability. We evaluate the performance of the proposed PQNN method in two different fading situations in an actual environment, and we find that the proposed scheme provides 2.8 dB and 4.0 dB improvements at bit error rate (BER) of \u0000<inline-formula> <tex-math>$10^{-4}$ </tex-math></inline-formula>\u0000, showing better BER performance in light and serious fading situations, respectively. This work also reveals that by treating polarization information and phase information as a single entity, the model can leverage their physical correlation to achieve improved performance.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"1628-1641"},"PeriodicalIF":0.0,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10731896","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142579173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TWIRLD: Transformer Generated Terahertz Waveform for Improved Radio Link Distance","authors":"Shuvam Chakraborty;Claire Parisi;Dola Saha;Ngwe Thawdar","doi":"10.1109/TMLCN.2024.3483111","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3483111","url":null,"abstract":"terahertz (THz) band communication is envisioned as one of the leading technologies to meet the exponentially growing data rate requirements of emerging and future wireless communication networks. Utilizing the contiguous bandwidth available at THz frequencies requires a transceiver design tailored to tackle issues existing at these frequencies such as strong propagation and absorption loss, small scale fading (e.g. scattering, reflection, refraction), hardware non-linearity, etc. In prior works, multicarrier waveforms (e.g., Orthogonal Frequency Division Multiplexing (OFDM)) are shown to be efficient in tackling some of these issues. However, OFDM introduces a drawback in the form of peak-to-average power ratio (PAPR) which, compounded with strong propagation and absorption loss and high noise power due to large bandwidth at THz and sub-THz frequencies, severely limits link distances and, in turn, capacity, preventing efficient bandwidth usage. In this work, we propose \u0000<monospace>TWIRLD</monospace>\u0000 - a deep learning (DL)-based joint optimization method, modeled and implemented as components of end-to-end transceiver chain. \u0000<monospace>TWIRLD</monospace>\u0000 performs a symbol remapping at baseband of OFDM signals, which increases average transmit power while also optimizing the bit error rate (BER). We provide theoretical analysis, statistical equivalence of \u0000<monospace>TWIRLD</monospace>\u0000 to the ideal receiver, and comprehensive complexity and footprint estimates. We validate \u0000<monospace>TWIRLD</monospace>\u0000 in simulation showing link distance improvement of up to 91% and compare the results with legacy and state of the art methods and their enhanced versions. Finally, \u0000<monospace>TWIRLD</monospace>\u0000 is validated with over the air (OTA) communication using a state-of-the-art testbed at 140 GHz up to a bandwidth of 5 GHz where we observe improvement of up to 79% in link distance accommodating for practical channel and other transmission losses.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"1595-1614"},"PeriodicalIF":0.0,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10720922","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142550544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recursive GNNs for Learning Precoding Policies With Size-Generalizability","authors":"Jia Guo;Chenyang Yang","doi":"10.1109/TMLCN.2024.3480044","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3480044","url":null,"abstract":"Graph neural networks (GNNs) have been shown promising in optimizing power allocation and link scheduling with good size generalizability and low training complexity. These merits are important for learning wireless policies under dynamic environments, which partially come from the matched permutation equivariance (PE) properties of the GNNs to the policies to be learned. Nonetheless, it has been noticed in literature that only satisfying the PE property of a precoding policy in multi-antenna systems cannot ensure a GNN for learning precoding to be generalizable to the unseen problem scales. Incorporating models with GNNs helps improve size generalizability, which however is only applicable to specific problems, settings, and algorithms. In this paper, we propose a framework of size generalizable GNNs for learning precoding policies that are purely data-driven and can learn wireless policies including but not limited to baseband and hybrid precoding in multi-user multi-antenna systems. To this end, we first find a special structure of each iteration of several numerical algorithms for optimizing precoding, from which we identify the key characteristics of a GNN that affect its size generalizability. Then, we design size-generalizable GNNs that are with these key characteristics and satisfy the PE properties of precoding policies in a recursive manner. Simulation results show that the proposed GNNs can be well-generalized to the number of users for learning baseband and hybrid precoding policies, require much fewer samples than existing GNNs and shorter inference time than numerical algorithms to achieve the same performance.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"1558-1579"},"PeriodicalIF":0.0,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10716720","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142540477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"NeIL: Intelligent Replica Selection for Distributed Applications","authors":"Faraz Ahmed;Lianjie Cao;Ayush Goel;Puneet Sharma","doi":"10.1109/TMLCN.2024.3479109","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3479109","url":null,"abstract":"Distributed applications such as cloud gaming, streaming, etc., are increasingly using edge-to-cloud infrastructure for high availability and performance. While edge infrastructure brings services closer to the end-user, the number of sites on which the services need to be replicated has also increased. This makes replica selection challenging for clients of the replicated services. Traditional replica selection methods including anycast based methods and DNS re-directions are performance agnostic, and clients experience degraded network performance when network performance dynamics are not considered in replica selection. In this work, we present a client-side replica selection framework NeIL, that enables network performance aware replica selection. We propose to use bandits with experts based Multi-Armed Bandit (MAB) algorithms and adapt these algorithms for replica selection at individual clients without centralized coordination. We evaluate our approach using three different setups including a distributed Mininet setup where we use publicly available network performance data from the Measurement Lab (M-Lab) to emulate network conditions, a setup where we deploy replica servers on AWS, and finally we present results from a global enterprise deployment. Our experimental results show that in comparison to greedy selection, NeIL performs better than greedy for 45% of the time and better than or equal to greedy selection for 80% of the time resulting in a net gain in end-to-end network performance. On AWS, we see similar results where NeIL performs better than or equal to greedy for 75% of the time. We have successfully deployed NeIL in a global enterprise remote device management service with over 4000 client devices and our analysis shows that NeIL achieves significantly better tail service quality by cutting the \u0000<inline-formula> <tex-math>$99th$ </tex-math></inline-formula>\u0000 percentile tail latency from 5.6 seconds to 1.7 seconds.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"1580-1594"},"PeriodicalIF":0.0,"publicationDate":"2024-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10714467","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142540425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Intelligent and Programmable Data Plane for QoS-Aware Packet Processing","authors":"Muhammad Saqib;Halime Elbiaze;Roch H. Glitho;Yacine Ghamri-Doudane","doi":"10.1109/TMLCN.2024.3475968","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3475968","url":null,"abstract":"One of the main features of data plane programmability is that it allows the easy deployment of a programmable network traffic management framework. One can build an early-stage Internet traffic classifier to facilitate effective Quality of Service (QoS) provisioning. However, maintaining accuracy and efficiency (i.e., processing delay/pipeline latency) in early-stage traffic classification is challenging due to memory and operational constraints in the network data plane. Additionally, deploying network-wide flow-specific rules for QoS leads to significant memory usage and overheads. To address these challenges, we propose new architectural components encompassing efficient processing logic into the programmable traffic management framework. In particular, we propose a single feature-based traffic classification algorithm and a stateless QoS-aware packet scheduling mechanism. Our approach first focuses on maintaining accuracy and processing efficiency in early-stage traffic classification by leveraging a single input feature - sequential packet size information. We then use the classifier to embed the Service Level Objective (SLO) into the header of the packets. Carrying SLOs inside the packet allows QoS-aware packet processing through admission control-enabled priority queuing. The results show that most flows are properly classified with the first four packets. Furthermore, using the SLO-enabled admission control mechanism on top of the priority queues enables stateless QoS provisioning. Our approach outperforms the classical and objective-based priority queuing in managing heterogeneous traffic demands by improving network resource utilization.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"1540-1557"},"PeriodicalIF":0.0,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10706883","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142443062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning Radio Environments by Differentiable Ray Tracing","authors":"Jakob Hoydis;Fayçal Aït Aoudia;Sebastian Cammerer;Florian Euchner;Merlin Nimier-David;Stephan Ten Brink;Alexander Keller","doi":"10.1109/TMLCN.2024.3474639","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3474639","url":null,"abstract":"Ray tracing (RT) is instrumental in 6G research in order to generate spatially-consistent and environment-specific channel impulse responses (CIRs). While acquiring accurate scene geometries is now relatively straightforward, determining material characteristics requires precise calibration using channel measurements. We therefore introduce a novel gradient-based calibration method, complemented by differentiable parametrizations of material properties, scattering and antenna patterns. Our method seamlessly integrates with differentiable ray tracers that enable the computation of derivatives of CIRs with respect to these parameters. Essentially, we approach field computation as a large computational graph wherein parameters are trainable akin to weights of a neural network (NN). We have validated our method using both synthetic data and real-world indoor channel measurements, employing a distributed multiple-input multiple-output (MIMO) channel sounder.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"1527-1539"},"PeriodicalIF":0.0,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10705152","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142442991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Smart Jamming Attack and Mitigation on Deep Transfer Reinforcement Learning Enabled Resource Allocation for Network Slicing","authors":"Shavbo Salehi;Hao Zhou;Medhat Elsayed;Majid Bavand;Raimundas Gaigalas;Yigit Ozcan;Melike Erol-Kantarci","doi":"10.1109/TMLCN.2024.3470760","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3470760","url":null,"abstract":"Network slicing is a pivotal paradigm in wireless networks enabling customized services to users and applications. Yet, intelligent jamming attacks threaten the performance of network slicing. In this paper, we focus on the security aspect of network slicing over a deep transfer reinforcement learning (DTRL) enabled scenario. We first demonstrate how a deep reinforcement learning (DRL)-enabled jamming attack exposes potential risks. In particular, the attacker can intelligently jam resource blocks (RBs) reserved for slices by monitoring transmission signals and perturbing the assigned resources. Then, we propose a DRL-driven mitigation model to mitigate the intelligent attacker. Specifically, the defense mechanism generates interference on unallocated RBs where another antenna is used for transmitting powerful signals. This causes the jammer to consider these RBs as allocated RBs and generate interference for those instead of the allocated RBs. The analysis revealed that the intelligent DRL-enabled jamming attack caused a significant 50% degradation in network throughput and 60% increase in latency in comparison with the no-attack scenario. However, with the implemented mitigation measures, we observed 80% improvement in network throughput and 70% reduction in latency in comparison to the under-attack scenario.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"1492-1508"},"PeriodicalIF":0.0,"publicationDate":"2024-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10699421","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142397284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimizing Resource Fragmentation in Virtual Network Function Placement Using Deep Reinforcement Learning","authors":"Ramy Mohamed;Marios Avgeris;Aris Leivadeas;Ioannis Lambadaris","doi":"10.1109/TMLCN.2024.3469131","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3469131","url":null,"abstract":"In the 6G wireless era, the strategical deployment of Virtual Network Functions (VNFs) within a network infrastructure that optimizes resource utilization while fulfilling performance criteria is critical for successfully implementing the Network Function Virtualization (NFV) paradigm across the Edge-to-Cloud continuum. This is especially prominent when resource fragmentation –where available resources become isolated and underutilized– becomes an issue due to the frequent reallocations of VNFs. However, traditional optimization methods often struggle to deal with the dynamic and complex nature of the VNF placement problem when fragmentation is considered. This study proposes a novel online VNF placement approach for Edge/Cloud infrastructures that utilizes Deep Reinforcement Learning (DRL) and Reward Constrained Policy Optimization (RCPO) to address this problem. We combine DRL’s adaptability with RCPO’s constraint incorporation capabilities to ensure that the learned policies satisfy the performance and resource constraints while minimizing resource fragmentation. Specifically, the VNF placement problem is first formulated as an offline-constrained optimization problem, and then we devise an online solver using Neural Combinatorial Optimization (NCO). Our method incorporates a metric called Resource Fragmentation Degree (RFD) to quantify fragmentation in the network. Using this metric and RCPO, our NCO agent is trained to make intelligent placement decisions that reduce fragmentation and optimize resource utilization. An error correction heuristic complements the robustness of the proposed framework. Through extensive testing in a simulated environment, the proposed approach is shown to outperform state-of-the-art VNF placement techniques when it comes to minimizing resource fragmentation under constraint satisfaction guarantees.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"1475-1491"},"PeriodicalIF":0.0,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10695455","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142397282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Removing the Need for Ground Truth UWB Data Collection: Self-Supervised Ranging Error Correction Using Deep Reinforcement Learning","authors":"Dieter Coppens;Ben van Herbruggen;Adnan Shahid;Eli de Poorter","doi":"10.1109/TMLCN.2024.3469128","DOIUrl":"https://doi.org/10.1109/TMLCN.2024.3469128","url":null,"abstract":"Indoor positioning using UWB technology has gained interest due to its centimeter-level accuracy potential. However, multipath effects and non-line-of-sight conditions cause ranging errors between anchors and tags. Existing approaches for mitigating these ranging errors rely on collecting large labeled datasets, making them impractical for real-world deployments. This paper proposes a novel self-supervised deep reinforcement learning approach that does not require labeled ground truth data. A reinforcement learning agent uses the channel impulse response as a state and predicts corrections to minimize the error between corrected and estimated ranges. The agent learns, self-supervised, by iteratively improving corrections that are generated by combining the predictability of trajectories with filtering and smoothening. Experiments on real-world UWB measurements demonstrate comparable performance to state-of-the-art supervised methods, overcoming data dependency and lack of generalizability limitations. This makes self-supervised deep reinforcement learning a promising solution for practical and scalable UWB-ranging error correction.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"1615-1627"},"PeriodicalIF":0.0,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10695458","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142565484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}