Latest Articles from IEEE Transactions on Machine Learning in Communications and Networking

Optimizing Immersive Services With Parallel In-Network Rendering and Deep RL
IEEE Transactions on Machine Learning in Communications and Networking. Pub Date: 2026-02-20. DOI: 10.1109/TMLCN.2026.3666742
Manel Gherari;Adyson Maia;Mouhamad Dieye;Halima Elbiaze;Yacine Ghamri-Doudane;Roch H. Glitho
Abstract: This paper addresses the challenge of delivering low-latency, scalable immersive experiences by exploiting a hybrid continuum of cloud, edge, and In-Network Computing (INC) resources. Delivering such experiences requires the transfer of a large number of digital assets of different sizes, many of them consisting of large, static scene elements corresponding to service-specific and user-specific components. We argue that such elements can be separated within an in-network rendering farm while dynamically caching popular assets and synchronizing rapidly changing, user-centric data at INC, edge, or cloud nodes. To orchestrate these heterogeneous resources efficiently, we formulate a multi-objective optimization problem—maximizing resource efficiency, minimizing end-to-end latency, and maximizing user request acceptance—and solve it via a deep reinforcement learning (DRL) framework that adaptively assigns functions across all layers in real time. Our popularity-based replication and pre-caching further reduce latency for the most frequently accessed assets, while lightweight rendering operations are offloaded directly onto programmable switches to cut down on round-trip delays. Extensive simulations, benchmarked against multiple baselines, demonstrate that our approach consistently maintains sub-20 ms end-to-end delays and achieves superior resource utilization efficiency under dynamic workloads. These results validate the potential of integrating INC into the compute continuum together with DRL-driven orchestration to meet the stringent Quality of Service (QoS) and Quality of Experience (QoE) requirements of next-generation immersive applications.

Volume 4, pp. 491–513. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11402906
Citations: 0
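Before a DRL agent can act on the multi-objective formulation above, the three objectives (resource efficiency, end-to-end latency, request acceptance) are typically scalarized into a single reward. The sketch below shows one common way to do that; the weights, the rejection penalty, and the 20 ms latency budget are illustrative assumptions, not the paper's actual reward design.

```python
def placement_reward(latency_ms, resource_efficiency, accepted,
                     w_lat=0.4, w_eff=0.3, w_acc=0.3, latency_budget_ms=20.0):
    """Scalarize three orchestration objectives into one DRL reward.

    latency_ms: end-to-end delay experienced by the served request
    resource_efficiency: fraction of allocated capacity doing useful work, in [0, 1]
    accepted: whether the request was admitted at all
    """
    if not accepted:
        return -1.0  # penalize rejections so the agent learns to maximize acceptance
    # Latency score shrinks linearly toward 0 as delay approaches the
    # sub-20 ms budget the paper targets for immersive services.
    latency_score = max(0.0, 1.0 - latency_ms / latency_budget_ms)
    return w_lat * latency_score + w_eff * resource_efficiency + w_acc * 1.0
```

A real agent would observe node loads and asset popularity as state and emit placement actions; this function only illustrates how the three competing objectives can be traded off in one scalar signal.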
Generalization-Enhanced Channel Estimation Through Adaptive Interpolation and Multi-Task Learning-Based Denoising Network
IEEE Transactions on Machine Learning in Communications and Networking. Pub Date: 2026-02-16. DOI: 10.1109/TMLCN.2026.3664840
Bolin Wang;Li Chen;Xiaohui Chen;Weidong Wang
Abstract: Accurate channel state information (CSI) estimation with low pilot overhead is desirable for multiple-input multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) systems. In existing channel estimation methods, both interpolation and denoising suffer from poor generalization. In this paper, we propose an adaptive interpolation and multi-task learning denoising network for generalization-enhanced CSI estimation. First, we model the wireless channel as a Gaussian process (GP) and use Bayesian optimization (BO) to find the optimal parameters of the Matérn kernel for interpolation. For each channel matrix, we adaptively find the most suitable kernel parameters to achieve precise interpolation. Then, we design a multi-task residual network (MT-Net) based on multi-task learning. In MT-Net, shared layers exploit the information common to multiple tasks, while task-specific layers extract the characteristics of each task. Compared to single-task learning, MT-Net shares information across tasks to enhance the scenario generalization of the model. Simulation results show that when the application scenario changes, our method exhibits stronger generalization than other neural network-assisted methods.

Volume 4, pp. 528–541. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11397076
Citations: 0
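The GP-with-Matérn-kernel interpolation step can be sketched in plain NumPy. This toy fixes the Matérn smoothness at nu = 3/2, uses a real-valued 1-D "channel" in place of a complex MIMO-OFDM channel matrix, and replaces the paper's Bayesian optimization with a crude leave-out grid search over the length scale; all of those simplifications are assumptions of the sketch, not the paper's method.

```python
import numpy as np

def matern32(x1, x2, length_scale):
    """Matérn kernel, nu = 3/2: k(r) = (1 + sqrt(3) r / l) * exp(-sqrt(3) r / l)."""
    r = np.abs(x1[:, None] - x2[None, :])
    s = np.sqrt(3.0) * r / length_scale
    return (1.0 + s) * np.exp(-s)

def gp_interpolate(pilot_x, pilot_y, query_x, length_scale, noise_var=1e-3):
    """GP posterior mean at query_x given noisy pilot observations."""
    K = matern32(pilot_x, pilot_x, length_scale) + noise_var * np.eye(pilot_x.size)
    k_star = matern32(query_x, pilot_x, length_scale)
    return k_star @ np.linalg.solve(K, pilot_y)

rng = np.random.default_rng(0)
subcarriers = np.arange(64, dtype=float)
# Smooth toy channel across 64 subcarriers (real part only).
true_channel = np.sin(2 * np.pi * subcarriers / 32) + 0.5 * np.cos(2 * np.pi * subcarriers / 16)
pilot_x = subcarriers[::4]                     # pilots on every 4th subcarrier
pilot_y = true_channel[::4] + 0.05 * rng.standard_normal(pilot_x.size)

# Stand-in for the paper's BO: pick the length scale that best predicts
# held-out pilots from the remaining ones.
best_l, best_err = None, np.inf
for l in (2.0, 4.0, 8.0):
    held_out = gp_interpolate(pilot_x[::2], pilot_y[::2], pilot_x[1::2], l)
    err = float(np.mean((held_out - pilot_y[1::2]) ** 2))
    if err < best_err:
        best_l, best_err = l, err

est = gp_interpolate(pilot_x, pilot_y, subcarriers, best_l)
rmse = float(np.sqrt(np.mean((est - true_channel) ** 2)))
print(f"chosen length scale {best_l}, interpolation RMSE {rmse:.3f}")
```

The per-matrix adaptivity described in the abstract corresponds to rerunning the length-scale search for each new channel realization rather than fixing one global kernel.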
Multi-Generator Continual Learning for Robust Delay Prediction in 6G
IEEE Transactions on Machine Learning in Communications and Networking. Pub Date: 2026-02-10. DOI: 10.1109/TMLCN.2026.3663092
Xiaoyu Lan;Jalil Taghia;Hannes Larsson;Andreas Johnsson
Abstract: Future 6G networks will enable dependable telecommunication services, such as remote control of robots or vehicles, with strict requirements on end-to-end network performance in terms of delay, delay variation, tail distributions, and throughput. For such networks, it is paramount to be able to determine what performance level a network segment can guarantee at a given point in time. One promising approach is to use predictive models trained with machine learning (ML). Predicting performance metrics such as one-way delay (OWD) in a timely manner provides valuable insights for the network, user equipments (UEs), and applications to address performance trends, deviations, and violations. Over time, a dynamic network environment produces distributional shifts, which cause catastrophic forgetting and degrade ML model performance. In continual learning (CL), the model aims to balance stability and plasticity, learning new information while preserving previously acquired knowledge. In this paper, we target the challenge of catastrophic forgetting in OWD prediction models. We propose a novel approach that introduces multiple generators into the state-of-the-art CL generative replay framework, using tabular variational autoencoders (TVAE) as generators. Domain knowledge of UE capabilities is incorporated into the learning process to determine generator setup and relevance. The proposed approach is evaluated across a diverse set of scenarios with data collected in a realistic 5G testbed, demonstrating strong performance compared to baselines.

Volume 4, pp. 457–472. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11390699
Citations: 0
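The core of generative replay is simple: before training on data from a new regime, pseudo-samples of earlier regimes are drawn from generators and mixed into the batch so the model does not forget them. The paper trains one TVAE per UE-capability group; the sketch below substitutes a fitted Gaussian as the per-domain generator purely to keep the example runnable, which is an assumption of this sketch.

```python
import numpy as np

class GaussianReplayGenerator:
    """Per-domain generator stand-in. The paper uses one TVAE per
    UE-capability group; a diagonal Gaussian fitted to the old data
    plays that role here."""
    def fit(self, X):
        self.mean = X.mean(axis=0)
        self.std = X.std(axis=0) + 1e-8
        return self

    def sample(self, n, rng):
        return self.mean + self.std * rng.standard_normal((n, self.mean.size))

rng = np.random.default_rng(0)
old_task = rng.normal(0.0, 1.0, size=(500, 4))   # features from an earlier network regime
new_task = rng.normal(3.0, 1.0, size=(500, 4))   # features after a distributional shift

gen = GaussianReplayGenerator().fit(old_task)
replay = gen.sample(500, rng)                    # pseudo-samples of the old regime
train_batch = np.vstack([new_task, replay])      # train the OWD predictor on both
print(train_batch.shape)
```

With multiple generators, one `fit`/`sample` pair would exist per domain, and the UE-capability knowledge mentioned in the abstract would decide which generators' replay is relevant for the current update.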
Peering Partner Recommendation for ISPs Using Machine Learning
IEEE Transactions on Machine Learning in Communications and Networking. Pub Date: 2026-02-06. DOI: 10.1109/TMLCN.2026.3661907
Md Ibrahim Ibne Alam;Ankur Senapati;Anindo Mahmood;Murat Yuksel;Koushik Kar
Abstract: Global Internet connectivity relies heavily on interconnection among Internet Service Providers (ISPs), achieved by accessing transit services or establishing direct peering relationships through Internet eXchange Points (IXPs). The latter offers more room for ISP-specific optimizations and is preferred, but often involves a lengthy and convoluted process to set up peering agreements. Automating peering partner selection can greatly reduce this complexity. In this paper, we explore the use of publicly available data on ISPs to develop a machine learning (ML) approach that predicts whether ISP pairs should peer. First, we construct a large-scale dataset by processing and integrating information from public repositories (e.g., PeeringDB, CAIDA) and extract a diverse set of autonomous-system-level features as inputs to ML models. We then evaluate three broad classes of ML models, i.e., tree-based, neural network-based, and transformer-based, for predicting peering relationships. Among them, tree-based models perform best in our experiments, with XGBoost achieving 98% accuracy along with strong balanced accuracy and F1 scores in predicting peering partners. In addition, the model exhibits high robustness to variations in time, geographic region, and data incompleteness, indicating that it generalizes well in the rapidly evolving Internet landscape. We envision that ISPs can adopt our method to automate their peering partner selection process, transitioning to a more efficient and optimized Internet ecosystem.

Volume 4, pp. 514–527. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11373593
Citations: 0
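The prediction task above is binary classification over AS-pair features, with tree ensembles winning. As a dependency-free illustration of the tree-ensemble idea, the sketch below trains AdaBoost over decision stumps on synthetic stand-in features; the feature names, the labeling rule, and the choice of AdaBoost instead of XGBoost are all assumptions of this sketch, not the paper's pipeline.

```python
import numpy as np

def fit_stump(X, y, w):
    """Best weighted decision stump: (feature index, threshold, polarity)."""
    best = (0, 0.0, 1, np.inf)  # feature, threshold, polarity, weighted error
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - t) > 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, t, pol, err)
    return best

def adaboost(X, y, rounds=30):
    """AdaBoost over stumps: a compact stand-in for the XGBoost trees used in the paper."""
    w = np.full(X.shape[0], 1.0 / X.shape[0])
    stumps, alphas = [], []
    for _ in range(rounds):
        j, t, pol, err = fit_stump(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(pol * (X[:, j] - t) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)   # upweight misclassified pairs
        w /= w.sum()
        stumps.append((j, t, pol))
        alphas.append(alpha)
    def predict(Xq):
        score = sum(a * np.where(p * (Xq[:, j] - t) > 0, 1, -1)
                    for (j, t, p), a in zip(stumps, alphas))
        return np.where(score > 0, 1, -1)
    return predict

rng = np.random.default_rng(42)
n = 1200
# Hypothetical stand-ins for AS-pair features (the real ones come from
# PeeringDB/CAIDA): size similarity, shared-IXP count, geographic distance.
X = rng.standard_normal((n, 3))
y = np.where(1.5 * X[:, 0] + 1.0 * X[:, 1] - 0.8 * X[:, 2]
             + 0.4 * rng.standard_normal(n) > 0, 1, -1)

predict = adaboost(X[:900], y[:900])
acc = float(np.mean(predict(X[900:]) == y[900:]))
print(f"held-out accuracy: {acc:.3f}")
```

In practice the paper's feature set is far richer, and gradient-boosted trees (XGBoost) replace this toy ensemble; the train/evaluate structure is the same.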
Receiver-Agnostic Radio Frequency Fingerprint Identification via Federated Learning
IEEE Transactions on Machine Learning in Communications and Networking. Pub Date: 2026-02-06. DOI: 10.1109/TMLCN.2026.3661912
Faiza Gul;Xiangyun Zhou;Amanda S. Barnard;Salman Durrani
Abstract: Ensuring secure and reliable wireless connectivity is essential for modern Internet of Things (IoT) applications. Radio frequency fingerprint identification (RFFI) has emerged as a promising lightweight device authentication mechanism that leverages unique hardware-induced features in transmitted signals. This paper proposes a federated RFFI framework specifically designed to tackle open challenges associated with receiver drift, label-skewed data distributions, and client selection. The framework introduces a receiver-agnostic training scheme based on adversarial learning in a distributed setting, enabling the global model to suppress receiver-specific features while retaining transmitter-distinctive representations. Evaluations on a real-world dataset confirm that the proposed framework improves transmitter classification accuracy on previously unseen receivers by up to 40% compared to a baseline non-adversarial approach. It also presents a systematic analysis of label-skewed data distributions, revealing that model performance degrades as skew increases and motivating strategies to address this issue. To that end, a label-loss-driven client selection strategy is proposed, which prioritizes the most informative clients based on their contribution to transmitter classification accuracy, resulting in faster convergence and improved generalization. Under high label skew, the proposed client selection strategy achieves a convergence improvement of 49–51% over baselines, with communication overhead reduced by 27–49% and computation overhead by about 50%. Overall, this work provides a practical and effective solution for deploying RFFI in scalable, resource-constrained IoT systems.

Volume 4, pp. 473–490. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11373634
Citations: 0
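The abstract does not spell out the label-loss-driven selection rule, so the sketch below assumes the simplest plausible reading: each round, pick the k clients whose reported local classification loss is highest, on the grounds that they are the most informative. Treat the criterion as a hypothetical stand-in for the paper's exact strategy.

```python
import numpy as np

def select_clients(client_losses, k):
    """Pick the k clients with the highest local transmitter-classification
    loss (assumed proxy for 'most informative'); only those train this round."""
    losses = np.asarray(client_losses, dtype=float)
    return np.argsort(losses)[::-1][:k].tolist()

# Ten clients report their last-round local losses; select the 3 most informative.
losses = [0.2, 1.5, 0.7, 0.3, 2.1, 0.9, 0.4, 1.1, 0.6, 0.5]
print(select_clients(losses, 3))  # → [4, 1, 7]
```

Selecting a subset per round is also where the reported communication and computation savings come from: unselected clients neither train nor upload model updates that round.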
Adaptive Semantic Token Communication for Transformer-Based Edge Inference
IEEE Transactions on Machine Learning in Communications and Networking. Pub Date: 2026-01-30. DOI: 10.1109/TMLCN.2026.3659819
Alessio Devoto;Jary Pomponi;Mattia Merluzzi;Paolo Di Lorenzo;Simone Scardapane
Abstract: This paper presents an adaptive framework for edge inference based on a dynamically configurable, transformer-powered deep joint source-channel coding (DJSCC) architecture. Motivated by a practical scenario in which a resource-constrained edge device engages in goal-oriented semantic communication, such as selectively transmitting essential features for object detection to an edge server, our approach enables efficient, task-aware data transmission under varying bandwidth and channel conditions. To achieve this, input data is tokenized into compact, high-level semantic representations, refined by a transformer, and transmitted over noisy wireless channels. As part of the DJSCC pipeline, we employ a semantic token selection mechanism that adaptively compresses informative features into a user-specified number of tokens per sample. These tokens are then further compressed through the JSCC module, enabling a flexible token communication strategy that adjusts both the number of transmitted tokens and their embedding dimensions. We also incorporate a resource allocation algorithm based on Lyapunov stochastic optimization to enhance robustness under dynamic network conditions, effectively balancing compression efficiency and task performance. Experimental results demonstrate that our system consistently outperforms existing baselines, highlighting its potential as a strong foundation for AI-native semantic communication in edge intelligence applications.

Volume 4, pp. 422–437. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11369909
Citations: 0
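The token-selection step, reducing a full token grid to a user-specified budget, can be sketched in a few lines. The paper's selector is trained end-to-end inside the DJSCC pipeline; the sketch below substitutes the L2 norm of each token as the importance score, which is an assumption made only to keep the example self-contained.

```python
import numpy as np

def select_tokens(tokens, k):
    """Keep the k tokens with the largest L2 norm (a stand-in for a learned
    importance score) while preserving their original positions/order."""
    scores = np.linalg.norm(tokens, axis=1)
    keep = np.argsort(scores)[::-1][:k]
    return tokens[np.sort(keep)]

rng = np.random.default_rng(1)
tokens = rng.standard_normal((196, 64))   # e.g. a 14x14 ViT patch grid, embedding dim 64
compressed = select_tokens(tokens, 32)    # user-specified budget of 32 tokens
print(compressed.shape)
```

The second knob the abstract mentions, shrinking the embedding dimension of the surviving tokens, would then be applied by the JSCC encoder before transmission.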
Distributionally Robust Federated Learning With Client Drift Minimization
IEEE Transactions on Machine Learning in Communications and Networking. Pub Date: 2026-01-26. DOI: 10.1109/TMLCN.2026.3658026
Mounssif Krouka;Chaouki Ben Issaid;Mehdi Bennis
Abstract: Federated learning (FL) faces critical challenges, particularly in heterogeneous environments where non-independent and identically distributed (non-IID) data across clients can lead to unfair and inefficient model performance. We introduce DRDM, a novel algorithm that integrates distributionally robust optimization (DRO) with dynamic regularization to explicitly mitigate client drift. Unlike previous approaches that address robustness or drift separately, DRDM combines both aspects within a unified framework, dynamically aligning local updates with the global robust objective to improve convergence toward a worst-case optimal model while maintaining fairness across clients. The robust objective is optimized through efficient local updates, which significantly reduce the number of communication rounds. We provide a theoretical convergence analysis for convex smooth objectives under partial client participation and multiple local update steps. Experiments on three benchmark datasets, covering various model architectures and levels of data heterogeneity, show that DRDM consistently improves worst-case test accuracy while requiring fewer communication rounds than state-of-the-art baselines. Furthermore, we analyze the impact of signal-to-noise ratio (SNR) and bandwidth on energy consumption, demonstrating that adaptive selection of local updates can achieve a target worst-case accuracy with minimal total energy cost across diverse communication environments.

Volume 4, pp. 438–456. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11363576
Citations: 0
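A common way to realize the DRO side of such a framework is to maintain a weight per client on the probability simplex and push weight toward the clients with the worst loss via exponentiated-gradient (mirror) ascent, so the global model optimizes an adversarial mixture. The sketch below shows that generic update; DRDM's exact update rule and its dynamic regularizer are not reproduced here.

```python
import numpy as np

def dro_weight_step(client_losses, weights, lr=0.5):
    """One exponentiated-gradient ascent step on the simplex: clients with
    higher loss receive more weight, steering the aggregate objective toward
    the worst-case client mixture (a generic DRO update, not DRDM's exact rule)."""
    w = weights * np.exp(lr * np.asarray(client_losses, dtype=float))
    return w / w.sum()

losses = np.array([0.2, 0.9, 0.4])   # last-round losses of three clients
w = np.full(3, 1.0 / 3.0)            # start from the uniform mixture
for _ in range(5):
    w = dro_weight_step(losses, w)
print(np.round(w, 3))
```

In a full FL round, the server would aggregate client updates with these weights instead of uniform (or data-size) weights, which is what improves the worst-case test accuracy the abstract reports.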
QoS Prediction for Satellite-Based Avionic Communication Using Transformers
IEEE Transactions on Machine Learning in Communications and Networking. Pub Date: 2026-01-13. DOI: 10.1109/TMLCN.2026.3653719
Hind Mukhtar;Raymond Schaub;Melike Erol-Kantarci
Abstract: Satellite-based communication systems are crucial for providing high-speed data services in aviation, particularly for business aviation operations that demand global connectivity. These systems face challenges from numerous interdependent factors, such as satellite handovers, congestion, flight maneuvers, and seasonal variations, making accurate Quality of Service (QoS) prediction complex. Currently, there is no established methodology for predicting QoS in avionic communication systems. This paper addresses this gap by proposing machine learning-based approaches for pre-flight QoS prediction. Specifically, we leverage transformer models to predict QoS along a given flight path using real-world data. The model takes as input a variety of positional and network-related features, such as aircraft location, satellite information, historical QoS, and handover probabilities, and outputs a predicted performance score for each position along the flight. This allows for proactive decision-making, enabling flight crews to select optimal flight paths before departure and improving overall operational efficiency in business aviation. Our encoder-decoder transformer model achieved an overall prediction accuracy of 65% and an RMSE of 1.91, a significant improvement over traditional baseline methods. While these metrics are notable, the model's key contribution is a substantial improvement in prediction accuracy for underrepresented classes, a major limitation of prior approaches. Additionally, the model significantly reduces inference time, producing predictions in 40 seconds compared to 6,353 seconds for a traditional KNN model.

Volume 4, pp. 300–317. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11348973
Citations: 0
Dictionary Learning for Phase-Less Beam Alignment Codebook Design in Multipath Channels
IEEE Transactions on Machine Learning in Communications and Networking. Pub Date: 2026-01-12. DOI: 10.1109/TMLCN.2026.3653010
Benjamin W. Domae;Danijela Cabric
Abstract: Large antenna arrays are critical for reliability and high data rates in wireless networks at millimeter-wave and sub-terahertz bands. While traditional initial beam alignment methods for analog phased arrays scale alignment overhead linearly with array size, compressive sensing (CS) and machine learning (ML) algorithms can scale logarithmically. CS and ML methods typically use pseudo-random or heuristic beam designs as compressive codebooks. However, these codebooks may not be optimal in scenarios with uncertain array impairments or multipath, particularly when measurements are phase-less or power-based. In this work, we propose a novel dictionary learning method to design codebooks for phase-less beam alignment under multipath and unknown impairment statistics. The codebook learning algorithm uses alternating optimization with block coordinate descent to update the codebooks, and Monte Carlo trials over multipath and impairments to incorporate a priori knowledge of the hardware and environment. Additionally, we discuss engineering considerations for the codebook design algorithm, including a comparison of three proposed loss functions and three proposed beam alignment algorithms used for codebook learning. As one of the three beam alignment methods, we propose transfer learning for ML-based beam alignment to reduce the training time of both the ML model and codebook learning. We demonstrate that codebook learning and our ML-based beam alignment algorithms can significantly reduce beam alignment overhead in terms of the number of measurements required.

Volume 4, pp. 318–336. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11346817
Citations: 0
UCINet0: A Machine Learning-Based Receiver for 5G NR PUCCH Format 0
IEEE Transactions on Machine Learning in Communications and Networking. Pub Date: 2026-01-05. DOI: 10.1109/TMLCN.2025.3650730
Jeeva Keshav Sattianarayanin;Anil Kumar Yerrapragada;Radha Krishna Ganti
Abstract: Accurate decoding of Uplink Control Information (UCI) on the Physical Uplink Control Channel (PUCCH) is essential for enabling 5G wireless links. This paper explores an AI/ML-based receiver design for PUCCH Format 0. Format 0 signaling encodes the UCI content within the phase of a known base waveform and supports multiplexing of up to 12 users within the same time-frequency resources. The proposed neural network classifier, which we term UCINet0, can predict when no user is transmitting on the PUCCH, as well as decode the UCI content for any number of multiplexed users (up to 12). Test results with simulated, hardware-captured (lab), and field datasets show that the UCINet0 model outperforms conventional correlation-based decoders across all signal-to-noise ratio (SNR) ranges and multiple fading scenarios.

Volume 4, pp. 282–299. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11328864
Citations: 0
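The conventional baseline that UCINet0 is compared against can be sketched directly: PUCCH Format 0 maps the UCI to a cyclic phase shift of a known base sequence over 12 resource elements, and a correlation receiver tests every candidate shift and picks the strongest. In the sketch below a unit-modulus random sequence stands in for the actual low-PAPR base sequence defined in 3GPP TS 38.211, and single-user AWGN reception is assumed.

```python
import numpy as np

def correlation_decode(received, base_seq, n_shifts=12):
    """Correlate the received symbols against every cyclic phase shift of the
    base sequence and return the index of the strongest candidate."""
    n = np.arange(base_seq.size)
    shifts = np.arange(n_shifts)
    # Candidate waveforms: base sequence rotated by shift*n phase ramps.
    candidates = base_seq[None, :] * np.exp(
        1j * 2 * np.pi * shifts[:, None] * n[None, :] / n_shifts)
    scores = np.abs(candidates.conj() @ received)
    return int(np.argmax(scores))

rng = np.random.default_rng(7)
base = np.exp(1j * 2 * np.pi * rng.random(12))   # stand-in unit-modulus base sequence
tx_shift = 5                                     # UCI content mapped to cyclic shift 5
n = np.arange(12)
tx = base * np.exp(1j * 2 * np.pi * tx_shift * n / 12)
rx = tx + 0.2 * (rng.standard_normal(12) + 1j * rng.standard_normal(12))  # AWGN

decoded = correlation_decode(rx, base)
print(decoded)
```

UCINet0 replaces this argmax-over-correlations with a learned classifier, which additionally covers the no-transmission case and up to 12 users multiplexed on different shifts, situations where the simple single-user correlator above degrades.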