IEEE Transactions on Machine Learning in Communications and Networking: Latest Articles

A New Heterogeneous Hybrid Massive MIMO Receiver With an Intrinsic Ability of Removing Phase Ambiguity of DOA Estimation via Machine Learning
IEEE Transactions on Machine Learning in Communications and Networking Pub Date : 2024-11-26 DOI: 10.1109/TMLCN.2024.3506874
Feng Shu;Baihua Shi;Yiwen Chen;Jiatong Bai;Yifan Li;Tingting Liu;Zhu Han;Xiaohu You
Abstract: Massive multiple-input multiple-output (MIMO) antenna arrays incur substantial circuit cost and computational complexity. To meet the need for high precision at low cost in future green wireless communication, the conventional hybrid analog-digital MIMO receive structure is a natural choice. However, it suffers from phase ambiguity in direction-of-arrival (DOA) estimation and requires at least two time slots to complete a single DOA measurement: the first time slot generates the set of candidate solutions, and the second finds the true direction by receive beamforming over this set, leading to low time efficiency. To address this problem, a new heterogeneous sub-connected hybrid analog-digital ($\mathrm{H}^{2}$AD) MIMO structure with an intrinsic ability to remove phase ambiguity is proposed, and a corresponding new framework is developed to achieve rapid, high-precision DOA estimation in only a single time slot. The proposed framework consists of two steps: 1) form a set of candidate solutions using existing methods such as MUSIC; 2) find the class of true solutions and compute the class mean. To infer the set of true solutions, we propose two new clustering methods: weighted global minimum distance (WGMD) and weighted local minimum distance (WLMD). We also enhance two classic clustering methods: accelerated local weighted k-means (ALW-K-means) and improved DBSCAN. Additionally, the corresponding closed-form expression of the Cramer-Rao lower bound (CRLB) is derived. Simulation results show that the proposed framework with the above four clustering methods approaches the CRLB in almost all signal-to-noise ratio (SNR) regions except at extremely low SNR (SNR < -5 dB). The four clustering methods rank in decreasing order of accuracy as follows: WGMD, improved DBSCAN, ALW-K-means, and WLMD.
Vol. 3, pp. 17-29. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10767772
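The second step above (selecting the class of true solutions among ambiguous candidates and averaging it) can be illustrated with a toy sketch. This is not the paper's WGMD/WLMD algorithm; the function name and the variance-based tightness criterion are simplifications introduced here for illustration only.

```python
import numpy as np
from itertools import product

def pick_true_doa(candidate_sets):
    """Given per-subarray candidate DOA sets (the true angle is common to
    all sets, ambiguous candidates scatter), choose one candidate per set
    so the chosen group is tightest, then return the group mean.
    A simplified stand-in for weighted minimum-distance clustering."""
    best, best_spread = None, np.inf
    # brute force over one pick per set (fine for small candidate counts)
    for combo in product(*candidate_sets):
        spread = np.var(combo)
        if spread < best_spread:
            best, best_spread = combo, spread
    return float(np.mean(best))

# three subarrays, each with one true candidate near 30 deg plus an ambiguous one
sets = [[-50.2, 30.1], [30.0, 75.4], [29.8, -12.6]]
doa = pick_true_doa(sets)
```

In the paper's framework the candidates come from MUSIC on each sub-connected group; here they are hard-coded to keep the sketch self-contained.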
Citations: 0
Semi-Supervised Learning via Cross-Prediction-Powered Inference for Wireless Systems
IEEE Transactions on Machine Learning in Communications and Networking Pub Date : 2024-11-20 DOI: 10.1109/TMLCN.2024.3503543
Houssem Sifaou;Osvaldo Simeone
Abstract: In many wireless application scenarios, acquiring labeled data can be prohibitively costly, requiring complex optimization processes or measurement campaigns. Semi-supervised learning leverages unlabeled samples to augment the available dataset by assigning synthetic labels obtained via machine learning (ML)-based predictions. However, treating the synthetic labels as true labels may yield models that perform worse than models trained using only labeled data. Inspired by the recently developed prediction-powered inference (PPI) framework, this work investigates how to leverage the synthetic labels produced by an ML model while accounting for their inherent bias with respect to the true labels. To this end, we first review PPI and its recent extensions, namely tuned PPI and cross-prediction-powered inference (CPPI). Then, we introduce two novel variants of PPI. The first, referred to as tuned CPPI, provides CPPI with an additional degree of freedom in adapting to the quality of the ML-based labels. The second, meta-CPPI (MCPPI), extends tuned CPPI via the joint optimization of the ML labeling models and of the parameters of interest. Finally, we showcase two applications of PPI-based techniques in wireless systems, namely beam alignment based on channel knowledge maps in millimeter-wave systems and indoor localization based on received signal strength information. Simulation results show the advantages of PPI-based techniques over conventional approaches that rely solely on labeled data or that apply standard pseudo-labeling strategies from semi-supervised learning. Furthermore, the proposed tuned CPPI method achieves the best performance among all benchmark schemes, especially in the regime of limited labeled data.
Vol. 3, pp. 30-44. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10758826
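The core PPI idea referenced above can be sketched for the simplest case, mean estimation: the model's average prediction on unlabeled data is debiased by its residuals on the labeled data. `ppi_mean` is a hypothetical name, and this is the baseline PPI estimator, not the tuned CPPI/MCPPI variants introduced in the paper.

```python
import numpy as np

def ppi_mean(y_labeled, pred_labeled, pred_unlabeled):
    """Prediction-powered estimate of E[Y]: the model's average prediction
    on unlabeled data, plus a rectifier measuring its bias on labeled data."""
    y = np.asarray(y_labeled, dtype=float)
    f_lab = np.asarray(pred_labeled, dtype=float)
    f_unlab = np.asarray(pred_unlabeled, dtype=float)
    rectifier = np.mean(y - f_lab)          # estimated bias of the predictor
    return np.mean(f_unlab) + rectifier

rng = np.random.default_rng(0)
y = rng.normal(5.0, 1.0, 100)               # small labeled set, true mean 5
f_lab = y + 0.5                             # predictor with constant bias +0.5
f_unlab = rng.normal(5.5, 1.0, 10000)       # biased predictions, many unlabeled samples
est = ppi_mean(y, f_lab, f_unlab)           # rectifier cancels the +0.5 bias
```

Because the rectifier is estimated from the small labeled set while the prediction average uses the large unlabeled set, the estimator combines low bias with low variance, which is exactly the trade-off the paper's wireless applications exploit.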
Citations: 0
Reinforcement-Learning-Based Trajectory Design and Phase-Shift Control in UAV-Mounted-RIS Communications
IEEE Transactions on Machine Learning in Communications and Networking Pub Date : 2024-11-19 DOI: 10.1109/TMLCN.2024.3502576
Tianjiao Sun;Sixing Yin;Li Deng;F. Richard Yu
Abstract: By combining the advantages of unmanned aerial vehicles (UAVs) and reconfigurable intelligent surfaces (RISs), UAV-mounted-RIS systems are expected to enhance transmission performance in complicated wireless environments. In this paper, we focus on the design of a UAV-mounted-RIS system and investigate the joint optimization of the RIS's phase shifts and the UAV's trajectory. To cope with the practical issue that the user terminals' (UTs') locations and channel state information are inaccessible, a reinforcement learning (RL)-based solution is proposed to find the optimal policy within a finite number of "trial-and-error" steps. As the action space is continuous, the deep deterministic policy gradient (DDPG) algorithm is applied to train the RL model. However, the online interaction between the agent and the environment may lead to instability during training, and the assumption of (first-order) Markovian state transitions can be impractical in real-world problems. Therefore, the decision transformer (DT) algorithm is employed as an alternative for RL model training, to adapt to more general state-transition behavior. Experimental results demonstrate that the proposed RL solutions are highly efficient in model training, with performance close to the benchmark, which relies on conventional optimization algorithms with the UTs' locations and channel parameters explicitly known beforehand.
Vol. 3, pp. 163-175. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10758222
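One concrete DDPG ingredient mentioned above, the slowly tracking target networks that stabilize training, can be sketched as a Polyak (soft) update. The dictionary-of-arrays weight representation is an assumption made here for illustration, not the paper's implementation.

```python
import numpy as np

def soft_update(target, online, tau=0.005):
    """Polyak-averaged target-network update used by DDPG:
    target <- tau * online + (1 - tau) * target, per parameter tensor."""
    return {k: tau * online[k] + (1 - tau) * target[k] for k in target}

# toy actor weights: the target slowly tracks the online network
target = {"w": np.zeros(3)}
online = {"w": np.ones(3)}
target = soft_update(target, online, tau=0.1)
```

Keeping tau small means the bootstrapped critic targets change slowly, which is one of the stability mechanisms the abstract contrasts with the decision-transformer alternative.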
Citations: 0
A2PC: Augmented Advantage Pointer-Critic Model for Low Latency on Mobile IoT With Edge Computing
IEEE Transactions on Machine Learning in Communications and Networking Pub Date : 2024-11-18 DOI: 10.1109/TMLCN.2024.3501217
Rodrigo Carvalho;Faroq Al-Tam;Noélia Correia
Abstract: As a growing trend, edge computing infrastructures are being integrated with Internet of Things (IoT) systems to facilitate time-critical applications. These systems often process data whose usefulness is short-lived, so the edge becomes vital to the development of reactive IoT applications with real-time requirements. Although different architectural designs will always have advantages and disadvantages, mobile gateways appear particularly relevant for enabling this integration with the edge, especially in wide area networks with occasional data generation. In these scenarios, mobility planning is necessary, as aspects of the technology need to be aligned with the temporal needs of an application. The nature of this planning problem makes cutting-edge deep reinforcement learning (DRL) techniques useful for solving pertinent issues, such as dealing with multiple dimensions in the action space while aiming for optimal system performance. This article presents a novel scalable DRL model that incorporates a pointer network (Ptr-Net) and an actor-critic algorithm to handle complex action spaces. The model determines the gateway location and visit time simultaneously. Ultimately, the gateways are able to attain high-quality trajectory planning with reduced latency.
Vol. 3, pp. 1-16. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10755120
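The critic side of an advantage actor-critic model like the one described can be illustrated with one-step advantage estimates, the signal the actor is trained on. `advantage` is a hypothetical helper written for this sketch, not the A2PC implementation.

```python
def advantage(rewards, values, gamma=0.99):
    """One-step advantage estimates A_t = r_t + gamma * V(s_{t+1}) - V(s_t):
    how much better the taken action was than the critic's baseline."""
    return [r + gamma * v_next - v
            for r, v, v_next in zip(rewards, values[:-1], values[1:])]

# toy 2-step episode: critic values for 3 states (terminal value 0)
adv = advantage([1.0, 0.0], [0.5, 0.25, 0.0], gamma=1.0)
```

A positive advantage pushes the pointer network toward selecting that gateway location/visit-time action more often; a negative one suppresses it.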
Citations: 0
Optimizing Power Allocation in HAPs Assisted LEO Satellite Communications
IEEE Transactions on Machine Learning in Communications and Networking Pub Date : 2024-11-04 DOI: 10.1109/TMLCN.2024.3491054
Zain Ali;Zouheir Rezki;Mohamed-Slim Alouini
Abstract: The next generation of communication networks will require robust connectivity for millions of ground devices, such as sensors or mobile devices in remote or disaster-stricken areas. Non-terrestrial network (NTN) nodes can play a vital role in fulfilling these requirements. Specifically, low-Earth-orbit (LEO) satellites have emerged as an efficient and cost-effective way to connect devices over long distances through space. However, due to their low power and environmental limitations, LEO satellites may require assistance from aerial devices such as high-altitude platforms (HAPs) or unmanned aerial vehicles to forward their data to the ground devices. Moreover, the limited power available at the NTN nodes makes it crucial to utilize the available resources efficiently. In this paper, we present a model in which a LEO satellite communicates with multiple ground devices with the help of HAPs that relay the LEO data. We formulate the problem of optimizing power allocation at the LEO satellite and all the HAPs to maximize the sum rate of the system. To exploit the benefits of free-space optical (FSO) communication in satellites, the LEO satellite transmits data to the HAPs over FSO links, and the HAPs broadcast it to the connected ground devices over radio-frequency channels. We transform the complex non-convex problem into a convex form and compute the Karush-Kuhn-Tucker (KKT) conditions-based solution for power allocation at the LEO satellite and the HAPs. Then, to reduce computation time, we propose a soft actor-critic (SAC) reinforcement learning (RL) framework that provides a solution in significantly less time while delivering performance comparable to the KKT scheme. Our simulation results demonstrate that the proposed solutions provide excellent performance and scale to any number of HAPs and ground devices.
Vol. 2, pp. 1661-1677. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10741546
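For a single transmitter over parallel channels, a KKT-based power allocation of the kind referenced above reduces to classic water-filling. The sketch below is illustrative only and is not the paper's multi-node LEO/HAP formulation; the water level is found by bisection.

```python
import numpy as np

def water_filling(gains, p_total, tol=1e-9):
    """Allocate a power budget across parallel channels with gains g_i to
    maximize sum log2(1 + g_i * p_i). The KKT conditions give the
    water-filling form p_i = max(0, mu - 1/g_i); bisect on the level mu."""
    g = np.asarray(gains, dtype=float)
    lo, hi = 0.0, p_total + 1.0 / g.min()   # bracket for the water level
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - 1.0 / g).sum() > p_total:
            hi = mu                         # too much water: lower the level
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / g)

# three channels of decreasing quality; the weakest gets no power here
p = water_filling([1.0, 0.5, 0.25], p_total=3.0)
```

Stronger channels sit "deeper" below the water level and therefore receive more power, and sufficiently weak channels are switched off entirely, which is the behavior the closed-form KKT solution encodes.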
Citations: 0
Attention-Aided Outdoor Localization in Commercial 5G NR Systems
IEEE Transactions on Machine Learning in Communications and Networking Pub Date : 2024-11-01 DOI: 10.1109/TMLCN.2024.3490496
Guoda Tian;Dino Pjanić;Xuesong Cai;Bo Bernhardsson;Fredrik Tufvesson
Abstract: The integration of high-precision cellular localization and machine learning (ML) is considered a cornerstone technique in future cellular navigation systems, offering unparalleled accuracy and functionality. This study focuses on localization based on uplink channel measurements in a fifth-generation (5G) new radio (NR) system. An attention-aided, ML-based single-snapshot localization pipeline is presented, consisting of several cascaded blocks: a signal processing block, an attention-aided block, and an uncertainty estimation block. Specifically, the signal processing block generates an impulse response beam matrix for all beams. The attention-aided block trains on the channel impulse responses using an attention-aided network, which captures the correlation between impulse responses for different beams. The uncertainty estimation block predicts the probability density function of the user equipment (UE) position, thereby also indicating the confidence level of the localization result. Two representative uncertainty estimation techniques, negative log-likelihood and regression-by-classification, are applied and compared. Furthermore, for dynamic measurements with multiple snapshots available, we combine the proposed pipeline with a Kalman filter to enhance localization accuracy. To evaluate our approach, we extract channel impulse responses for different beams from a commercial base station. The outdoor measurement campaign covers line-of-sight (LoS), non-line-of-sight (NLoS), and mixed LoS/NLoS scenarios. The results show that sub-meter localization accuracy can be achieved.
Vol. 2, pp. 1678-1692. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10741343
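The attention-aided block described above builds on scaled dot-product attention across beams. A minimal NumPy sketch, under the simplifying assumption of a single self-attention head over per-beam feature vectors (shapes and names are illustrative):

```python
import numpy as np

def scaled_dot_attention(Q, K, V):
    """Scaled dot-product attention: each query row attends to all beams,
    mixing value rows with softmax weights based on query-key similarity."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)       # row-wise softmax
    return w @ V, w

rng = np.random.default_rng(3)
beams = rng.normal(size=(8, 16))    # 8 beams, 16-dim impulse-response features
out, attn = scaled_dot_attention(beams, beams, beams)
```

Self-attention over the beam axis is what lets the network capture the cross-beam correlations the abstract mentions, rather than treating each beam's impulse response independently.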
Citations: 0
Information Bottleneck-Based Domain Adaptation for Hybrid Deep Learning in Scalable Network Slicing
IEEE Transactions on Machine Learning in Communications and Networking Pub Date : 2024-10-24 DOI: 10.1109/TMLCN.2024.3485520
Tianlun Hu;Qi Liao;Qiang Liu;Georg Carle
Abstract: Network slicing enables operators to efficiently support diverse applications on a shared infrastructure. However, the evolving complexity of networks, compounded by inter-cell interference, necessitates agile and adaptable resource management. While deep learning offers solutions for coping with this complexity, its adaptability to dynamic configurations remains limited. In this paper, we propose IDLA (integrated deep learning with the Lagrangian method), a novel hybrid algorithm that aims to enhance the scalability, flexibility, and robustness of slicing resource allocation by harnessing the high approximation capability of deep learning and the strong generalization of classical non-linear optimization methods. We then introduce a variational information bottleneck (VIB)-assisted domain adaptation (DA) approach to enhance IDLA's adaptability across diverse network environments and conditions. We propose pre-training a VIB-based Quality of Service (QoS) estimator using slice-specific inputs shared across all source-domain slices. Each target-domain slice can deploy this estimator to predict its QoS and optimize slice resource allocation with the IDLA algorithm. The VIB-based estimator is continuously fine-tuned with a mixture of samples from both the source and target domains until convergence. Evaluated on a multi-cell network with time-varying slice configurations, the VIB-enhanced IDLA algorithm outperforms baselines such as heuristic and deep reinforcement learning-based solutions, achieving twice the convergence speed and 16.52% higher asymptotic performance after slicing configuration changes. A transferability assessment demonstrates a 25.66% improvement in estimation accuracy with VIB, especially in scenarios with significant domain gaps, highlighting its robustness and effectiveness across diverse domains.
Vol. 2, pp. 1642-1660. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10734592
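The VIB objective underlying the QoS estimator above combines a task loss with a beta-weighted KL term that compresses the learned representation toward a standard normal prior. A sketch assuming a diagonal-Gaussian encoder; `vib_loss` is a hypothetical helper name, not the paper's code.

```python
import numpy as np

def vib_loss(task_loss, mu, log_var, beta=1e-3):
    """Variational information bottleneck objective: task loss plus a
    beta-weighted KL( N(mu, diag(exp(log_var))) || N(0, I) ), which
    penalizes information kept in the latent representation z."""
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
    return task_loss + beta * kl

# with mu = 0 and unit variance the KL term vanishes: loss == task loss
loss = vib_loss(0.7, mu=np.array([0.0, 0.0]), log_var=np.array([0.0, 0.0]))
```

Because the KL term discards nuisance variation in the inputs, the compressed representation transfers better across domains, which is the intuition behind the transferability gains reported above.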
Citations: 0
Polarization-Aware Channel State Prediction Using Phasor Quaternion Neural Networks
IEEE Transactions on Machine Learning in Communications and Networking Pub Date : 2024-10-23 DOI: 10.1109/TMLCN.2024.3485521
Anzhe Ye;Haotian Chen;Ryo Natsuaki;Akira Hirose
Abstract: The performance of a wireless communication system depends to a large extent on the wireless channel. Because of the multipath fading environment encountered during radio-wave propagation, channel prediction plays a vital role in enabling adaptive transmission for wireless communication systems. Predicting various channel characteristics with neural networks can help address more complex communication environments. However, achieving this typically requires the simultaneous use of multiple distinct neural models, which is unaffordable for mobile communications. It is therefore desirable for a simpler structure to predict multiple channel characteristics simultaneously. In this paper, we propose a fading-channel prediction method using phasor quaternion neural networks (PQNNs) to predict the polarization states, with phase information included to enhance the channel compensation ability. We evaluate the performance of the proposed PQNN method in two different fading situations in an actual environment and find that the proposed scheme provides 2.8 dB and 4.0 dB improvements at a bit error rate (BER) of $10^{-4}$ in light and severe fading situations, respectively. This work also reveals that by treating polarization information and phase information as a single entity, the model can exploit their physical correlation to achieve improved performance.
Vol. 2, pp. 1628-1641. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10731896
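Quaternion networks such as the PQNNs above rely on the Hamilton product, which keeps the four components of each value (here standing in for coupled polarization/phase quantities) bound together through a shared algebra rather than treated as four independent reals. A minimal sketch, not the paper's PQNN layer:

```python
import numpy as np

def hamilton(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) tuples; this
    non-commutative product is the basic operation of quaternion networks."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

ij = hamilton([0, 1, 0, 0], [0, 0, 1, 0])   # the unit relation i * j = k
```

Because one quaternion weight mixes all four input components at once, the network can exploit their physical correlation, which is the "single entity" treatment the abstract credits for the BER gains.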
Citations: 0
TWIRLD: Transformer Generated Terahertz Waveform for Improved Radio Link Distance
IEEE Transactions on Machine Learning in Communications and Networking Pub Date : 2024-10-17 DOI: 10.1109/TMLCN.2024.3483111
Shuvam Chakraborty;Claire Parisi;Dola Saha;Ngwe Thawdar
Abstract: Terahertz (THz) band communication is envisioned as one of the leading technologies to meet the exponentially growing data rate requirements of emerging and future wireless communication networks. Utilizing the contiguous bandwidth available at THz frequencies requires a transceiver design tailored to the issues present at these frequencies, such as strong propagation and absorption loss, small-scale fading (e.g., scattering, reflection, refraction), and hardware non-linearity. In prior work, multicarrier waveforms such as orthogonal frequency division multiplexing (OFDM) have been shown to be effective against some of these issues. However, OFDM introduces a drawback in the form of a high peak-to-average power ratio (PAPR), which, compounded with strong propagation and absorption loss and the high noise power due to the large bandwidth at THz and sub-THz frequencies, severely limits link distances and, in turn, capacity, preventing efficient bandwidth usage. In this work, we propose TWIRLD, a deep learning (DL)-based joint optimization method, modeled and implemented as components of an end-to-end transceiver chain. TWIRLD performs a symbol remapping at baseband of OFDM signals, which increases average transmit power while also optimizing the bit error rate (BER). We provide theoretical analysis, statistical equivalence of TWIRLD to the ideal receiver, and comprehensive complexity and footprint estimates. We validate TWIRLD in simulation, showing link distance improvements of up to 91%, and compare the results with legacy and state-of-the-art methods and their enhanced versions. Finally, TWIRLD is validated with over-the-air (OTA) communication using a state-of-the-art testbed at 140 GHz with bandwidths up to 5 GHz, where we observe an improvement of up to 79% in link distance, accounting for practical channel and other transmission losses.
Vol. 2, pp. 1595-1614. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10720922
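The PAPR drawback of OFDM discussed above can be quantified directly from one symbol's subcarrier values. A sketch with frequency-domain oversampling to better capture the continuous-time peak; the subcarrier count and oversampling factor are illustrative choices, not the paper's parameters.

```python
import numpy as np

def papr_db(freq_symbols, oversample=4):
    """PAPR of one OFDM symbol: zero-pad the subcarrier symbols in the
    middle of the spectrum (oversampling), IFFT to the time domain, and
    compare peak instantaneous power to average power, in dB."""
    n = len(freq_symbols)
    padded = np.zeros(n * oversample, dtype=complex)
    padded[:n // 2] = freq_symbols[:n // 2]          # positive frequencies
    padded[-(n - n // 2):] = freq_symbols[n // 2:]   # negative frequencies
    x = np.fft.ifft(padded)
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

rng = np.random.default_rng(1)
qpsk = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
papr = papr_db(qpsk)   # typically on the order of 10 dB for 64 subcarriers
```

The worst case is when all subcarriers add coherently (identical symbols), giving a PAPR of 10*log10(N); remapping schemes like the one proposed above aim to keep transmitted symbols far from such coherent alignments.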
Citations: 0
Recursive GNNs for Learning Precoding Policies With Size-Generalizability
IEEE Transactions on Machine Learning in Communications and Networking Pub Date : 2024-10-14 DOI: 10.1109/TMLCN.2024.3480044
Jia Guo;Chenyang Yang
Abstract: Graph neural networks (GNNs) have shown promise in optimizing power allocation and link scheduling, with good size generalizability and low training complexity. These merits are important for learning wireless policies in dynamic environments, and they stem partly from matching the permutation equivariance (PE) properties of the GNNs to the policies to be learned. Nonetheless, it has been noticed in the literature that merely satisfying the PE property of a precoding policy in multi-antenna systems does not ensure that a GNN for learning precoding generalizes to unseen problem scales. Incorporating models into GNNs helps improve size generalizability, but this is only applicable to specific problems, settings, and algorithms. In this paper, we propose a framework of size-generalizable GNNs for learning precoding policies that is purely data-driven and can learn wireless policies including, but not limited to, baseband and hybrid precoding in multi-user multi-antenna systems. To this end, we first identify a special structure in each iteration of several numerical algorithms for optimizing precoding, from which we derive the key characteristics of a GNN that affect its size generalizability. We then design size-generalizable GNNs that have these key characteristics and satisfy the PE properties of precoding policies in a recursive manner. Simulation results show that the proposed GNNs generalize well across the number of users when learning baseband and hybrid precoding policies, and require far fewer samples than existing GNNs and shorter inference time than numerical algorithms to achieve the same performance.
Vol. 2, pp. 1558-1579. PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10716720
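The PE property central to the argument above can be checked numerically on a toy shared-weight layer: permuting the users permutes the outputs the same way. The layer below is a generic illustration of PE, not the paper's recursive GNN.

```python
import numpy as np

def pe_layer(H, W_self, W_agg):
    """One permutation-equivariant layer over users: every user's feature
    vector goes through the same weights, plus an aggregate of the others,
    so no user index is treated specially."""
    agg = H.sum(axis=0, keepdims=True) - H      # sum over the other users
    return np.tanh(H @ W_self + agg @ W_agg)

rng = np.random.default_rng(2)
H = rng.normal(size=(4, 8))                     # 4 users, 8 channel features
W1, W2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
perm = np.array([2, 0, 3, 1])
out = pe_layer(H, W1, W2)
out_perm = pe_layer(H[perm], W1, W2)
equivariant = np.allclose(out[perm], out_perm)  # permuted input -> permuted output
```

Because the weights are shared across users, the layer is defined for any number of user rows, which is the prerequisite for the size generalizability the paper studies (though, as the abstract notes, PE alone is not sufficient).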
Citations: 0