{"title":"Self-Supervised Contrastive Learning for Joint Active and Passive Beamforming in RIS-Assisted MU-MIMO Systems","authors":"Zhizhou He;Fabien Héliot;Yi Ma","doi":"10.1109/TMLCN.2024.3515913","DOIUrl":null,"url":null,"abstract":"Reconfigurable Intelligent Surfaces (RIS) can enhance system performance at the cost of increased complexity in multi-user MIMO systems. The beamforming options scale with the number of antennas at the base station/RIS. Existing methods for solving this problem tend to use computationally intensive iterative methods that are non-scalable for large RIS-aided MIMO systems. We propose here a novel self-supervised contrastive learning neural network (NN) architecture to optimize the sum spectral efficiency through joint active and passive beamforming design in multi-user RIS-aided MIMO systems. Our scheme utilizes contrastive learning to capture the channel features from augmented channel data and then can be trained to perform beamforming with only 1% of labeled data. The labels are derived through a closed-form optimization algorithm, leveraging a sequential fractional programming approach. Leveraging the proposed self-supervised design helps to greatly reduce the computational complexity during the training phase. Moreover, our proposed model can operate under various noise levels by using data augmentation methods while maintaining a robust out-of-distribution performance under various propagation environments and different signal-to-noise ratios (SNR)s. During training, our proposed network only needs 10% of labeled data to converge when compared to supervised learning. Our trained NN can then achieve performance which is only \n<inline-formula> <tex-math>$~7\\%$ </tex-math></inline-formula>\n and \n<inline-formula> <tex-math>$~2.5\\%$ </tex-math></inline-formula>\n away from mathematical upper bound and fully supervised learning, respectively, with far less computational complexity.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"3 ","pages":"147-162"},"PeriodicalIF":0.0000,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10793234","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Machine Learning in Communications and Networking","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10793234/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Reconfigurable Intelligent Surfaces (RIS) can enhance system performance at the cost of increased design complexity in multi-user MIMO systems, since the number of beamforming options scales with the number of antennas at the base station and the number of RIS elements. Existing approaches to this problem tend to rely on computationally intensive iterative methods that do not scale to large RIS-aided MIMO systems. We propose here a novel self-supervised contrastive learning neural network (NN) architecture to optimize the sum spectral efficiency through joint active and passive beamforming design in multi-user RIS-aided MIMO systems. Our scheme uses contrastive learning to capture channel features from augmented channel data and can then be trained to perform beamforming with only 1% of labeled data. The labels are derived through a closed-form optimization algorithm based on a sequential fractional programming approach. The proposed self-supervised design greatly reduces the computational complexity of the training phase. Moreover, by using data augmentation, our model can operate under various noise levels while maintaining robust out-of-distribution performance across different propagation environments and signal-to-noise ratios (SNRs). During training, our network needs only 10% of the labeled data required by supervised learning to converge. Our trained NN then achieves performance that is only $\sim 7\%$ and $\sim 2.5\%$ away from the mathematical upper bound and fully supervised learning, respectively, with far less computational complexity.
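The quantity being optimized is the sum spectral efficiency of the RIS-aided downlink under joint active (precoder) and passive (RIS phase-shift) beamforming. As a rough illustration of that objective, the sketch below builds an effective cascaded channel of the form H_eff = H_d + H_r diag(theta) G and evaluates the sum of log2(1 + SINR_k) over users. Everything here is an assumption for illustration: single-antenna users, Rayleigh channels, a simple matched-filter precoder, and all dimensions and symbol names (G, H_r, H_d, theta); the abstract does not specify the paper's system model or baseline precoder.

```python
import numpy as np

# Hypothetical dimensions (not from the paper): M BS antennas,
# N RIS elements, K single-antenna users.
M, N, K = 8, 32, 4
rng = np.random.default_rng(0)

# Assumed Rayleigh channels: BS->RIS (G), RIS->users (H_r), direct BS->users (H_d).
G = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
H_r = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
H_d = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Passive beamforming: unit-modulus RIS phase shifts (random here, optimized in the paper).
theta = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
# Effective downlink channel per user: h_k = h_d,k + h_r,k diag(theta) G.
H_eff = H_d + H_r @ np.diag(theta) @ G          # (K, M)

# Active beamforming: matched-filter precoders, normalized to unit total power.
W = H_eff.conj().T                               # (M, K)
W /= np.linalg.norm(W)

def sum_spectral_efficiency(H, W, sigma2):
    """Sum of log2(1 + SINR_k) over users for precoder W."""
    S = np.abs(H @ W) ** 2                       # S[k, j] = |h_k w_j|^2
    signal = np.diag(S)
    interference = S.sum(axis=1) - signal
    sinr = signal / (interference + sigma2)
    return np.sum(np.log2(1.0 + sinr))

print(f"sum SE: {sum_spectral_efficiency(H_eff, W, 0.1):.2f} bit/s/Hz")
```

In the paper, theta and W are the outputs of the proposed NN rather than the random/matched-filter placeholders used above; the function here only shows how a candidate beamforming pair would be scored.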
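The abstract describes contrastive pretraining on noise-augmented channel data, followed by fine-tuning with a small labeled fraction. Below is a minimal sketch of that idea using a SimCLR-style NT-Xent loss, where two noise-injected views of the same channel form a positive pair. The encoder architecture, augmentation SNRs, batch size, and hyperparameters are invented for illustration and are not the paper's actual design.

```python
import torch
import torch.nn.functional as F

# Hypothetical encoder (illustrative, not the paper's architecture): maps a
# flattened real/imaginary channel representation to a contrastive embedding.
K, M, DIM = 4, 8, 64
encoder = torch.nn.Sequential(
    torch.nn.Linear(2 * K * M, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, DIM),
)

def augment(H, snr_db):
    """Noise-injection augmentation: a second 'view' of the same channel."""
    noise_power = H.pow(2).mean() / (10 ** (snr_db / 10))
    return H + noise_power.sqrt() * torch.randn_like(H)

def nt_xent(z1, z2, tau=0.1):
    """NT-Xent (SimCLR-style) loss: matched augmented views are positives."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, DIM)
    sim = z @ z.t() / tau                                # cosine similarities
    sim.fill_diagonal_(float('-inf'))                    # exclude self-pairs
    B = z1.shape[0]
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets)

opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for step in range(200):                 # toy pretraining loop on random data
    H = torch.randn(32, 2 * K * M)      # stand-in for real/imag channel parts
    z1, z2 = encoder(augment(H, 10.0)), encoder(augment(H, 20.0))
    loss = nt_xent(z1, z2)
    opt.zero_grad(); loss.backward(); opt.step()
```

After such pretraining, the encoder would be fine-tuned on the small labeled subset, with labels produced by the sequential-fractional-programming solver mentioned in the abstract; that supervised head and solver are not sketched here.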