Self-Supervised Contrastive Learning for Joint Active and Passive Beamforming in RIS-Assisted MU-MIMO Systems

Zhizhou He;Fabien Héliot;Yi Ma
IEEE Transactions on Machine Learning in Communications and Networking, vol. 3, pp. 147-162
DOI: 10.1109/TMLCN.2024.3515913
Published: 2024-12-11
Available at: https://ieeexplore.ieee.org/document/10793234/
Citations: 0

Abstract

Reconfigurable Intelligent Surfaces (RIS) can enhance system performance at the cost of increased complexity in multi-user MIMO systems, since the number of beamforming options scales with the number of antennas at the base station/RIS. Existing methods for solving this problem tend to rely on computationally intensive iterative algorithms that do not scale to large RIS-aided MIMO systems. We propose here a novel self-supervised contrastive learning neural network (NN) architecture to optimize the sum spectral efficiency through joint active and passive beamforming design in multi-user RIS-aided MIMO systems. Our scheme uses contrastive learning to capture channel features from augmented channel data, and can then be trained to perform beamforming with only 1% of labeled data. The labels are derived through a closed-form optimization algorithm that leverages a sequential fractional programming approach. The proposed self-supervised design greatly reduces the computational complexity of the training phase. Moreover, by using data augmentation, our model can operate under various noise levels while maintaining robust out-of-distribution performance across different propagation environments and signal-to-noise ratios (SNRs). During training, our network needs only 10% of the labeled data required by supervised learning to converge. The trained NN then achieves performance that is only 7% and 2.5% away from the mathematical upper bound and fully supervised learning, respectively, with far less computational complexity.
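The self-supervised pretraining step described above can be illustrated with a generic contrastive (NT-Xent/InfoNCE) loss over two noise-augmented views of the same channel samples. This is only a minimal NumPy sketch of the contrastive-learning principle, not the paper's actual architecture; the augmentation strength, embedding dimension, and temperature below are illustrative assumptions.

```python
import numpy as np

def ntxent_loss(z1, z2, tau=0.5):
    """NT-Xent (InfoNCE) contrastive loss over two augmented views.

    z1, z2: (N, d) embeddings of two augmented views of the same N
    channel samples; the positive pairs are (z1[i], z2[i]).
    """
    z = np.concatenate([z1, z2], axis=0)                # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # unit-normalize
    sim = z @ z.T / tau                                 # cosine similarities
    np.fill_diagonal(sim, -np.inf)                      # exclude self-pairs
    n = z1.shape[0]
    # index of the positive partner for each row: i <-> i + n
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()

rng = np.random.default_rng(0)
# Toy "channel features": stand-ins for (flattened) channel realizations
h = rng.normal(size=(8, 16))
view1 = h + 0.05 * rng.normal(size=h.shape)  # noise augmentation
view2 = h + 0.05 * rng.normal(size=h.shape)
print(ntxent_loss(view1, view2))
```

Matched augmented views yield a lower loss than unrelated samples, which is exactly the signal the encoder is trained on before the small labeled fine-tuning stage.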