FBCNet: Fusion Basis Complex-Valued Neural Network for CSI Compression in Massive MIMO Networks

C Kiruthika; E. S. Gopi
{"title":"大规模MIMO网络中CSI压缩的融合基复值神经网络","authors":"C Kiruthika;E. S. Gopi","doi":"10.1109/LNET.2024.3512658","DOIUrl":null,"url":null,"abstract":"Deep learning-based CSI compression has shown its efficacy for massive multiple-input multiple-output networks, and on the other hand, federated learning (FL) excels the conventional centralized learning by avoiding privacy leakage issues and training communication overhead. The realization of an FL-based CSI feedback network consumes more computational resources and time, and the continuous reporting of local models to the base station results in overhead. To overcome these issues, in this letter, we propose a FBCNet. The proposed FBCNet combines the advantages of the novel fusion basis (FB) technique and the fully connected complex-valued neural network (CNet) based on gradient (G) and non-gradient (NG) approaches. The experimental results show the advantages of both CNet and FB individually over the existing techniques. FBCNet, the combination of both FB and CNet, outperforms the existing federated averaging-based CNet (FedCNet) with improvement in reconstruction performance, less complexity, reduced training time, and low transmission overhead. For the distributed array-line of sight topology at the compression ratio (CR) of 20:1, it is noted that the NMSE and the cosine similarity of FedCNet-G are −8.2837 dB, 0.9262; FedCNet-NG are −3.5291 dB, 0.8452; proposed FB are −26.8621, 0.9653. Also the NMSE and the cosine similarity of the proposed FBCNet-G are −19.7521, 0.9307; FBCNet-NG are −24.0442, 0.9539 at a high CR of 64:1.","PeriodicalId":100628,"journal":{"name":"IEEE Networking Letters","volume":"6 4","pages":"262-266"},"PeriodicalIF":0.0000,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"FBCNet: Fusion Basis Complex-Valued Neural Network for CSI Compression in Massive MIMO Networks\",\"authors\":\"C Kiruthika;E. S. Gopi\",\"doi\":\"10.1109/LNET.2024.3512658\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep learning-based CSI compression has shown its efficacy for massive multiple-input multiple-output networks, and on the other hand, federated learning (FL) excels the conventional centralized learning by avoiding privacy leakage issues and training communication overhead. The realization of an FL-based CSI feedback network consumes more computational resources and time, and the continuous reporting of local models to the base station results in overhead. To overcome these issues, in this letter, we propose a FBCNet. The proposed FBCNet combines the advantages of the novel fusion basis (FB) technique and the fully connected complex-valued neural network (CNet) based on gradient (G) and non-gradient (NG) approaches. The experimental results show the advantages of both CNet and FB individually over the existing techniques. FBCNet, the combination of both FB and CNet, outperforms the existing federated averaging-based CNet (FedCNet) with improvement in reconstruction performance, less complexity, reduced training time, and low transmission overhead. For the distributed array-line of sight topology at the compression ratio (CR) of 20:1, it is noted that the NMSE and the cosine similarity of FedCNet-G are −8.2837 dB, 0.9262; FedCNet-NG are −3.5291 dB, 0.8452; proposed FB are −26.8621, 0.9653. 
Also the NMSE and the cosine similarity of the proposed FBCNet-G are −19.7521, 0.9307; FBCNet-NG are −24.0442, 0.9539 at a high CR of 64:1.\",\"PeriodicalId\":100628,\"journal\":{\"name\":\"IEEE Networking Letters\",\"volume\":\"6 4\",\"pages\":\"262-266\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-12-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Networking Letters\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10783048/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Networking Letters","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10783048/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Deep learning-based CSI compression has shown its efficacy for massive multiple-input multiple-output networks; federated learning (FL), in turn, outperforms conventional centralized learning by avoiding privacy leakage issues and training communication overhead. However, realizing an FL-based CSI feedback network consumes more computational resources and time, and the continuous reporting of local models to the base station adds overhead. To overcome these issues, in this letter we propose FBCNet, which combines the advantages of the novel fusion basis (FB) technique and the fully connected complex-valued neural network (CNet) based on gradient (G) and non-gradient (NG) approaches. The experimental results show the advantages of CNet and FB individually over existing techniques. FBCNet, the combination of FB and CNet, outperforms the existing federated averaging-based CNet (FedCNet) with improved reconstruction performance, lower complexity, reduced training time, and low transmission overhead. For the distributed array-line of sight topology at a compression ratio (CR) of 20:1, the NMSE and cosine similarity are −8.2837 dB and 0.9262 for FedCNet-G, −3.5291 dB and 0.8452 for FedCNet-NG, and −26.8621 dB and 0.9653 for the proposed FB. At a high CR of 64:1, the NMSE and cosine similarity of the proposed FBCNet-G are −19.7521 dB and 0.9307, and those of FBCNet-NG are −24.0442 dB and 0.9539.
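
The following is a minimal Python sketch (not the authors' code) of the two reconstruction metrics quoted in the abstract: NMSE in dB and cosine similarity between the true and reconstructed complex CSI matrices. The array shapes, the noise level, and the codeword size implied by the 20:1 compression ratio are illustrative assumptions, not values from the paper.

    import numpy as np

    def nmse_db(h_true, h_rec):
        """Normalized mean squared error in dB, averaged over samples."""
        err = np.sum(np.abs(h_true - h_rec) ** 2, axis=1)
        ref = np.sum(np.abs(h_true) ** 2, axis=1)
        return float(10 * np.log10(np.mean(err / ref)))

    def cosine_similarity(h_true, h_rec):
        """Mean magnitude of the normalized complex inner product per sample."""
        num = np.abs(np.sum(np.conj(h_true) * h_rec, axis=1))
        den = np.linalg.norm(h_true, axis=1) * np.linalg.norm(h_rec, axis=1)
        return float(np.mean(num / den))

    # Toy data: 100 vectorized complex CSI samples of dimension 2048 (assumed shape).
    # At a CR of 20:1 the feedback codeword would hold roughly 2048 / 20 ≈ 102 coefficients.
    rng = np.random.default_rng(0)
    h = rng.standard_normal((100, 2048)) + 1j * rng.standard_normal((100, 2048))
    h_hat = h + 0.05 * (rng.standard_normal(h.shape) + 1j * rng.standard_normal(h.shape))
    print(f"NMSE: {nmse_db(h, h_hat):.2f} dB, cosine similarity: {cosine_similarity(h, h_hat):.4f}")

With these definitions, a more negative NMSE and a cosine similarity closer to 1 both indicate better CSI reconstruction, which is how the figures quoted in the abstract should be read.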