Automatic Modulation Recognition Based on a Lightweight Transformer Network Under Strong Interference Conditions

Impact Factor: 2.0 · JCR Q3 · Computer Science, Interdisciplinary Applications
Dingrui Liu, Pengli Liu, Jialin Chen, Ke Xiong, Bo Jing, Dongsheng Liao
{"title":"强干扰条件下基于轻量变压器网络的调制自动识别","authors":"Dingrui Liu,&nbsp;Pengli Liu,&nbsp;Jialin Chen,&nbsp;Ke Xiong,&nbsp;Bo Jing,&nbsp;Dongsheng Liao","doi":"10.1002/eng2.70405","DOIUrl":null,"url":null,"abstract":"<p>In complex environments with strong electromagnetic interference, which are characterized by high noise levels and low signal-to-noise ratios (SNRs), deep learning improves the efficiency and accuracy of automatic modulation recognition (AMR) in electronic reconnaissance operations. The deep-learning architecture Transformer, a prominent neural network model, captures global feature dependencies in parallel through a multi-head attention mechanism. This improves both the receptive field and the flexibility of the network. However, Transformer fails to effectively model local, subtle features, and its high computational complexity creates challenges in mobile deployment. To address these limitations under conditions of heavy interference, this paper proposes a mobile convolution self-attention network (MCSAN), which utilizes multiple inverted residual blocks to extract local signal features, reducing the spatial dimensions while increasing the channel dimensions of the feature map. Additionally, a novel global window self-attention (GWSA) block is inserted after different inverted residual blocks to extract global signal features. GWSA reduces computational complexity and achieves higher accuracy than conventional multi-head attention mechanisms. In this paper, we evaluate MCSAN under conditions of severe interference using the RML2016.10a dataset at SNRs as low as −20 dB. Additionally, we analyze the model's architecture, hyperparameters, and confusion matrices. Finally, we compare this model to existing deep learning-based AMR models. Our experimental results demonstrate that MCSAN effectively improves recognition accuracy while requiring considerably fewer computational resources and parameters than current Transformer-based AMR approaches. Notably, MCSAN achieves a recognition accuracy of 53.21% even at an SNR of −20 dB.</p>","PeriodicalId":72922,"journal":{"name":"Engineering reports : open access","volume":"7 9","pages":""},"PeriodicalIF":2.0000,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/eng2.70405","citationCount":"0","resultStr":"{\"title\":\"Automatic Modulation Recognition Based on a Lightweight Transformer Network Under Strong Interference Conditions\",\"authors\":\"Dingrui Liu,&nbsp;Pengli Liu,&nbsp;Jialin Chen,&nbsp;Ke Xiong,&nbsp;Bo Jing,&nbsp;Dongsheng Liao\",\"doi\":\"10.1002/eng2.70405\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>In complex environments with strong electromagnetic interference, which are characterized by high noise levels and low signal-to-noise ratios (SNRs), deep learning improves the efficiency and accuracy of automatic modulation recognition (AMR) in electronic reconnaissance operations. The deep-learning architecture Transformer, a prominent neural network model, captures global feature dependencies in parallel through a multi-head attention mechanism. This improves both the receptive field and the flexibility of the network. However, Transformer fails to effectively model local, subtle features, and its high computational complexity creates challenges in mobile deployment. 
To address these limitations under conditions of heavy interference, this paper proposes a mobile convolution self-attention network (MCSAN), which utilizes multiple inverted residual blocks to extract local signal features, reducing the spatial dimensions while increasing the channel dimensions of the feature map. Additionally, a novel global window self-attention (GWSA) block is inserted after different inverted residual blocks to extract global signal features. GWSA reduces computational complexity and achieves higher accuracy than conventional multi-head attention mechanisms. In this paper, we evaluate MCSAN under conditions of severe interference using the RML2016.10a dataset at SNRs as low as −20 dB. Additionally, we analyze the model's architecture, hyperparameters, and confusion matrices. Finally, we compare this model to existing deep learning-based AMR models. Our experimental results demonstrate that MCSAN effectively improves recognition accuracy while requiring considerably fewer computational resources and parameters than current Transformer-based AMR approaches. Notably, MCSAN achieves a recognition accuracy of 53.21% even at an SNR of −20 dB.</p>\",\"PeriodicalId\":72922,\"journal\":{\"name\":\"Engineering reports : open access\",\"volume\":\"7 9\",\"pages\":\"\"},\"PeriodicalIF\":2.0000,\"publicationDate\":\"2025-09-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/eng2.70405\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Engineering reports : open access\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/eng2.70405\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering reports : open access","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/eng2.70405","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

In complex environments with strong electromagnetic interference, characterized by high noise levels and low signal-to-noise ratios (SNRs), deep learning improves the efficiency and accuracy of automatic modulation recognition (AMR) in electronic reconnaissance operations. The Transformer, a prominent deep-learning architecture, captures global feature dependencies in parallel through its multi-head attention mechanism, which enlarges the receptive field and increases the flexibility of the network. However, the Transformer fails to model local, subtle features effectively, and its high computational complexity hinders mobile deployment. To address these limitations under heavy interference, this paper proposes a mobile convolution self-attention network (MCSAN), which uses multiple inverted residual blocks to extract local signal features, reducing the spatial dimensions of the feature map while increasing its channel dimensions. Additionally, a novel global window self-attention (GWSA) block is inserted after different inverted residual blocks to extract global signal features; GWSA reduces computational complexity and achieves higher accuracy than the conventional multi-head attention mechanism. We evaluate MCSAN under severe interference using the RML2016.10a dataset at SNRs as low as −20 dB, analyze the model's architecture, hyperparameters, and confusion matrices, and compare it with existing deep-learning-based AMR models. The experimental results demonstrate that MCSAN improves recognition accuracy while requiring considerably fewer computational resources and parameters than current Transformer-based AMR approaches. Notably, MCSAN achieves a recognition accuracy of 53.21% even at an SNR of −20 dB.
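To make the pipeline described in the abstract concrete, below is a minimal PyTorch sketch of an MCSAN-style model: inverted residual blocks that shrink the spatial grid while widening the channels, interleaved with a windowed-attention block. The block internals, expansion ratio, strides, head counts, and the pooling-based GWSA mechanism are illustrative assumptions on our part, not the authors' published implementation.

```python
# Minimal PyTorch sketch of an MCSAN-style pipeline (illustrative assumptions only).
import torch
import torch.nn as nn


class InvertedResidual(nn.Module):
    """MobileNetV2-style inverted residual block: expand -> depthwise -> project.

    With stride 2 it halves the time resolution while the projection widens the
    channels, matching the abstract's description of local feature extraction.
    The expansion ratio and stride are assumed values.
    """

    def __init__(self, in_ch, out_ch, stride=2, expand=4):
        super().__init__()
        hidden = in_ch * expand
        self.use_skip = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False),              # pointwise expand
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride=stride, padding=1,
                      groups=hidden, bias=False),                 # depthwise conv
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, out_ch, 1, bias=False),             # pointwise project
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        return x + self.block(x) if self.use_skip else self.block(x)


class GlobalWindowSelfAttention(nn.Module):
    """Hypothetical GWSA block: full-resolution queries attend to a pooled
    ("windowed") key/value grid, cutting the quadratic attention cost."""

    def __init__(self, dim, num_heads=4, window=2):
        super().__init__()
        self.pool = nn.AvgPool2d((1, window))                     # pool along the time axis
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                                         # x: (B, C, H, W)
        b, c, h, w = x.shape
        q = self.norm(x.flatten(2).transpose(1, 2))               # (B, H*W, C) queries
        kv = self.norm(self.pool(x).flatten(2).transpose(1, 2))   # pooled keys/values
        out, _ = self.attn(q, kv, kv)
        return x + out.transpose(1, 2).reshape(b, c, h, w)        # residual connection


# Toy forward pass on an RML2016.10a-style I/Q frame (2 x 128), treated as a
# one-channel image; 11 is the number of modulation classes in that dataset.
net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1),
    InvertedResidual(16, 32),
    GlobalWindowSelfAttention(32),
    InvertedResidual(32, 64),
    GlobalWindowSelfAttention(64),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 11),
)
x = torch.randn(4, 1, 2, 128)                                     # batch of I/Q frames
print(net(x).shape)                                               # torch.Size([4, 11])
```

Downsampling the key/value grid before attention is one plausible reading of how a "global window" mechanism could trade a little resolution for a large reduction in the quadratic cost of conventional multi-head attention; the paper's actual GWSA design may differ.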

Source journal: Engineering Reports · CiteScore 5.10 · Self-citation rate: 0.00% · Review time: 19 weeks