Automatic Modulation Recognition Based on a Lightweight Transformer Network Under Strong Interference Conditions

Dingrui Liu, Pengli Liu, Jialin Chen, Ke Xiong, Bo Jing, Dongsheng Liao

Engineering Reports, vol. 7, no. 9. Published 2025-09-23. DOI: 10.1002/eng2.70405
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/eng2.70405
In complex environments with strong electromagnetic interference, characterized by high noise levels and low signal-to-noise ratios (SNRs), deep learning improves the efficiency and accuracy of automatic modulation recognition (AMR) in electronic reconnaissance operations. The Transformer, a prominent deep-learning architecture, captures global feature dependencies in parallel through its multi-head attention mechanism, which enlarges the receptive field and increases the flexibility of the network. However, the Transformer struggles to model subtle local features, and its high computational complexity makes mobile deployment difficult. To address these limitations under heavy interference, this paper proposes a mobile convolution self-attention network (MCSAN), which uses multiple inverted residual blocks to extract local signal features, reducing the spatial dimensions of the feature map while increasing its channel dimensions. In addition, a novel global window self-attention (GWSA) block is inserted after several of the inverted residual blocks to extract global signal features; GWSA reduces computational complexity and achieves higher accuracy than conventional multi-head attention. We evaluate MCSAN under severe interference on the RML2016.10a dataset at SNRs as low as −20 dB, analyze the model's architecture, hyperparameters, and confusion matrices, and compare it with existing deep-learning-based AMR models. The experimental results demonstrate that MCSAN improves recognition accuracy while requiring considerably fewer computational resources and parameters than current Transformer-based AMR approaches. Notably, MCSAN achieves a recognition accuracy of 53.21% even at an SNR of −20 dB.
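Since no implementation accompanies this abstract, the PyTorch sketch below is only a minimal illustration of the two ingredients the abstract names: an inverted residual block that halves the temporal length of an IQ sequence while widening its channels, and a window-restricted self-attention block standing in for GWSA. The block counts, expansion ratio, window size, and head count are assumptions rather than the authors' configuration; only the input shape (2 × 128 IQ samples) and the 11 classes reflect the RML2016.10a dataset.

```python
# Illustrative sketch only: the paper's code is not published here, so the
# layer sizes, expansion ratio, and window size below are assumptions.
import torch
import torch.nn as nn


class InvertedResidual(nn.Module):
    """MobileNetV2-style inverted residual: expand -> depthwise -> project.
    With stride 2 it halves the temporal length while growing the channels."""

    def __init__(self, c_in, c_out, stride=1, expand=4):
        super().__init__()
        hidden = c_in * expand
        self.use_skip = stride == 1 and c_in == c_out
        self.block = nn.Sequential(
            nn.Conv1d(c_in, hidden, 1, bias=False),
            nn.BatchNorm1d(hidden), nn.ReLU6(inplace=True),
            nn.Conv1d(hidden, hidden, 3, stride, 1, groups=hidden, bias=False),
            nn.BatchNorm1d(hidden), nn.ReLU6(inplace=True),
            nn.Conv1d(hidden, c_out, 1, bias=False),
            nn.BatchNorm1d(c_out),
        )

    def forward(self, x):
        y = self.block(x)
        return x + y if self.use_skip else y


class WindowSelfAttention(nn.Module):
    """Stand-in for the paper's GWSA block: attention is computed inside
    fixed-size windows, which is cheaper than full multi-head attention.
    Assumes the sequence length is divisible by the window size."""

    def __init__(self, dim, window=8, heads=2):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                     # x: (batch, channels, length)
        b, c, t = x.shape
        w = self.window
        seq = x.transpose(1, 2)               # (b, t, c) token layout
        seq = seq.reshape(b * t // w, w, c)   # group tokens into windows
        out, _ = self.attn(seq, seq, seq)
        out = self.norm(out + seq)            # residual + norm, Transformer-style
        return out.reshape(b, t, c).transpose(1, 2)


class MCSANSketch(nn.Module):
    """Tiny stand-in for MCSAN on RML2016.10a IQ input (2 x 128, 11 classes):
    inverted residuals extract local features, windowed attention adds
    global context after them, as the abstract describes."""

    def __init__(self, num_classes=11):
        super().__init__()
        self.features = nn.Sequential(
            InvertedResidual(2, 32, stride=2),    # length 128 -> 64, channels up
            WindowSelfAttention(32),
            InvertedResidual(32, 64, stride=2),   # length 64 -> 32, channels up
            WindowSelfAttention(64),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):                     # x: (batch, 2, 128)
        f = self.features(x)
        return self.head(f.mean(dim=-1))      # global average pool -> logits


logits = MCSANSketch()(torch.randn(4, 2, 128))  # -> shape (4, 11)
```

Restricting attention to windows of w tokens cuts the per-layer attention cost from quadratic in the sequence length to linear in it (times w), which is the kind of saving the abstract attributes to GWSA relative to conventional multi-head attention.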