{"title":"Automatic Modulation Classification via Recurrent Self-Attention with Weight Non-Negative Constraint","authors":"Shilong Zhang, Yu Song, Shubin Wang","doi":"10.1049/cmu2.70025","DOIUrl":null,"url":null,"abstract":"<p>The rapid development of the Internet of Things (IoT) has led to an increasingly prominent issue of spectrum resource scarcity. To effectively address this shortage, automatic modulation classification (AMC) has emerged as one of the critical factors. Most existing deep learning-based AMC methods rely on supervised attention models. However, these approaches have not fully accounted for the inherent characteristics of modulation signals and feature sparsity. In response, this paper proposes a weight non-negative constraint recurrent self-attention (WNRSA) model. This model incorporates a recurrent attention module (RAM) within an autoencoder architecture, creating a recurrent self-attention extraction mechanism that enhances multi-dimensional feature representations. RAM comprises three types of attention modules: spatial, frequency, and temporal. The point attention model (PAM) extracts local spatial information to emphasize critical regions in the image. The frequency attention model (FAM) captures salient features at different scales in the frequency domain, reducing noise sensitivity to details and high-frequency information. The time attention model (TAM) captures temporal information, strengthening the ability to extract dynamic features. Additionally, we introduce weight non-negative constraint and KL-divergence regularization term to optimize the WNRSA model's loss function, achieving sparser feature representations and reducing sensitivity to noise. Experimental results demonstrate that the WNRSA model achieves superior performance across various signal-to-noise ratio (SNR) levels.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"19 1","pages":""},"PeriodicalIF":1.5000,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.70025","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IET Communications","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/cmu2.70025","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Abstract
The rapid development of the Internet of Things (IoT) has made the scarcity of spectrum resources an increasingly prominent issue, and automatic modulation classification (AMC) has emerged as one of the key techniques for addressing this shortage. Most existing deep learning-based AMC methods rely on supervised attention models; however, these approaches do not fully account for the inherent characteristics of modulation signals or for feature sparsity. In response, this paper proposes a weight non-negative constraint recurrent self-attention (WNRSA) model. The model incorporates a recurrent attention module (RAM) within an autoencoder architecture, creating a recurrent self-attention extraction mechanism that enhances multi-dimensional feature representations. RAM comprises three types of attention modules: spatial, frequency, and temporal. The point attention model (PAM) extracts local spatial information to emphasize critical regions of the image. The frequency attention model (FAM) captures salient features at different scales in the frequency domain, reducing sensitivity to noise in fine details and high-frequency information. The time attention model (TAM) captures temporal information, strengthening the extraction of dynamic features. Additionally, we introduce a weight non-negative constraint and a KL-divergence regularization term to optimize the WNRSA model's loss function, yielding sparser feature representations and reduced sensitivity to noise. Experimental results demonstrate that the WNRSA model achieves superior performance across a range of signal-to-noise ratio (SNR) levels.
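As a concrete illustration of the two regularization ideas named in the abstract above, the sketch below shows a generic autoencoder training step in PyTorch with (a) a KL-divergence sparsity penalty on the mean hidden activations and (b) a non-negativity constraint enforced by projecting weights back onto W >= 0 after each optimizer step. The module names, dimensions, and hyperparameters (`TinyAutoencoder`, `rho`, `beta`) are illustrative assumptions, not the paper's actual WNRSA architecture or loss.

```python
# Hedged sketch: sparsity-regularized autoencoder step with a weight
# non-negativity constraint. Names and values are assumptions, not the
# paper's WNRSA implementation.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, in_dim=128, hidden_dim=32):
        super().__init__()
        # Sigmoid keeps the hidden code in (0, 1), so the KL sparsity term is well defined
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.Sigmoid())
        self.decoder = nn.Linear(hidden_dim, in_dim)

    def forward(self, x):
        h = self.encoder(x)
        return self.decoder(h), h

def kl_sparsity(h, rho=0.05, eps=1e-8):
    """KL(rho || rho_hat) summed over hidden units, where rho_hat is the mean activation."""
    rho_hat = h.mean(dim=0).clamp(eps, 1 - eps)
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()

model = TinyAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()
beta = 1e-3                          # weight of the sparsity term (assumed value)

x = torch.rand(64, 128)              # stand-in for I/Q-derived signal features
recon, h = model(x)
loss = mse(recon, x) + beta * kl_sparsity(h)
opt.zero_grad()
loss.backward()
opt.step()

# Weight non-negative constraint: clamp weight matrices to be >= 0 after the update
with torch.no_grad():
    for p in model.parameters():
        if p.dim() > 1:              # constrain weight matrices, leave biases free
            p.clamp_(min=0.0)
```

The clamp-after-step projection is one common way to impose a non-negativity constraint; the paper may realize the constraint differently (for example, through reparameterization or a penalty term).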
Journal description:
IET Communications covers fundamental and generic research aimed at a better understanding of communication technologies, harnessing signals to build better-performing communication systems over wired and/or wireless media. The journal is particularly interested in research papers reporting novel solutions to the dominant problems of noise, interference, timing, and errors, with the goal of reducing system deficiencies such as the waste of scarce resources like spectrum, energy, and bandwidth.
Topics include, but are not limited to:
Coding and Communication Theory;
Modulation and Signal Design;
Wired, Wireless and Optical Communication;
Communication Systems.
Special Issues. Current Calls for Papers:
Cognitive and AI-enabled Wireless and Mobile - https://digital-library.theiet.org/files/IET_COM_CFP_CAWM.pdf
UAV-Enabled Mobile Edge Computing - https://digital-library.theiet.org/files/IET_COM_CFP_UAV.pdf