Jia Guo , Xinyu Jia , Jinqi Zhu , Xiang Li , Yang Liu , Weijia Feng , Wanli Xue
Neurocomputing, Volume 655, Article 131347 | DOI: 10.1016/j.neucom.2025.131347 | Published 2025-08-30 | Impact Factor 6.5 | JCR Q1, Computer Science, Artificial Intelligence
SMANet: Sequence-enhanced multi-head attention network for robust neural semantic learning in noisy computational environments
Traditional communication systems often fail to efficiently transmit meaningful information in noisy and dynamic environments, prompting the adoption of neural network architectures in semantic communication to prioritize semantic content over raw data. Existing neural models face persistent challenges in mitigating high noise interference, capturing long-range dependencies in sequences, and preserving semantic fidelity under varying conditions. This paper proposes SMANet, a sequence-enhanced multi-head attention network for robust neural semantic learning in noisy computational environments. SMANet integrates multi-head attention mechanisms with a Dilated Normalization Block (DNB), a specialized neural module for extracting local temporal features and global semantic representations, to enhance sequence processing capabilities, alleviate gradient vanishing/explosion issues during training, and improve network stability. At the transmitter, a neural semantic encoder employs dilated convolutions and normalization for robust feature extraction, paired with a channel encoder to achieve noise resilience; at the receiver, neural decoders precisely reconstruct semantics, facilitating applications in machine learning-driven cognitive systems. Experimental evaluations on AWGN and Rayleigh fading channels demonstrate SMANet's superior performance: it outperforms DeepSC by 23% in BLEU scores, achieves a sentence similarity of 0.91 at SNR = 18 dB, and maintains 85% semantic fidelity at SNR < 6 dB, highlighting its potential for neurocomputing in resource-constrained networks.
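The abstract names two concrete ingredients that can be sketched without the paper's full architecture: a dilated convolution followed by normalization (the core of the DNB) and an AWGN channel that perturbs the transmitted signal at a target SNR. The sketch below is a minimal single-channel NumPy illustration; the function names, the residual-plus-normalization arrangement, and all parameters are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """Causal dilated 1D convolution (single channel), zero-padded on the left."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    out = np.zeros_like(x, dtype=float)
    for t in range(len(x)):
        # filter taps spaced `dilation` steps apart, reaching back in time
        taps = xp[t + pad - np.arange(k) * dilation]
        out[t] = np.dot(w, taps)
    return out

def layer_norm(x, eps=1e-5):
    """Normalize to zero mean / unit variance, stabilizing gradients."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def dnb(x, w, dilation):
    """Hypothetical Dilated Normalization Block: dilated conv -> norm -> residual."""
    return x + layer_norm(dilated_conv1d(x, w, dilation))

def awgn(x, snr_db, rng):
    """Add white Gaussian noise so the output has the given SNR in dB."""
    p_signal = np.mean(x ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10))
    return x + rng.normal(0.0, np.sqrt(p_noise), size=x.shape)
```

Dilation widens the receptive field exponentially with depth at constant parameter count (capturing long-range dependencies), while the normalization and residual path address the gradient vanishing/explosion issues the abstract mentions; the `awgn` helper reproduces the SNR = 18 dB and SNR < 6 dB test conditions of the evaluation.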
Journal introduction:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing, covering neurocomputing theory, practice, and applications.