Underwater Acoustic Target Classification Using Auditory Fusion Features and Efficient Convolutional Attention Network

IF 2.2 · Q3 · Engineering, Electrical & Electronic
Junjie Yang;Zhenyu Zhang;Wei Li;Xiang Wang;Senquan Yang;Chao Yang
DOI: 10.1109/LSENS.2025.3541593
Journal: IEEE Sensors Letters, vol. 9, no. 3, pp. 1-4
Published: 2025-02-13 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10884716/
Citations: 0

Abstract

Underwater acoustic target classification (UATC) aims to identify the type of unknown acoustic sources using passive sonar in oceanic remote sensing scenarios. However, the variability of the underwater acoustic environment and the presence of complex background noise pose significant challenges to enhancing the accuracy of UATC. To address these challenges, we develop an innovative deep neural network algorithm that integrates a multiscale feature extractor with an efficient channel attention mechanism. The proposed algorithm leverages tailored representations and deep learning to enhance the adaptability and reliability of UATC performance. We employ auditory cofeatures, such as Mel-frequency cepstral coefficients and Gammatone frequency cepstral coefficients, combined with their first-order and second-order differentials, to capture the dynamic variations and contextual information of underwater acoustic signals in the time-frequency domain. In addition, we integrate multiscale convolution with an efficient channel attention mechanism to share and exploit the interrelationships of the auditory cofeatures. Experimental validations on the ShipsEar and DeepShip datasets, under various noise types, demonstrate the effectiveness of our algorithm in comparison with state-of-the-art methods.
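The two ingredients named in the abstract can be illustrated with a short sketch: stacking a cepstral feature matrix with its first- and second-order differentials into a multi-channel "cofeature" tensor, then reweighting those channels with an ECA-style attention gate (global average pooling followed by a lightweight 1-D convolution across channels and a sigmoid). This is a minimal NumPy illustration, not the paper's implementation; the fixed averaging kernel stands in for weights that would be learned, and the random matrix stands in for real MFCC/GFCC frames.

```python
import numpy as np

def delta_features(feat, n=2):
    """HTK-style regression deltas of a cepstral matrix.

    feat: (frames, coeffs) array, e.g. MFCC or GFCC frames.
    Returns an array of the same shape holding frame-to-frame dynamics.
    """
    denom = 2.0 * sum(i * i for i in range(1, n + 1))
    padded = np.pad(feat, ((n, n), (0, 0)), mode="edge")
    t = feat.shape[0]
    return sum(
        i * (padded[n + i : n + i + t] - padded[n - i : n - i + t])
        for i in range(1, n + 1)
    ) / denom

def eca_attention(feature_map, kernel_size=3):
    """ECA-style channel attention over a (C, H, W) feature stack."""
    desc = feature_map.mean(axis=(1, 2))              # global average pool -> (C,)
    pad = kernel_size // 2
    padded = np.pad(desc, pad, mode="edge")
    kernel = np.ones(kernel_size) / kernel_size       # toy fixed kernel; learned in practice
    conv = np.convolve(padded, kernel, mode="valid")  # local cross-channel interaction, (C,)
    attn = 1.0 / (1.0 + np.exp(-conv))                # sigmoid gate in (0, 1)
    return feature_map * attn[:, None, None]          # rescale each channel

# Fuse a cepstral matrix with its first- and second-order deltas into channels.
mfcc = np.random.randn(100, 20)                       # 100 frames x 20 coefficients (stand-in)
d1 = delta_features(mfcc)
d2 = delta_features(d1)
stack = np.stack([mfcc, d1, d2])                      # (3, 100, 20) cofeature tensor
out = eca_attention(stack)
```

Because the attention gate is a sigmoid, every channel is scaled by a factor in (0, 1); the convolution over channel descriptors is what makes the mechanism "efficient" relative to fully connected squeeze-and-excitation layers.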
Source journal: IEEE Sensors Letters (Engineering: Electrical and Electronic Engineering)
CiteScore: 3.50
Self-citation rate: 7.10%
Articles published: 194