Junjie Yang;Zhenyu Zhang;Wei Li;Xiang Wang;Senquan Yang;Chao Yang
Title: Underwater Acoustic Target Classification Using Auditory Fusion Features and Efficient Convolutional Attention Network
DOI: 10.1109/LSENS.2025.3541593
Journal: IEEE Sensors Letters, vol. 9, no. 3, pp. 1-4 (impact factor 2.2; JCR Q3, Engineering, Electrical & Electronic)
Publication date: 2025-02-13 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10884716/
Citations: 0
Abstract
Underwater acoustic target classification (UATC) aims to identify the type of an unknown acoustic source using passive sonar in oceanic remote sensing scenarios. However, the variability of the underwater acoustic environment and the presence of complex background noise pose significant challenges to improving the accuracy of UATC. To address these challenges, we develop a deep neural network algorithm that integrates a multiscale feature extractor with an efficient channel attention mechanism. The proposed algorithm leverages tailored representations and deep learning to enhance the adaptability and reliability of UATC. We employ auditory cofeatures, such as Mel-frequency cepstral coefficients and Gammatone frequency cepstral coefficients, combined with their first-order and second-order differentials, to capture the dynamic variations and contextual information of underwater acoustic signals in the time-frequency domain. In addition, we integrate multiscale convolution with an efficient channel attention mechanism to share and exploit the interrelationships among the auditory cofeatures. Experimental validation on the ShipsEar and DeepShip datasets, under various noise types, demonstrates the effectiveness of our algorithm in comparison with state-of-the-art methods.
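The two ingredients the abstract names — cepstral features augmented with first- and second-order differentials (deltas), and efficient channel attention — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the static MFCC matrix is random placeholder data, the delta uses the standard regression formula, and the learned 1-D convolution of efficient channel attention is stood in for by a uniform moving average purely to show the gating structure (global pooling over the time-frequency map, a local interaction across channels, then a sigmoid gate that rescales each channel).

```python
import numpy as np

def delta(feat, width=2):
    """First-order differential of a (n_frames, n_coeffs) feature matrix.

    Standard regression formula:
        d_t = sum_{k=1..width} k * (c_{t+k} - c_{t-k}) / (2 * sum_k k^2)
    Edge frames are handled by repeating the boundary rows.
    """
    n = feat.shape[0]
    denom = 2 * sum(k * k for k in range(1, width + 1))
    padded = np.pad(feat, ((width, width), (0, 0)), mode="edge")
    d = np.zeros_like(feat, dtype=float)
    for k in range(1, width + 1):
        d += k * (padded[width + k:width + k + n] - padded[width - k:width - k + n])
    return d / denom

def eca(feature_map, kernel_size=3):
    """Efficient-channel-attention-style gating on a (C, H, W) map.

    Global average pooling gives one descriptor per channel; a local
    cross-channel interaction (here a uniform average standing in for a
    learned 1-D conv) is passed through a sigmoid and used to rescale
    each channel of the input.
    """
    c = feature_map.shape[0]
    gap = feature_map.mean(axis=(1, 2))                      # (C,) channel descriptors
    padded = np.pad(gap, kernel_size // 2, mode="edge")
    mixed = np.array([padded[i:i + kernel_size].mean() for i in range(c)])
    gate = 1.0 / (1.0 + np.exp(-mixed))                      # sigmoid in (0, 1)
    return feature_map * gate[:, None, None]

# Fuse static coefficients with their first- and second-order deltas.
mfcc = np.random.randn(100, 20)          # placeholder static MFCCs: 100 frames, 20 coeffs
d1 = delta(mfcc)                         # first-order differential
d2 = delta(d1)                           # second-order differential
fused = np.concatenate([mfcc, d1, d2], axis=1)   # (100, 60) fused auditory features
```

In practice the fused matrix (and its GFCC counterpart) would be stacked as input channels to the convolutional network, with the attention gate applied to intermediate feature maps rather than raw features.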