{"title":"一种基于RT-DETR的水下模糊目标检测算法","authors":"Xiao Chen, Xiaoqi Ge, Qi Yang, Haiyan Wang","doi":"10.1002/cpe.70267","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>Underwater target detection is crucial for monitoring marine resources and assessing their ecological health. However, the underwater environment is complex and variable, and factors such as light attenuation, scattering, and turbidity often cause optical images to be blurred and target details to be unclear, seriously affecting detection accuracy. Although deep learning-based methods have shown promise in the field of target detection, challenges remain in balancing real-time performance with high-precision detection of blurred targets. In response to the above situation, a novel algorithm is presented for underwater blurred target detection, designed to address the challenge of low detection accuracy resulting from indistinct optical image details in complex underwater environments. The proposed algorithm leverages the Real-Time Detection Transformer (RT-DETR) architecture. First, a lightweight feature extraction module, termed Faster-Rep (FARP), is developed to effectively reduce the model's parameter count while simultaneously enhancing the backbone network's ability to extract salient features from blurred targets. Second, an efficient additive attention module, called AIFI-Efficient Additive Attention (AIFI-EAA), is utilized in the coding phase, which enhances the model's global modeling capability while significantly reducing computational redundancy. Atlast, the Dynamic Cross-Scale Feature Fusion (DyCCFM) module enables dynamic fusion of feature information, thereby preserving critical characteristics of blurred targets and preventing information loss. The proposed algorithm demonstrates excellent detection performance on the URPC2020 dataset, where the mean Average Precision (mAP) is improved by 1.5% and the number of parameters is reduced by 14.5%, which also significantly improves the ability to detect ambiguous targets in intricate underwater environments.</p>\n </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 23-24","pages":""},"PeriodicalIF":1.5000,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Novel Underwater Blurred Target Detection Algorithm Based on RT-DETR\",\"authors\":\"Xiao Chen, Xiaoqi Ge, Qi Yang, Haiyan Wang\",\"doi\":\"10.1002/cpe.70267\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n <p>Underwater target detection is crucial for monitoring marine resources and assessing their ecological health. However, the underwater environment is complex and variable, and factors such as light attenuation, scattering, and turbidity often cause optical images to be blurred and target details to be unclear, seriously affecting detection accuracy. Although deep learning-based methods have shown promise in the field of target detection, challenges remain in balancing real-time performance with high-precision detection of blurred targets. In response to the above situation, a novel algorithm is presented for underwater blurred target detection, designed to address the challenge of low detection accuracy resulting from indistinct optical image details in complex underwater environments. The proposed algorithm leverages the Real-Time Detection Transformer (RT-DETR) architecture. 
First, a lightweight feature extraction module, termed Faster-Rep (FARP), is developed to effectively reduce the model's parameter count while simultaneously enhancing the backbone network's ability to extract salient features from blurred targets. Second, an efficient additive attention module, called AIFI-Efficient Additive Attention (AIFI-EAA), is utilized in the coding phase, which enhances the model's global modeling capability while significantly reducing computational redundancy. Atlast, the Dynamic Cross-Scale Feature Fusion (DyCCFM) module enables dynamic fusion of feature information, thereby preserving critical characteristics of blurred targets and preventing information loss. The proposed algorithm demonstrates excellent detection performance on the URPC2020 dataset, where the mean Average Precision (mAP) is improved by 1.5% and the number of parameters is reduced by 14.5%, which also significantly improves the ability to detect ambiguous targets in intricate underwater environments.</p>\\n </div>\",\"PeriodicalId\":55214,\"journal\":{\"name\":\"Concurrency and Computation-Practice & Experience\",\"volume\":\"37 23-24\",\"pages\":\"\"},\"PeriodicalIF\":1.5000,\"publicationDate\":\"2025-09-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Concurrency and Computation-Practice & Experience\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/cpe.70267\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, SOFTWARE ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Concurrency and Computation-Practice & Experience","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/cpe.70267","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
A Novel Underwater Blurred Target Detection Algorithm Based on RT-DETR
Underwater target detection is crucial for monitoring marine resources and assessing their ecological health. However, the underwater environment is complex and variable: light attenuation, scattering, and turbidity often blur optical images and obscure target details, seriously degrading detection accuracy. Although deep learning-based methods have shown promise in target detection, balancing real-time performance with high-precision detection of blurred targets remains challenging. To address this, a novel algorithm for underwater blurred target detection is presented, designed to overcome the low detection accuracy caused by indistinct optical image details in complex underwater environments. The proposed algorithm builds on the Real-Time Detection Transformer (RT-DETR) architecture. First, a lightweight feature extraction module, termed Faster-Rep (FARP), is developed to reduce the model's parameter count while enhancing the backbone network's ability to extract salient features from blurred targets. Second, an efficient additive attention module, AIFI-Efficient Additive Attention (AIFI-EAA), is employed in the encoding stage, improving the model's global modeling capability while significantly reducing computational redundancy. Finally, the Dynamic Cross-Scale Feature Fusion (DyCCFM) module dynamically fuses feature information, preserving the critical characteristics of blurred targets and preventing information loss. On the URPC2020 dataset, the proposed algorithm improves mean Average Precision (mAP) by 1.5% and reduces the parameter count by 14.5%, significantly improving the detection of blurred targets in complex underwater environments.
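The abstract gives no implementation details for these modules. For readers unfamiliar with efficient additive attention, the minimal PyTorch sketch below illustrates the general SwiftFormer-style formulation that the AIFI-EAA description appears to draw on: token scores are computed against a single learned vector and pooled into one global query, giving linear rather than quadratic cost in the number of tokens. The class name, dimensions, and layer layout here are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EfficientAdditiveAttention(nn.Module):
    """Minimal sketch of an efficient additive attention operator
    (SwiftFormer-style), the kind of block the AIFI-EAA module is
    described as using. Hyperparameters and layer names are
    illustrative assumptions, not the paper's exact design."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.w_a = nn.Parameter(torch.randn(dim, 1))  # learnable attention vector
        self.scale = dim ** -0.5
        self.proj = nn.Linear(dim, dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim) -- a flattened feature map
        q = F.normalize(self.to_q(x), dim=-1)
        k = F.normalize(self.to_k(x), dim=-1)
        # Score each token against one learned vector: O(N*d), not O(N^2*d).
        alpha = torch.softmax(q @ self.w_a * self.scale, dim=1)  # (B, N, 1)
        global_q = (alpha * q).sum(dim=1, keepdim=True)          # (B, 1, dim)
        # Broadcast the pooled global query onto every key token.
        interact = self.proj(global_q * k) + q
        return self.out(interact)


if __name__ == "__main__":
    x = torch.randn(2, 400, 256)  # e.g., a 20x20 feature map flattened into tokens
    attn = EfficientAdditiveAttention(dim=256)
    print(attn(x).shape)          # torch.Size([2, 400, 256])
```

Because every token interacts only with a single pooled global query rather than with all other tokens, the cost grows linearly with the token count, which is why this style of attention is attractive for real-time detectors such as RT-DETR.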
Journal Introduction:
Concurrency and Computation: Practice and Experience (CCPE) publishes high-quality, original research papers, and authoritative research review papers, in the overlapping fields of:
Parallel and distributed computing;
High-performance computing;
Computational and data science;
Artificial intelligence and machine learning;
Big data applications, algorithms, and systems;
Network science;
Ontologies and semantics;
Security and privacy;
Cloud/edge/fog computing;
Green computing; and
Quantum computing.