OD-DDA: Real-Time Object Detector with Dual Dynamic Adaptation in Variable Scenes

IF 7.2 | CAS Tier 1, Computer Science | JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Mengmei Sang, Shengwei Tian, Long Yu, Xin Fan, Zhezhe Zhu
{"title":"OD-DDA:基于双动态适应的可变场景实时目标检测器","authors":"Mengmei Sang ,&nbsp;Shengwei Tian ,&nbsp;Long Yu ,&nbsp;Xin Fan ,&nbsp;Zhezhe Zhu","doi":"10.1016/j.knosys.2025.113611","DOIUrl":null,"url":null,"abstract":"<div><div>Object detection becomes challenging in variable scenarios, such as when object features change and cluttered backgrounds. We propose an object detector with dual dynamic adaptation (OD-DDA) to address these issues and enhance network performance in complex environments. First, we introduce a dynamic feature adaptation (DFA) module at each stage of the network, utilizing large kernel depthwise separable convolutions to capture multiscale contextual information, thereby enhancing the feature extraction capability of the model and effectively addressing object variations across different scenarios. Next, we design a dynamic fine-grained weight adaptation (DFGWA) module, which could selectively learn the fine-grained features of an image and calculate the corresponding weights before feature aggregation, thereby reducing interference among features and enhancing the model’s responsiveness to targets. Through the synergy of these modules, OD-DDA can flexibly handle the challenges faced during the detection of objects in complex scenarios and significantly improve the inference speed. We conduct rigorous experimental comparisons on five datasets, and the results show that OD-DDA exhibits excellent performance in different scenarios. Especially on the UAVDT dataset, <span><math><mrow><mi>A</mi><msub><mrow><mi>P</mi></mrow><mrow><mn>50</mn></mrow></msub></mrow></math></span> reaches 37.9% and FPS reaches 87.5, proving its ability to balance speed and accuracy.</div></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":"320 ","pages":"Article 113611"},"PeriodicalIF":7.2000,"publicationDate":"2025-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"OD-DDA: Real-Time Object Detector with Dual Dynamic Adaptation in Variable Scenes\",\"authors\":\"Mengmei Sang ,&nbsp;Shengwei Tian ,&nbsp;Long Yu ,&nbsp;Xin Fan ,&nbsp;Zhezhe Zhu\",\"doi\":\"10.1016/j.knosys.2025.113611\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Object detection becomes challenging in variable scenarios, such as when object features change and cluttered backgrounds. We propose an object detector with dual dynamic adaptation (OD-DDA) to address these issues and enhance network performance in complex environments. First, we introduce a dynamic feature adaptation (DFA) module at each stage of the network, utilizing large kernel depthwise separable convolutions to capture multiscale contextual information, thereby enhancing the feature extraction capability of the model and effectively addressing object variations across different scenarios. Next, we design a dynamic fine-grained weight adaptation (DFGWA) module, which could selectively learn the fine-grained features of an image and calculate the corresponding weights before feature aggregation, thereby reducing interference among features and enhancing the model’s responsiveness to targets. Through the synergy of these modules, OD-DDA can flexibly handle the challenges faced during the detection of objects in complex scenarios and significantly improve the inference speed. We conduct rigorous experimental comparisons on five datasets, and the results show that OD-DDA exhibits excellent performance in different scenarios. 
Especially on the UAVDT dataset, <span><math><mrow><mi>A</mi><msub><mrow><mi>P</mi></mrow><mrow><mn>50</mn></mrow></msub></mrow></math></span> reaches 37.9% and FPS reaches 87.5, proving its ability to balance speed and accuracy.</div></div>\",\"PeriodicalId\":49939,\"journal\":{\"name\":\"Knowledge-Based Systems\",\"volume\":\"320 \",\"pages\":\"Article 113611\"},\"PeriodicalIF\":7.2000,\"publicationDate\":\"2025-05-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Knowledge-Based Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0950705125006574\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Knowledge-Based Systems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0950705125006574","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Object detection becomes challenging in variable scenarios, for example when object features change or backgrounds are cluttered. We propose an object detector with dual dynamic adaptation (OD-DDA) to address these issues and enhance network performance in complex environments. First, we introduce a dynamic feature adaptation (DFA) module at each stage of the network, utilizing large-kernel depthwise separable convolutions to capture multiscale contextual information, thereby enhancing the feature extraction capability of the model and effectively addressing object variations across different scenarios. Next, we design a dynamic fine-grained weight adaptation (DFGWA) module, which selectively learns the fine-grained features of an image and calculates the corresponding weights before feature aggregation, thereby reducing interference among features and enhancing the model's responsiveness to targets. Through the synergy of these modules, OD-DDA can flexibly handle the challenges of detecting objects in complex scenarios and significantly improves inference speed. We conduct rigorous experimental comparisons on five datasets, and the results show that OD-DDA performs well across different scenarios. In particular, on the UAVDT dataset, AP50 reaches 37.9% and the detector runs at 87.5 FPS, demonstrating its ability to balance speed and accuracy.
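The abstract describes the two modules only at a high level, so the sketch below is purely illustrative rather than the paper's actual architecture. It shows the general pattern the abstract names: a large-kernel depthwise separable convolution block for multiscale context (standing in for the DFA idea) and a learned per-channel weighting of two feature maps before aggregation (standing in for the DFGWA idea). The class names, the 7x7 kernel size, and the sigmoid gating are all assumptions, not details from the paper.

```python
# Illustrative sketch only: layer names, kernel size, and the gating scheme are
# assumptions; the paper does not publish its exact layer definitions here.
import torch
import torch.nn as nn


class LargeKernelDWBlock(nn.Module):
    """Depthwise separable convolution with a large kernel (assumed 7x7)."""

    def __init__(self, channels: int, kernel_size: int = 7):
        super().__init__()
        # Depthwise conv: one filter per channel, enlarges the receptive field cheaply.
        self.depthwise = nn.Conv2d(channels, channels, kernel_size,
                                   padding=kernel_size // 2, groups=channels)
        # Pointwise conv mixes information across channels.
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the original features alongside the context.
        return self.act(self.pointwise(self.depthwise(x))) + x


class WeightedAggregation(nn.Module):
    """Learns per-channel weights for two feature maps before summing them."""

    def __init__(self, channels: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Gating weights come from the pooled concatenation of both inputs.
        w = self.gate(self.pool(torch.cat([a, b], dim=1)))
        wa, wb = torch.chunk(w, 2, dim=1)
        return wa * a + wb * b


if __name__ == "__main__":
    x = torch.randn(1, 64, 80, 80)
    block = LargeKernelDWBlock(64)
    agg = WeightedAggregation(64)
    y = agg(block(x), x)
    print(y.shape)  # torch.Size([1, 64, 80, 80])
```

Depthwise separable large-kernel convolutions are a common way to widen the receptive field at low FLOP cost, which is consistent with the abstract's claim of balancing accuracy and real-time speed; the weighting step mirrors the stated idea of computing per-feature weights before aggregation to suppress interference.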
Source journal
Knowledge-Based Systems (Engineering & Technology: Computer Science, Artificial Intelligence)
CiteScore: 14.80
Self-citation rate: 12.50%
Annual articles: 1245
Review time: 7.8 months
Journal introduction: Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on knowledge-based and other artificial intelligence techniques-based systems. The journal aims to support human prediction and decision-making through data science and computation techniques, provide a balanced coverage of theory and practical study, and encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.