Dengyong Zhang, Chuanzhen Xu, Jiaxin Chen, Lei Wang, Bin Deng
Title: YOLO-DC: Integrating deformable convolution and contextual fusion for high-performance object detection
DOI: 10.1016/j.image.2025.117373
Journal: Signal Processing: Image Communication, Volume 138, Article 117373 (IF 2.7, JCR Q2, Engineering, Electrical & Electronic)
Publication date: 2025-06-21 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S0923596525001195
Citations: 0
YOLO-DC: Integrating deformable convolution and contextual fusion for high-performance object detection
Object detection is a fundamental task in computer vision, but existing methods often concentrate on optimizing model architectures, loss functions, and data preprocessing techniques, while frequently neglecting the potential improvements that advanced convolutional mechanisms can provide. Additionally, increasing the depth of deep learning networks can lead to the loss of essential feature information, highlighting the need for strategies that can further improve model accuracy. This paper introduces YOLO-DC, an algorithm that enhances object detection by incorporating deformable convolution and contextual mechanisms. YOLO-DC integrates a Deformable Convolutional Module (DCM) and a Contextual Information Fusion Downsampling Module (CFD). The DCM employs deformable convolution with multi-scale spatial channel attention to effectively expand the receptive field and enhance feature extraction. In parallel, the CFD module leverages both contextual and local features during downsampling and incorporates global features to enhance joint learning and reduce information loss. Compared to YOLOv8-N, YOLO-DC-N achieves a significant improvement in Average Precision (AP), increasing by 3.5% to reach 40.8% on the Microsoft COCO 2017 dataset, while maintaining a comparable inference time. The model outperforms other state-of-the-art detection algorithms across various datasets, including the RUOD underwater dataset and the PASCAL VOC dataset (VOC2007 + VOC2012). The source code is available at https://github.com/Object-Detection-01/YOLO-DC.git.
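At the heart of the DCM is deformable convolution, which samples the input at learned per-pixel offsets rather than on a fixed grid, letting the effective receptive field adapt to object shape. As a rough illustration of that sampling step only (a single-channel, 3x3, stride-1 NumPy sketch under simplifying assumptions — `deform_conv2d_single` and its bilinear helper are not the paper's implementation, which operates on multi-channel feature maps with learned offsets and attention):

```python
import numpy as np

def bilinear(img, y, x):
    """Bilinearly sample img at fractional coordinates (y, x), zero-padded."""
    H, W = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = y0 + 1, x0 + 1

    def at(r, c):
        # Zero padding outside the image.
        return img[r, c] if 0 <= r < H and 0 <= c < W else 0.0

    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * at(y0, x0) + (1 - wy) * wx * at(y0, x1)
            + wy * (1 - wx) * at(y1, x0) + wy * wx * at(y1, x1))

def deform_conv2d_single(img, weight, offsets):
    """Single-channel deformable 3x3 convolution, stride 1, zero padding.

    img:     (H, W) input feature map
    weight:  (3, 3) kernel
    offsets: (H, W, 9, 2) learned (dy, dx) shift for each output pixel
             and each of the 9 kernel taps (zeros give a plain convolution)
    """
    H, W = img.shape
    out = np.zeros((H, W))
    taps = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    for i in range(H):
        for j in range(W):
            acc = 0.0
            for k, (dy, dx) in enumerate(taps):
                oy, ox = offsets[i, j, k]
                # Sample at the regular grid position plus the learned offset.
                acc += weight[dy + 1, dx + 1] * bilinear(img, i + dy + oy, j + dx + ox)
            out[i, j] = acc
    return out
```

With all offsets zero this reduces to an ordinary convolution; nonzero offsets bend the sampling grid, which is what lets the DCM expand the receptive field without enlarging the kernel.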
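The idea behind the CFD module — combining fine local detail, neighbourhood context, and a global summary while halving resolution, so that downsampling discards less information — can be caricatured in a few lines. This toy version is an assumption-laden sketch, not the module from the paper: the max/average/global-mean branches and the additive fusion are stand-ins chosen purely for illustration.

```python
import numpy as np

def cfd_downsample(x):
    """Toy contextual-fusion 2x downsample of a 2D map with even sides.

    Local branch:      2x2 strided max (sharp local responses)
    Contextual branch: 2x2 average (neighbourhood context)
    Global branch:     whole-map mean, broadcast to every output cell
    The three branches are fused by summation.
    """
    H, W = x.shape
    # View the map as (H/2, 2, W/2, 2) blocks and pool within each block.
    blocks = x.reshape(H // 2, 2, W // 2, 2)
    local = blocks.max(axis=(1, 3))
    context = blocks.mean(axis=(1, 3))
    global_feat = x.mean()
    return local + context + global_feat
```

A plain strided convolution keeps only the local branch; the point of the sketch is that the pooled-context and global terms carry information a stride-2 operator alone would drop.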
Journal introduction:
Signal Processing: Image Communication is an international journal for the development of the theory and practice of image communication. Its primary objectives are the following:
To present a forum for the advancement of theory and practice of image communication.
To stimulate cross-fertilization between areas similar in nature which have traditionally been separated, for example, various aspects of visual communications and information systems.
To contribute to a rapid information exchange between the industrial and academic environments.
The editorial policy and the technical content of the journal are the responsibility of the Editor-in-Chief, the Area Editors and the Advisory Editors. The Journal is self-supporting from subscription income and contains a minimum amount of advertisements. Advertisements are subject to the prior approval of the Editor-in-Chief. The journal welcomes contributions from every country in the world.
Signal Processing: Image Communication publishes articles relating to aspects of the design, implementation and use of image communication systems. The journal features original research work, tutorial and review articles, and accounts of practical developments.
Subjects of interest include image/video coding, 3D video representations and compression, 3D graphics and animation compression, HDTV and 3DTV systems, video adaptation, video over IP, peer-to-peer video networking, interactive visual communication, multi-user video conferencing, wireless video broadcasting and communication, visual surveillance, 2D and 3D image/video quality measures, pre/post processing, video restoration and super-resolution, multi-camera video analysis, motion analysis, content-based image/video indexing and retrieval, face and gesture processing, video synthesis, 2D and 3D image/video acquisition and display technologies, architectures for image/video processing and communication.