DFDNet: Deep Feature Decoupling for Oriented Object Detection
Yuhan Sun; Shengyang Li
DOI: 10.1109/LGRS.2025.3560388
IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5, published 2025-04-14
https://ieeexplore.ieee.org/document/10964376/
Abstract
Objects in remote sensing images exhibit diverse orientations. Current oriented object detection (OOD) methods estimate object angles by designing different loss functions and bounding box representations. However, these approaches do not account for how the coupling between rotation-equivariant and rotation-invariant features affects the regression of oriented bounding box (OBB) parameters. This problem manifests in two aspects: 1) coupling of parameters with different attributes: current OOD methods overlook the inherent differences among the features representing an object's location, scale, and angle, making it difficult to accurately predict OBB parameters with different attributes; and 2) coupling of object and background features: conventional OOD methods apply convolution kernels uniformly across object and background regions, leading to feature entanglement and degraded detection performance. To address these issues, we propose a deep feature decoupling network (DFDNet) that decouples the extracted features. Specifically, we propose parameter regression decoupling (PRD), which separates feature maps according to their attributes and assigns them to distinct branches for OBB parameter regression, ensuring the decoupling of features related to an object's location, shape, angle, and category. Additionally, to strengthen the ability of OOD networks to distinguish object features from background features, we design a mask reinforcement module (MRM) that is integrated into the PRD branches. The MRM dynamically adjusts the weights of object features, suppressing background interference and sharpening the distinction between object and background features. Extensive experiments on the DOTA, HRSC2016, and UCAS-AOD datasets validate the effectiveness of DFDNet, demonstrating that it achieves state-of-the-art performance.
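As a high-level illustration only (not the authors' implementation, whose layer details are not given in the abstract), the two ideas can be sketched as separate per-attribute regression heads (PRD-style) each preceded by a learned spatial mask that reweights features to suppress background (MRM-style). All shapes, layer choices, and names below are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DecoupledBranch:
    """One PRD-style branch: a spatial mask reweights the feature map
    (MRM-style background suppression), then a linear head regresses
    one attribute group (location, scale, angle, or category)."""
    def __init__(self, in_ch, out_dim):
        # 1x1-conv-like projection producing per-pixel mask logits
        self.mask_w = rng.standard_normal(in_ch) * 0.1
        # linear regression head for this attribute group
        self.head_w = rng.standard_normal((in_ch, out_dim)) * 0.1

    def __call__(self, feat):                 # feat: (C, H, W)
        # per-pixel mask in (0, 1): high where the object is, low on background
        mask = sigmoid(np.tensordot(self.mask_w, feat, axes=1))  # (H, W)
        reweighted = feat * mask              # suppress background responses
        pooled = reweighted.mean(axis=(1, 2))  # (C,)
        return pooled @ self.head_w           # (out_dim,)

C, H, W = 16, 8, 8
feat = rng.standard_normal((C, H, W))  # stand-in for an extracted feature map

# Distinct branches so location/scale/angle/category never share one head,
# i.e. their features and gradients stay decoupled.
branches = {
    "location": DecoupledBranch(C, 2),   # (x, y)
    "scale":    DecoupledBranch(C, 2),   # (w, h)
    "angle":    DecoupledBranch(C, 1),   # theta
    "category": DecoupledBranch(C, 15),  # e.g. DOTA defines 15 categories
}
preds = {name: branch(feat) for name, branch in branches.items()}
```

The key design point the abstract describes is visible here: because each attribute group owns its own mask and head, angle regression no longer shares weights (or entangled features) with location and scale regression, and each mask can independently downweight background pixels for its branch.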