Feature Enhancement Network for Object Detection in Optical Remote Sensing Images

遥感学报 · Pub Date: 2021-07-08 · DOI: 10.34133/2021/9805389
Gong Cheng, Chunbo Lang, Maoxiong Wu, Xingxing Xie, Xiwen Yao, Junwei Han
{"title":"Feature Enhancement Network for Object Detection in Optical Remote Sensing Images","authors":"Gong Cheng, Chunbo Lang, Maoxiong Wu, Xingxing Xie, Xiwen Yao, Junwei Han","doi":"10.34133/2021/9805389","DOIUrl":null,"url":null,"abstract":"Automatic and robust object detection in remote sensing images is of vital significance in real-world applications such as land resource management and disaster rescue. However, poor performance arises when the state-of-the-art natural image detection algorithms are directly applied to remote sensing images, which largely results from the variations in object scale, aspect ratio, indistinguishable object appearances, and complex background scenario. In this paper, we propose a novel Feature Enhancement Network (FENet) for object detection in optical remote sensing images, which consists of a Dual Attention Feature Enhancement (DAFE) module and a Context Feature Enhancement (CFE) module. Specifically, the DAFE module is introduced to highlight the network to focus on the distinctive features of the objects of interest and suppress useless ones by jointly recalibrating the spatial and channel feature responses. The CFE module is designed to capture global context cues and selectively strengthen class-aware features by leveraging image-level contextual information that indicates the presence or absence of the object classes. To this end, we employ a context encoding loss to regularize the model training which promotes the object detector to understand the scene better and narrows the probable object categories in prediction. We achieve our proposed FENet by unifying DAFE and CFE into the framework of Faster R-CNN. In the experiments, we evaluate our proposed method on two large-scale remote sensing image object detection datasets including DIOR and DOTA and demonstrate its effectiveness compared with the baseline methods.","PeriodicalId":38304,"journal":{"name":"遥感学报","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2021-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"40","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"遥感学报","FirstCategoryId":"1089","ListUrlMain":"https://doi.org/10.34133/2021/9805389","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 40

Abstract

Automatic and robust object detection in remote sensing images is of vital significance in real-world applications such as land resource management and disaster rescue. However, performance degrades when state-of-the-art natural-image detection algorithms are applied directly to remote sensing images, largely because of variations in object scale and aspect ratio, indistinguishable object appearances, and complex background scenes. In this paper, we propose a novel Feature Enhancement Network (FENet) for object detection in optical remote sensing images, which consists of a Dual Attention Feature Enhancement (DAFE) module and a Context Feature Enhancement (CFE) module. Specifically, the DAFE module guides the network to focus on the distinctive features of the objects of interest and to suppress useless ones by jointly recalibrating the spatial and channel feature responses. The CFE module is designed to capture global context cues and selectively strengthen class-aware features by leveraging image-level contextual information that indicates the presence or absence of the object classes. To this end, we employ a context encoding loss to regularize model training, which encourages the object detector to better understand the scene and narrows down the probable object categories during prediction. We implement the proposed FENet by unifying DAFE and CFE within the Faster R-CNN framework. In experiments, we evaluate the proposed method on two large-scale remote sensing object detection datasets, DIOR and DOTA, and demonstrate its effectiveness compared with the baseline methods.
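The abstract gives no implementation details, so the following is a minimal, illustrative PyTorch sketch of the two ideas it describes: a dual-attention block that jointly recalibrates channel and spatial feature responses (in the spirit of DAFE), and an image-level class-presence head trained with a multi-label context encoding loss (in the spirit of CFE). All module names, layer choices, and hyperparameters here are assumptions made for illustration, not the authors' released code.

```python
# Hypothetical sketch; module names and design details are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualAttention(nn.Module):
    """Recalibrates a feature map along channel and spatial dimensions,
    in the spirit of the DAFE module (joint channel + spatial attention)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: squeeze with global average pooling, then excite.
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: a 1x1 conv produces a per-pixel gate.
        self.spatial_conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Channel gate in [0, 1], shape (B, C, 1, 1).
        chn = torch.sigmoid(self.channel_mlp(x.mean(dim=(2, 3)))).view(b, c, 1, 1)
        # Spatial gate in [0, 1], shape (B, 1, H, W).
        spa = torch.sigmoid(self.spatial_conv(x))
        # Jointly recalibrate the responses and keep a residual path.
        return x * chn * spa + x


class ContextEncodingHead(nn.Module):
    """Predicts image-level class presence from globally pooled features,
    mirroring the idea behind the CFE context encoding loss."""

    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        self.classifier = nn.Linear(channels, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Global average pooling -> multi-label presence logits, shape (B, num_classes).
        return self.classifier(x.mean(dim=(2, 3)))


def context_encoding_loss(logits: torch.Tensor, present: torch.Tensor) -> torch.Tensor:
    """Multi-label BCE between predicted presence logits and a 0/1 vector that
    marks which object classes appear anywhere in the image."""
    return F.binary_cross_entropy_with_logits(logits, present)


if __name__ == "__main__":
    feat = torch.randn(2, 256, 64, 64)            # e.g. one backbone/FPN feature level
    feat = DualAttention(256)(feat)               # DAFE-style feature enhancement
    logits = ContextEncodingHead(256, 20)(feat)   # 20 classes, as in DIOR
    present = torch.randint(0, 2, (2, 20)).float()
    loss = context_encoding_loss(logits, present)
    print(feat.shape, loss.item())
```

In the full detector, such a block would presumably be applied to the backbone or FPN features inside Faster R-CNN, and the context encoding loss would be added to the standard detection losses as an auxiliary regularization term during training.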
Source journal: 遥感学报 (Social Sciences - Geography, Planning and Development)
CiteScore: 3.60
Self-citation rate: 0.00%
Articles published: 3200