TAM-TR: Text-guided attention multi-modal transformer for object detection in UAV images

IF 10.6 | CAS Zone 1 (Earth Science) | JCR Q1 (Geography, Physical)
Jianhao Xu , Xiangtao Fan , Hongdeng Jian , Chen Xu , Weijia Bei , Qifeng Ge , Teng Zhao , Ruijie Han
{"title":"TAM-TR:用于无人机图像目标检测的文本引导注意力多模态转换器","authors":"Jianhao Xu ,&nbsp;Xiangtao Fan ,&nbsp;Hongdeng Jian ,&nbsp;Chen Xu ,&nbsp;Weijia Bei ,&nbsp;Qifeng Ge ,&nbsp;Teng Zhao ,&nbsp;Ruijie Han","doi":"10.1016/j.isprsjprs.2025.04.027","DOIUrl":null,"url":null,"abstract":"<div><div>Object detection in unmanned aerial vehicles (UAV) imagery is crucial in many fields, such as maritime search and rescue, remote sensing mapping, urban management and agricultural monitoring. The diverse perspectives and altitudes of UAV images often result in significant variations in the appearance and dimensions of objects, and occlusions are found more frequently than in general scenes. The unique bird’s-eye view of drones makes it more difficult for existing object detection models to distinguish between similar objects. A text-guided attention multi-modal transformer network named TAM-TR is proposed to address the above challenges. A Bidirectional Text–Image Attention Path Aggregation Network (BTA-PAN) is proposed in TAM-TR. This network imitates the architecture of the classic algorithm Scale-Invariant Feature Transform (SIFT) and shows better scale adaptability. A novel Multi-modal encoder–decoder head (MEH) was proposed, which can simultaneously consider all input sequence positions to avoid the disappearance of features of occluded objects. An additional text-guided attention branch, combined with a large text model, was proposed to improve the TAM-TR’s classification accuracy. Additionally, a Rotation-invariant IOU (RIOU) loss function was proposed to eliminate the previous loss function’s rotational instability. The experiment demonstrated that the TAM-TR outperformed the baseline by 9.5% and achieves 39.7% mean Averaged Precision (mAP) on the Visdrone dataset. The code will be available at <span><span>https://github.com/Xjh-UCAS/TAM-TR</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"227 ","pages":"Pages 170-184"},"PeriodicalIF":10.6000,"publicationDate":"2025-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"TAM-TR: Text-guided attention multi-modal transformer for object detection in UAV images\",\"authors\":\"Jianhao Xu ,&nbsp;Xiangtao Fan ,&nbsp;Hongdeng Jian ,&nbsp;Chen Xu ,&nbsp;Weijia Bei ,&nbsp;Qifeng Ge ,&nbsp;Teng Zhao ,&nbsp;Ruijie Han\",\"doi\":\"10.1016/j.isprsjprs.2025.04.027\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Object detection in unmanned aerial vehicles (UAV) imagery is crucial in many fields, such as maritime search and rescue, remote sensing mapping, urban management and agricultural monitoring. The diverse perspectives and altitudes of UAV images often result in significant variations in the appearance and dimensions of objects, and occlusions are found more frequently than in general scenes. The unique bird’s-eye view of drones makes it more difficult for existing object detection models to distinguish between similar objects. A text-guided attention multi-modal transformer network named TAM-TR is proposed to address the above challenges. A Bidirectional Text–Image Attention Path Aggregation Network (BTA-PAN) is proposed in TAM-TR. This network imitates the architecture of the classic algorithm Scale-Invariant Feature Transform (SIFT) and shows better scale adaptability. 
A novel Multi-modal encoder–decoder head (MEH) was proposed, which can simultaneously consider all input sequence positions to avoid the disappearance of features of occluded objects. An additional text-guided attention branch, combined with a large text model, was proposed to improve the TAM-TR’s classification accuracy. Additionally, a Rotation-invariant IOU (RIOU) loss function was proposed to eliminate the previous loss function’s rotational instability. The experiment demonstrated that the TAM-TR outperformed the baseline by 9.5% and achieves 39.7% mean Averaged Precision (mAP) on the Visdrone dataset. The code will be available at <span><span>https://github.com/Xjh-UCAS/TAM-TR</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":50269,\"journal\":{\"name\":\"ISPRS Journal of Photogrammetry and Remote Sensing\",\"volume\":\"227 \",\"pages\":\"Pages 170-184\"},\"PeriodicalIF\":10.6000,\"publicationDate\":\"2025-06-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ISPRS Journal of Photogrammetry and Remote Sensing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0924271625001637\",\"RegionNum\":1,\"RegionCategory\":\"地球科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"GEOGRAPHY, PHYSICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ISPRS Journal of Photogrammetry and Remote Sensing","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0924271625001637","RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"GEOGRAPHY, PHYSICAL","Score":null,"Total":0}
Citations: 0

Abstract

Object detection in unmanned aerial vehicle (UAV) imagery is crucial in many fields, such as maritime search and rescue, remote sensing mapping, urban management, and agricultural monitoring. The diverse perspectives and altitudes of UAV images often cause significant variation in the appearance and size of objects, and occlusions occur more frequently than in general scenes. The drone's unique bird's-eye view also makes it harder for existing object detection models to distinguish between similar objects. To address these challenges, a text-guided attention multi-modal transformer network named TAM-TR is proposed. Within TAM-TR, a Bidirectional Text–Image Attention Path Aggregation Network (BTA-PAN) imitates the architecture of the classic Scale-Invariant Feature Transform (SIFT) algorithm and shows better scale adaptability. A novel multi-modal encoder–decoder head (MEH) attends to all input sequence positions simultaneously, preventing the features of occluded objects from vanishing. An additional text-guided attention branch, combined with a large text model, improves TAM-TR's classification accuracy. Finally, a rotation-invariant IoU (RIOU) loss function eliminates the rotational instability of earlier loss functions. Experiments show that TAM-TR outperforms the baseline by 9.5% and achieves 39.7% mean Average Precision (mAP) on the VisDrone dataset. The code will be available at https://github.com/Xjh-UCAS/TAM-TR.
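
The abstract does not describe how the text-guided attention branch is wired internally. The sketch below is therefore only a minimal illustration of a common pattern for such a branch: flattened visual tokens cross-attend to class-name text embeddings produced by a text encoder. Every module name, dimension, and the use of PyTorch's `nn.MultiheadAttention` here is an assumption for exposition, not the authors' implementation.

```python
# Minimal sketch of a text-guided cross-attention branch (illustrative only).
import torch
import torch.nn as nn


class TextGuidedAttention(nn.Module):
    """Fuse class-name text embeddings into visual tokens via cross-attention."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Visual tokens query the text: queries = image, keys/values = text.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, visual_tokens: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
        # visual_tokens: (B, N, dim) flattened feature-map tokens
        # text_embeds:   (B, C, dim) one embedding per class name / prompt
        attended, _ = self.cross_attn(visual_tokens, text_embeds, text_embeds)
        return self.norm(visual_tokens + attended)  # residual keeps the visual signal


if __name__ == "__main__":
    branch = TextGuidedAttention()
    img = torch.randn(2, 400, 256)  # e.g. a 20x20 feature map, flattened
    txt = torch.randn(2, 10, 256)   # e.g. 10 class-name embeddings from a text encoder
    print(branch(img, txt).shape)   # torch.Size([2, 400, 256])
```

Because queries come from the image while keys and values come from the text, each spatial location is re-weighted by its affinity to the class vocabulary, and the residual connection preserves the original visual signal for locations that match no class prompt.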
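
Similarly, the RIOU formula itself is not reproduced in the abstract. As one illustration of how a rotation-stable box loss can be constructed, the sketch below uses a Gaussian Wasserstein distance surrogate, a known but different technique, not the authors' RIOU: each rotated box (cx, cy, w, h, θ) is mapped to a 2-D Gaussian whose covariance is identical for θ and θ + π, so the angular boundary discontinuity that destabilizes direct angle regression disappears.

```python
# Illustrative sketch only: a Gaussian-Wasserstein-distance box loss, a known
# rotation-stable surrogate. This is NOT the paper's RIOU formulation.
import torch


def box_to_gaussian(boxes: torch.Tensor):
    """Map rotated boxes (cx, cy, w, h, theta) to 2-D Gaussians.

    The covariance R diag(w^2/4, h^2/4) R^T is identical for theta and
    theta + pi, so equivalent orientations of a rectangle incur no penalty.
    """
    cx, cy, w, h, t = boxes.unbind(-1)
    cos, sin = torch.cos(t), torch.sin(t)
    R = torch.stack([cos, -sin, sin, cos], dim=-1).reshape(*boxes.shape[:-1], 2, 2)
    D = torch.diag_embed(torch.stack([w, h], dim=-1) ** 2 / 4)
    return torch.stack([cx, cy], dim=-1), R @ D @ R.transpose(-1, -2)


def gwd_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7):
    """Squared 2-Wasserstein distance between the two box Gaussians."""
    m1, S1 = box_to_gaussian(pred)
    m2, S2 = box_to_gaussian(target)
    center = ((m1 - m2) ** 2).sum(-1)
    trace = (S1 + S2).diagonal(dim1=-2, dim2=-1).sum(-1)
    # Closed form for Tr((S1^1/2 S2 S1^1/2)^1/2) with 2x2 SPD matrices.
    cross = (S1 @ S2).diagonal(dim1=-2, dim2=-1).sum(-1)
    dets = torch.clamp(torch.det(S1) * torch.det(S2), min=eps)
    w2 = center + trace - 2 * torch.sqrt(torch.clamp(cross + 2 * torch.sqrt(dets), min=eps))
    return torch.log1p(torch.clamp(w2, min=0))  # log keeps large boxes from dominating


if __name__ == "__main__":
    a = torch.tensor([[0.0, 0.0, 4.0, 2.0, 0.0]])
    b = torch.tensor([[0.0, 0.0, 4.0, 2.0, torch.pi]])  # same rectangle, flipped angle
    print(gwd_loss(a, b))  # ~0: theta and theta + pi are treated as identical
```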
Source journal
ISPRS Journal of Photogrammetry and Remote Sensing
Category: Engineering & Technology - Imaging Science & Photographic Technology
CiteScore: 21.00
Self-citation rate: 6.30%
Annual output: 273 articles
Review time: 40 days
About the journal: The ISPRS Journal of Photogrammetry and Remote Sensing (P&RS) is the official journal of the International Society for Photogrammetry and Remote Sensing (ISPRS). It serves as a platform for scientists and professionals worldwide working in disciplines that use photogrammetry, remote sensing, spatial information systems, computer vision, and related fields, facilitating the communication and dissemination of advances while acting as a comprehensive source of reference and archive. P&RS publishes high-quality, peer-reviewed research papers, preferably original and previously unpublished, covering scientific/research, technological-development, or application/practical aspects. Papers based on presentations at ISPRS meetings are also welcome when they constitute significant contributions to these fields. The journal particularly encourages submissions of broad scientific interest, innovative applications (especially in emerging fields), interdisciplinary work, topics that have received limited attention in P&RS or related journals, and new directions in scientific or professional realms. Theoretical papers should preferably include practical applications, while papers focusing on systems and applications should include a theoretical background.