Zero-shot object detection based on cross-modal guided clustering

IF 4.2 · CAS Tier 3 (Computer Science) · JCR Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)
Deqiang Cheng, Xingchen Xu, Haoxiang Zhang, Tianshu Song, He Jiang, Qiqi Kou
DOI: 10.1016/j.imavis.2025.105664 · Image and Vision Computing, Vol. 162, Article 105664 · Published 2025-10-01 (Journal Article)
Citations: 0

Abstract

Zero-shot object detection based on cross-modal guided clustering
At present, contrastive learning is widely used in Zero-Shot Object Detection (ZSD) and has been shown to reduce inter-class confusion. However, existing ZSD clustering algorithms operate spontaneously, without effective guidance, and may therefore cluster in the wrong places because they are confined to a single visual modality. Cross-modal alignment is difficult to achieve, and textual guidance can help produce ideal visual clustering. To address these problems, this paper proposes a novel zero-shot object detection method based on cross-modal guided clustering: a new ZSD approach that combines image-to-image contrast with an auxiliary image-to-text contrast during training. First, an instance-level cross-modal contrastive embedding (ICCE) loss is proposed, in which text similarities serve as dynamic weights that guide the model to focus on the most confusable categories while ignoring low-similarity ones. A cross-level cross-modal contrastive embedding (CCCE) loss built on ICCE is also designed to provide an ideal guided cluster center. Finally, a cross-modal triplet loss (CTL) is introduced that divides anchors into positive and negative anchors, addressing the difficulty of clustering negative samples effectively. The first two losses highlight class-level similarities to avoid misclassification among the most confusable categories, while the last focuses on capturing the hardest cases so that difficult instances are handled effectively. Experiments and comparisons with current state-of-the-art methods on three benchmark datasets demonstrate that the proposed method achieves better detection performance, especially when the number of training categories is limited.
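The abstract does not give the loss formulations, but the two core ideas can be sketched in a few lines: a contrastive loss whose negative classes are re-weighted by text similarity to the ground-truth class (the ICCE idea), and a hinge triplet loss between a visual anchor and positive/negative class text embeddings (the CTL idea). The following is a minimal NumPy sketch under those assumptions; function names, shapes, and the exact weighting scheme are illustrative, not the authors' implementation.

```python
import numpy as np

def text_weighted_contrastive_loss(img_emb, class_emb, labels, text_sim, tau=0.1):
    """InfoNCE-style loss where each negative class is weighted by its text
    similarity to the ground-truth class, so highly confusable categories
    dominate the denominator (hypothetical ICCE-style weighting)."""
    logits = img_emb @ class_emb.T / tau                      # (N, C) image-to-class scores
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))  # numerically stabilized
    losses = []
    for i, y in enumerate(labels):
        w = text_sim[y].copy()
        w[y] = 0.0                       # positive class excluded from the negatives
        neg = np.sum(w * exp[i])         # text-similar classes contribute most
        losses.append(-np.log(exp[i, y] / (exp[i, y] + neg + 1e-12)))
    return float(np.mean(losses))

def cross_modal_triplet_loss(anchor, pos_text, neg_text, margin=0.2):
    """Hinge triplet loss pulling a visual anchor toward its class text
    embedding and away from a hard negative class (hypothetical CTL sketch)."""
    d_pos = np.sum((anchor - pos_text) ** 2, axis=1)
    d_neg = np.sum((anchor - neg_text) ** 2, axis=1)
    return float(np.mean(np.maximum(0.0, d_pos - d_neg + margin)))

# Toy usage with L2-normalized random embeddings
rng = np.random.default_rng(0)
img = rng.normal(size=(4, 8)); img /= np.linalg.norm(img, axis=1, keepdims=True)
cls = rng.normal(size=(3, 8)); cls /= np.linalg.norm(cls, axis=1, keepdims=True)
sim = np.abs(cls @ cls.T)                # stand-in for text-text similarities
labels = np.array([0, 1, 2, 0])
l_icce = text_weighted_contrastive_loss(img, cls, labels, sim)
l_ctl = cross_modal_triplet_loss(img, cls[labels], cls[(labels + 1) % 3])
```

The key design point is the dynamic weight `w`: a fixed InfoNCE loss treats all negatives equally, whereas weighting by text similarity makes the gradient concentrate on the categories the paper identifies as most easily confused.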
Source journal: Image and Vision Computing (Engineering Technology · Engineering: Electrical & Electronic)
- CiteScore: 8.50
- Self-citation rate: 8.50%
- Articles per year: 143
- Review time: 7.8 months
Aims and scope: Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to strengthen a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.