{"title":"CLDTracker:用于视觉跟踪的综合语言描述","authors":"Mohamad Alansari, Sajid Javed, Iyyakutti Iyappan Ganapathi, Sara Alansari, Muzammal Naseer","doi":"10.1016/j.inffus.2025.103374","DOIUrl":null,"url":null,"abstract":"<div><div>Visual Object Tracking (VOT) remains a fundamental yet challenging task in computer vision due to dynamic appearance changes, occlusions, and background clutter. Traditional trackers, relying primarily on visual cues, often struggle in such complex scenarios. Recent advancements in Vision–Language Models (VLMs) have shown promise in semantic understanding for tasks like open-vocabulary detection and image captioning, suggesting their potential for VOT. However, the direct application of VLMs to VOT is hindered by critical limitations: the absence of a rich and comprehensive textual representation that semantically captures the target object’s nuances, limiting the effective use of language information; inefficient fusion mechanisms that fail to optimally integrate visual and textual features, preventing a holistic understanding of the target; and a lack of temporal modeling of the target’s evolving appearance in the language domain, leading to a disconnect between the initial description and the object’s subsequent visual changes. To bridge these gaps and unlock the full potential of VLMs for VOT, we propose CLDTracker, a novel <strong>C</strong>omprehensive <strong>L</strong>anguage <strong>D</strong>escription framework for robust visual <strong>Track</strong>ing. Our tracker introduces a dual-branch architecture consisting of a textual and a visual branch. In the textual branch, we construct a rich bag of textual descriptions derived by harnessing the powerful VLMs such as CLIP and GPT-4V, enriched with semantic and contextual cues to address the lack of rich textual representation. We further propose a <strong>T</strong>emporal <strong>T</strong>ext <strong>F</strong>eature <strong>U</strong>pdate <strong>M</strong>echanism (TTFUM) to adapt these descriptions across frames, capturing evolving target appearances and tackling the absence of temporal modeling. In parallel, the visual branch extracts features using a Vision Transformer (ViT), and an attention-based cross-modal correlation head fuses both modalities for accurate target prediction, addressing the inefficient fusion mechanisms. Experiments on six standard VOT benchmarks demonstrate that CLDTracker achieves State-of-The-Art (SOTA) performance, validating the effectiveness of leveraging robust and temporally-adaptive vision–language representations for tracking. Code and models are publicly available at: <span><span>https://github.com/HamadYA/CLDTracker</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"124 ","pages":"Article 103374"},"PeriodicalIF":14.7000,"publicationDate":"2025-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"CLDTracker: A Comprehensive Language Description for visual Tracking\",\"authors\":\"Mohamad Alansari, Sajid Javed, Iyyakutti Iyappan Ganapathi, Sara Alansari, Muzammal Naseer\",\"doi\":\"10.1016/j.inffus.2025.103374\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Visual Object Tracking (VOT) remains a fundamental yet challenging task in computer vision due to dynamic appearance changes, occlusions, and background clutter. 
Traditional trackers, relying primarily on visual cues, often struggle in such complex scenarios. Recent advancements in Vision–Language Models (VLMs) have shown promise in semantic understanding for tasks like open-vocabulary detection and image captioning, suggesting their potential for VOT. However, the direct application of VLMs to VOT is hindered by critical limitations: the absence of a rich and comprehensive textual representation that semantically captures the target object’s nuances, limiting the effective use of language information; inefficient fusion mechanisms that fail to optimally integrate visual and textual features, preventing a holistic understanding of the target; and a lack of temporal modeling of the target’s evolving appearance in the language domain, leading to a disconnect between the initial description and the object’s subsequent visual changes. To bridge these gaps and unlock the full potential of VLMs for VOT, we propose CLDTracker, a novel <strong>C</strong>omprehensive <strong>L</strong>anguage <strong>D</strong>escription framework for robust visual <strong>Track</strong>ing. Our tracker introduces a dual-branch architecture consisting of a textual and a visual branch. In the textual branch, we construct a rich bag of textual descriptions derived by harnessing the powerful VLMs such as CLIP and GPT-4V, enriched with semantic and contextual cues to address the lack of rich textual representation. We further propose a <strong>T</strong>emporal <strong>T</strong>ext <strong>F</strong>eature <strong>U</strong>pdate <strong>M</strong>echanism (TTFUM) to adapt these descriptions across frames, capturing evolving target appearances and tackling the absence of temporal modeling. In parallel, the visual branch extracts features using a Vision Transformer (ViT), and an attention-based cross-modal correlation head fuses both modalities for accurate target prediction, addressing the inefficient fusion mechanisms. Experiments on six standard VOT benchmarks demonstrate that CLDTracker achieves State-of-The-Art (SOTA) performance, validating the effectiveness of leveraging robust and temporally-adaptive vision–language representations for tracking. 
Code and models are publicly available at: <span><span>https://github.com/HamadYA/CLDTracker</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":50367,\"journal\":{\"name\":\"Information Fusion\",\"volume\":\"124 \",\"pages\":\"Article 103374\"},\"PeriodicalIF\":14.7000,\"publicationDate\":\"2025-06-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Fusion\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1566253525004476\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1566253525004476","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
CLDTracker: A Comprehensive Language Description for visual Tracking
Visual Object Tracking (VOT) remains a fundamental yet challenging task in computer vision due to dynamic appearance changes, occlusions, and background clutter. Traditional trackers, relying primarily on visual cues, often struggle in such complex scenarios. Recent advances in Vision–Language Models (VLMs) have shown promise in semantic understanding for tasks such as open-vocabulary detection and image captioning, suggesting their potential for VOT. However, the direct application of VLMs to VOT is hindered by critical limitations: the absence of a rich and comprehensive textual representation that semantically captures the target object’s nuances, limiting the effective use of language information; inefficient fusion mechanisms that fail to optimally integrate visual and textual features, preventing a holistic understanding of the target; and a lack of temporal modeling of the target’s evolving appearance in the language domain, leading to a disconnect between the initial description and the object’s subsequent visual changes. To bridge these gaps and unlock the full potential of VLMs for VOT, we propose CLDTracker, a novel Comprehensive Language Description framework for robust visual Tracking. Our tracker introduces a dual-branch architecture consisting of a textual and a visual branch. In the textual branch, we construct a rich bag of textual descriptions by harnessing powerful VLMs such as CLIP and GPT-4V, enriched with semantic and contextual cues, to address the lack of rich textual representation. We further propose a Temporal Text Feature Update Mechanism (TTFUM) to adapt these descriptions across frames, capturing evolving target appearances and tackling the absence of temporal modeling. In parallel, the visual branch extracts features using a Vision Transformer (ViT), and an attention-based cross-modal correlation head fuses both modalities for accurate target prediction, addressing the inefficient fusion mechanisms. Experiments on six standard VOT benchmarks demonstrate that CLDTracker achieves state-of-the-art (SOTA) performance, validating the effectiveness of leveraging robust and temporally adaptive vision–language representations for tracking. Code and models are publicly available at: https://github.com/HamadYA/CLDTracker.
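The abstract outlines, but does not detail, how the two branches interact. The sketch below is a minimal, illustrative PyTorch wiring of an attention-based cross-modal correlation head together with a simple temporal update of the text features, approximated here by an exponential moving average. All class names, dimensions, and the EMA rule are assumptions made for illustration; they are not taken from the actual CLDTracker implementation, which is available at the repository linked above.

```python
import torch
import torch.nn as nn


class CrossModalCorrelationHead(nn.Module):
    """Hypothetical attention-based fusion head: visual tokens query the bag of
    textual-description embeddings, and the fused tokens are scored into a
    per-token target response."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.score = nn.Linear(dim, 1)

    def forward(self, visual_tokens: torch.Tensor, text_tokens: torch.Tensor) -> torch.Tensor:
        # visual_tokens: (B, N_v, dim) from a ViT backbone
        # text_tokens:   (B, N_t, dim) from a CLIP-style text encoder
        fused, _ = self.cross_attn(query=visual_tokens, key=text_tokens, value=text_tokens)
        fused = self.norm(visual_tokens + fused)   # residual fusion
        return self.score(fused).squeeze(-1)       # (B, N_v) response map


class TemporalTextUpdate:
    """Illustrative stand-in for the TTFUM: blends each frame's text features
    into a running state so the description follows appearance changes.
    The EMA rule is an assumption, not the paper's actual mechanism."""

    def __init__(self, momentum: float = 0.9):
        self.momentum = momentum
        self.state = None

    def __call__(self, text_feat: torch.Tensor) -> torch.Tensor:
        if self.state is None:
            self.state = text_feat.detach()
        else:
            self.state = self.momentum * self.state + (1.0 - self.momentum) * text_feat.detach()
        return self.state


# Minimal per-frame usage with placeholder features (encoders omitted).
B, N_v, N_t, dim = 1, 196, 16, 256
visual_tokens = torch.randn(B, N_v, dim)       # visual branch output
text_tokens = torch.randn(B, N_t, dim)         # textual branch output

ttfum = TemporalTextUpdate(momentum=0.9)
head = CrossModalCorrelationHead(dim=dim)

updated_text = ttfum(text_tokens)              # temporally adapted text features
response = head(visual_tokens, updated_text)   # (B, 196) target response
print(response.shape)
```

In this sketch the text embeddings serve as keys and values so that every visual token can attend to the language description before scoring; the tracker's actual fusion and update rules may differ from this simplified wiring.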
Journal introduction:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, with a focus on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.