TriMatch: Triple Matching for Text-to-Image Person Re-Identification
Authors: Shuanglin Yan; Neng Dong; Shuang Li; Huafeng Li
DOI: 10.1109/LSP.2025.3534689
IEEE Signal Processing Letters, vol. 32, pp. 806-810. Published 2025-01-27 (Journal Article, JCR Q2, Engineering, Electrical & Electronic). Available at https://ieeexplore.ieee.org/document/10855499/
Citations: 0
Abstract
Text-to-image person re-identification (TIReID) is a cross-modal retrieval task that aims to retrieve target person images based on a given text description. Existing methods primarily focus on mining the semantic associations across modalities, relying on the matching between heterogeneous features for retrieval. However, due to the inherent heterogeneity gap between modalities, it is challenging to establish precise semantic associations, particularly at the level of fine-grained correspondences, which often leads to incorrect retrieval results. To address this issue, this letter proposes an innovative Triple Matching (TriMatch) framework that integrates cross-modal (image-text) matching and unimodal (image-image, text-text) matching for high-precision person retrieval. The framework introduces a generation task that performs cross-modal (image-to-text and text-to-image) feature generation and intra-modal feature alignment to achieve unimodal matching. By incorporating the generation task, TriMatch considers not only the semantic correlations between modalities but also the semantic consistency within single modalities, thereby effectively enhancing the accuracy of target person retrieval. Extensive experiments on multiple datasets demonstrate the superiority of TriMatch over existing methods.
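The triple-matching idea described in the abstract can be sketched as a scoring function that combines one cross-modal similarity (image-text) with two unimodal similarities computed against generated features (text vs. text generated from the image, and image vs. image features generated from the text). The function names, the cosine-similarity choice, and the weighting factor `alpha` below are illustrative assumptions for a minimal numpy sketch, not the authors' actual implementation:

```python
import numpy as np

def cosine_sim(a, b):
    # Row-wise cosine similarity between two feature matrices (n, d) and (m, d).
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def triple_matching_scores(img_feats, txt_feats,
                           gen_txt_feats, gen_img_feats, alpha=0.5):
    """Combine cross-modal and unimodal similarities (hypothetical sketch).

    img_feats:     real image features, shape (n, d)
    txt_feats:     real text features, shape (m, d)
    gen_txt_feats: text features generated from images (image-to-text), (n, d)
    gen_img_feats: image features generated from texts (text-to-image), (m, d)
    alpha:         assumed weight on the unimodal terms
    """
    # Cross-modal matching: image-text similarity.
    cross = cosine_sim(img_feats, txt_feats)
    # Unimodal text-text matching: generated text vs. real text.
    tt = cosine_sim(gen_txt_feats, txt_feats)
    # Unimodal image-image matching: real image vs. generated image features.
    ii = cosine_sim(img_feats, gen_img_feats)
    return cross + alpha * (tt + ii)
```

At retrieval time one would rank gallery images for a query text by this combined score; the point of the unimodal terms is that a correct match should also be consistent within each single modality, as the abstract argues.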
Journal description:
The IEEE Signal Processing Letters is a monthly, archival publication designed to provide rapid dissemination of original, cutting-edge ideas and timely, significant contributions in signal, image, speech, language, and audio processing. Papers published in the Letters can be presented within one year of their appearance at signal processing conferences such as ICASSP, GlobalSIP, and ICIP, as well as at several workshops organized by the Signal Processing Society.