Jiale Du, Yang Liu, Xinbo Gao, Jungong Han, Lei Zhang
{"title":"Zero-Shot Sketch-Based Image Retrieval with teacher-guided and student-centered cross-modal bidirectional knowledge distillation","authors":"Jiale Du , Yang Liu , Xinbo Gao , Jungong Han , Lei Zhang","doi":"10.1016/j.patcog.2025.111529","DOIUrl":null,"url":null,"abstract":"<div><div>In the context of zero-shot learning, the task of using unseen-class sketches as queries to retrieve real images is referred to as Zero-Shot Sketch-Based Image Retrieval (ZS-SBIR). The ZS-SBIR task aims to generalize knowledge learned from known categories to unknown ones. Current research primarily relies on fine-tuning networks via loss functions or unidirectionally extracting knowledge from fixed-parameter teacher models for training student models. However, unidirectional knowledge extraction from teacher models often lacks mutual learning and knowledge alignment between the teacher and student models, while fine-tuning networks via loss functions struggles to handle both photo and sketch modalities simultaneously. Therefore, we designed a modal perception and distribution alignment scheme based on gradient weighting to explore both photo and sketch features bidirectionally and deeply investigate the relationships between different modalities. Building on this, we propose a teacher-guided and student-centered cross-modal bidirectional knowledge distillation framework. During training, the student and teacher models mutually learn discriminative information based on the relationships between different modalities and synchronize their parameters under the guidance of the teacher model, thus effectively achieving cross-modal alignment. 
Extensive experiments conducted on the TU-Berlin Ext, Sketchy Ext and QuickDraw Ext datasets demonstrate that our method significantly enhances retrieval performance.</div></div>","PeriodicalId":49713,"journal":{"name":"Pattern Recognition","volume":"164 ","pages":"Article 111529"},"PeriodicalIF":7.5000,"publicationDate":"2025-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S003132032500189X","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
In the context of zero-shot learning, the task of using unseen-class sketches as queries to retrieve real images is referred to as Zero-Shot Sketch-Based Image Retrieval (ZS-SBIR). The ZS-SBIR task aims to generalize knowledge learned from known categories to unknown ones. Current research primarily relies on fine-tuning networks via loss functions or unidirectionally extracting knowledge from fixed-parameter teacher models for training student models. However, unidirectional knowledge extraction from teacher models often lacks mutual learning and knowledge alignment between the teacher and student models, while fine-tuning networks via loss functions struggles to handle both photo and sketch modalities simultaneously. Therefore, we designed a modal perception and distribution alignment scheme based on gradient weighting to explore both photo and sketch features bidirectionally and deeply investigate the relationships between different modalities. Building on this, we propose a teacher-guided and student-centered cross-modal bidirectional knowledge distillation framework. During training, the student and teacher models mutually learn discriminative information based on the relationships between different modalities and synchronize their parameters under the guidance of the teacher model, thus effectively achieving cross-modal alignment. Extensive experiments conducted on the TU-Berlin Ext, Sketchy Ext and QuickDraw Ext datasets demonstrate that our method significantly enhances retrieval performance.
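The paper's core mechanism, bidirectional knowledge distillation with teacher-guided parameter synchronization, can be illustrated with a minimal numerical sketch. This is a hypothetical toy example, not the authors' implementation: it shows a symmetric (two-way) KL distillation loss between student and teacher predictions, plus an exponential-moving-average (EMA) update as one common way a teacher can guide the student's parameter synchronization. The function names and the momentum value are illustrative assumptions.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax, computed stably."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    """Mean KL(p || q) over the batch."""
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1).mean())

def bidirectional_kd_loss(student_logits, teacher_logits, temperature=4.0):
    """Symmetric distillation: the student mimics the teacher AND the
    teacher distribution is pulled toward the student, in contrast to
    one-way distillation from a frozen teacher."""
    p_student = softmax(student_logits, temperature)
    p_teacher = softmax(teacher_logits, temperature)
    return kl_divergence(p_teacher, p_student) + kl_divergence(p_student, p_teacher)

def ema_update(teacher_params, student_params, momentum=0.99):
    """Teacher-guided synchronization: each teacher parameter is an
    exponential moving average of the corresponding student parameter."""
    return [momentum * tp + (1 - momentum) * sp
            for tp, sp in zip(teacher_params, student_params)]
```

When both models produce identical logits the symmetric loss is zero; any disagreement yields a positive loss, so minimizing it aligns the two distributions from both directions rather than only pulling the student toward a fixed target.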
About the journal:
The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.