Hyperbolic Insights With Knowledge Distillation for Cross-Domain Few-Shot Learning

Xi Yang;Dechen Kong;Nannan Wang;Xinbo Gao
DOI: 10.1109/TIP.2025.3551647
Journal: IEEE Transactions on Image Processing, vol. 34, pp. 1921-1933
Published: 2025-03-24
URL: https://ieeexplore.ieee.org/document/10938027/
Citations: 0

Abstract

Cross-domain few-shot learning aims to achieve swift generalization between a source domain and a target domain using a limited number of images. Current research predominantly relies on generalized feature embeddings, employing metric classifiers in Euclidean space for classification. However, due to existing disparities among different data domains, attaining generalized features in the embedding becomes challenging. Additionally, the rise in data domains leads to high-dimensional Euclidean spaces. To address the above problems, we introduce a cross-domain few-shot learning method named Hyperbolic Insights with Knowledge Distillation (HIKD). By integrating knowledge distillation, it enhances the model’s generalization performance, thereby significantly improving task performance. Hyperbolic space, in comparison to Euclidean space, offers a larger capacity and supports the learning of hierarchical structures among images, which can aid generalized learning across different data domains. We therefore map Euclidean features to hyperbolic space via hyperbolic embedding and apply a hyperbolic fitting distillation method in the meta-training phase to obtain a unified multi-domain generalization representation. In the meta-testing phase, accounting for biases between the source and target domains, we present a hyperbolic adaptive module that adjusts the embedded features to eliminate the inter-domain gap. Experiments on the Meta-Dataset demonstrate that HIKD outperforms state-of-the-art methods with an average accuracy of 80.6%.
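The hyperbolic embedding step the abstract describes is commonly realized by projecting Euclidean features onto the Poincaré ball via the exponential map at the origin, and classifying by geodesic (hyperbolic) distance to class prototypes. The sketch below is a minimal, generic illustration of those two standard operations, not the authors' implementation; the curvature parameter `c` and function names are assumptions for illustration.

```python
import numpy as np

def exp_map_origin(v, c=1.0):
    """Project a Euclidean feature vector onto the Poincare ball of
    curvature -c via the exponential map at the origin."""
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return v
    sqrt_c = np.sqrt(c)
    # Output norm is tanh(sqrt(c)*||v||)/sqrt(c) < 1/sqrt(c): inside the ball.
    return np.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def poincare_distance(x, y, c=1.0, eps=1e-9):
    """Geodesic distance between two points of the Poincare ball,
    d(x, y) = (1/sqrt(c)) * arccosh(1 + 2c||x-y||^2 /
              ((1 - c||x||^2)(1 - c||y||^2)))."""
    sqrt_c = np.sqrt(c)
    diff2 = np.sum((x - y) ** 2)
    denom = (1.0 - c * np.sum(x ** 2)) * (1.0 - c * np.sum(y ** 2))
    arg = 1.0 + 2.0 * c * diff2 / max(denom, eps)
    return np.arccosh(arg) / sqrt_c

# Example: embed a feature and measure its hyperbolic distance to the origin.
feature = np.array([0.5, 0.5])
point = exp_map_origin(feature)          # lies strictly inside the unit ball
d = poincare_distance(np.zeros(2), point)
```

A metric classifier in this space would assign a query to the class whose hyperbolic prototype minimizes `poincare_distance`, replacing the Euclidean distance used by conventional prototype-based few-shot methods.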