Self-Supervised Continuous Meta-Learning for Few-shot Image Classification

Suyan He, Yingjian Li
{"title":"Self-Supervised Continuous Meta-Learning for Few-shot Image Classification","authors":"Suyan He, Yingjian Li","doi":"10.1109/acait53529.2021.9731334","DOIUrl":null,"url":null,"abstract":"Few-shot classification aims to adapt the knowledge learned from base classes with sufficient data to new classes with limited data, where meta-learning methods are usually leveraged for this challenging task. However, most existing algorithms suffer from insufficient representation and testing bias issues, accordingly failing to exploit useful semantic information while being prone to cause the gap of classification accuracy between training classes and testing classes. To this end, we propose the Self-Supervised Continuous Meta-Learning (SS-CML) framework to simultaneously handle the mentioned problems, which consists of two key modules. i.e., Self-Supervised Embedding network and Self-Supervised GNN. Specifically, Self-Supervised Embedding network can extract informative semantic information from training images so that the learned prototype are more representative for the classification task. Moreover, Self-Supervised GNN learn reactions between nodes without true labels, which can improve the reliability of knowledge prior to classify images of new classes, thereby reducing the excessive dependence of training classes and alleviating the testing bias issue. Furthermore, these two modules are jointly leveraged in our SS-CML to generalize the prior knowledge to novel classes. Extensive experimental results on MiniImageNet and TieredImageNet show up the effectiveness of both self-supervised branches which boost classification performance.","PeriodicalId":173633,"journal":{"name":"2021 5th Asian Conference on Artificial Intelligence Technology (ACAIT)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 5th Asian Conference on Artificial Intelligence Technology (ACAIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/acait53529.2021.9731334","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Few-shot classification aims to adapt knowledge learned from base classes with sufficient data to new classes with limited data, and meta-learning methods are usually leveraged for this challenging task. However, most existing algorithms suffer from insufficient representations and testing bias: they fail to exploit useful semantic information and are prone to a gap in classification accuracy between training classes and testing classes. To this end, we propose the Self-Supervised Continuous Meta-Learning (SS-CML) framework to handle both problems simultaneously. It consists of two key modules, i.e., a Self-Supervised Embedding network and a Self-Supervised GNN. Specifically, the Self-Supervised Embedding network extracts informative semantic features from training images so that the learned prototypes are more representative for the classification task. Moreover, the Self-Supervised GNN learns relations between nodes without true labels, which improves the reliability of the prior knowledge used to classify images of new classes, thereby reducing excessive dependence on the training classes and alleviating the testing bias issue. Furthermore, these two modules are jointly leveraged in SS-CML to generalize prior knowledge to novel classes. Extensive experimental results on MiniImageNet and TieredImageNet demonstrate the effectiveness of both self-supervised branches, which boost classification performance.
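The abstract does not include implementation details, so the following is a minimal PyTorch sketch of the two ideas it describes: prototype-based episodic classification with a self-supervised auxiliary branch on the embedding network, and a label-free relation (GNN) step over episode samples. The rotation-prediction pretext task, the soft-adjacency update, and all module names and hyperparameters here are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of SS-CML's two modules. The paper's exact backbone,
# self-supervised task, and GNN update rule are unspecified; everything
# below is an assumed, ProtoNet-style stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfSupervisedEmbedding(nn.Module):
    """Backbone plus an auxiliary rotation-prediction head (assumed SS task)."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(  # stand-in for the paper's backbone
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.rot_head = nn.Linear(feat_dim, 4)  # predicts 0/90/180/270 degrees

    def forward(self, x):
        z = self.encoder(x)
        return z, self.rot_head(z)

class RelationGNNLayer(nn.Module):
    """One message-passing step over episode samples (assumed form of the
    Self-Supervised GNN). Edges come from feature similarity, so no true
    class labels are needed."""
    def __init__(self, dim):
        super().__init__()
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, nodes):
        adj = F.softmax(-torch.cdist(nodes, nodes) ** 2, dim=-1)
        msgs = adj @ nodes  # aggregate neighbor features
        return F.relu(self.update(torch.cat([nodes, msgs], dim=-1)))

def prototypes(support_feats, support_labels, n_way):
    """Class prototypes = per-class mean of support embeddings."""
    return torch.stack([
        support_feats[support_labels == c].mean(dim=0) for c in range(n_way)
    ])

def episode_loss(model, gnn, support_x, support_y, query_x, query_y,
                 n_way, ss_weight=0.5):
    z_s, _ = model(support_x)
    z_q, _ = model(query_x)
    # Jointly refine all episode embeddings with one label-free GNN step.
    z = gnn(torch.cat([z_s, z_q], dim=0))
    z_s, z_q = z[:z_s.size(0)], z[z_s.size(0):]
    protos = prototypes(z_s, support_y, n_way)
    # Classify queries by negative squared distance to prototypes.
    logits = -torch.cdist(z_q, protos) ** 2
    cls_loss = F.cross_entropy(logits, query_y)
    # Self-supervised branch: rotate query images and predict the rotation.
    k = torch.randint(0, 4, (query_x.size(0),))
    rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                           for img, r in zip(query_x, k)])
    _, rot_logits = model(rotated)
    ss_loss = F.cross_entropy(rot_logits, k)
    return cls_loss + ss_weight * ss_loss

# Example episode: 5-way 1-shot with 75 queries of 32x32 images.
model, gnn = SelfSupervisedEmbedding(64), RelationGNNLayer(64)
loss = episode_loss(model, gnn,
                    torch.randn(5, 3, 32, 32), torch.arange(5),
                    torch.randn(75, 3, 32, 32), torch.randint(0, 5, (75,)),
                    n_way=5)
```

Note how the GNN step builds its adjacency purely from pairwise feature similarity, which mirrors the abstract's claim that relations between nodes are learned without true labels; the rotation loss likewise supervises the embedding network using only the images themselves.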