Which images can be effectively learnt from self-supervised learning?

IF 3.3 | CAS Tier 3, Computer Science | JCR Q2, Computer Science, Artificial Intelligence
Michalis Lazarou, Sara Atito, Muhammad Awais, Josef Kittler
{"title":"Which images can be effectively learnt from self-supervised learning?","authors":"Michalis Lazarou ,&nbsp;Sata Atito ,&nbsp;Muhammad Awais ,&nbsp;Josef Kittler","doi":"10.1016/j.patrec.2025.09.003","DOIUrl":null,"url":null,"abstract":"<div><div>Self-supervised learning has shown unprecedented success for learning expressive representations that can be used effectively to solve downstream tasks. However, while the impressive results of self-supervised learning are undeniable there is still a certain mystery regarding how self-supervised learning models learn, what features are they learning and most importantly which examples are hard to learn. Contrastive learning is one of the prominent lines of research in self-supervised learning, where a subcategory of methods relies on knowledge-distillation between a student network and a teacher network which is an exponentially moving average of the student, initially proposed by the seminal work of DINO. In this work we investigate models trained using this family of self-supervised methods and reveal certain properties about them. Specifically, we propose a novel perspective on understanding which examples and which classes are difficult to be learnt effectively during training through the lens of information theory.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"198 ","pages":"Pages 8-13"},"PeriodicalIF":3.3000,"publicationDate":"2025-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition Letters","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167865525003137","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

Self-supervised learning has shown unprecedented success in learning expressive representations that can be used effectively to solve downstream tasks. However, while its impressive results are undeniable, there remains considerable mystery about how self-supervised models learn, what features they learn, and, most importantly, which examples are hard to learn. Contrastive learning is one of the prominent lines of research in self-supervised learning; a subcategory of these methods relies on knowledge distillation between a student network and a teacher network that is an exponential moving average of the student, as initially proposed by the seminal work of DINO. In this work we investigate models trained with this family of self-supervised methods and reveal certain properties about them. Specifically, we propose a novel perspective, through the lens of information theory, on understanding which examples and which classes are difficult to learn effectively during training.
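For readers unfamiliar with the DINO-style setup referenced in the abstract, the sketch below illustrates the two mechanisms it relies on: a teacher updated as an exponential moving average (EMA) of the student, and a distillation loss between the two networks' outputs on different augmented views. The per-example entropy score at the end is only an illustrative stand-in for an information-theoretic difficulty measure; the toy backbone, the temperature values, and the momentum coefficient are assumptions for demonstration and are not taken from the paper.

```python
# Minimal sketch of DINO-style self-distillation with an EMA teacher, plus an
# illustrative (assumed, not the paper's) entropy-based difficulty score.
import torch
import torch.nn as nn
import torch.nn.functional as F


def ema_update(student: nn.Module, teacher: nn.Module, momentum: float = 0.996) -> None:
    """Teacher parameters follow an exponential moving average of the student."""
    with torch.no_grad():
        for p_s, p_t in zip(student.parameters(), teacher.parameters()):
            p_t.mul_(momentum).add_(p_s.detach(), alpha=1.0 - momentum)


def distillation_loss(student_logits, teacher_logits, t_s=0.1, t_t=0.04):
    """Cross-entropy between sharpened teacher targets and the student's prediction."""
    teacher_probs = F.softmax(teacher_logits / t_t, dim=-1).detach()
    student_logp = F.log_softmax(student_logits / t_s, dim=-1)
    return -(teacher_probs * student_logp).sum(dim=-1).mean()


def example_difficulty(teacher_logits, t_t=0.04):
    """Illustrative difficulty proxy: entropy of the teacher's output distribution
    (higher entropy = less confident teacher, plausibly a harder example)."""
    probs = F.softmax(teacher_logits / t_t, dim=-1)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)


if __name__ == "__main__":
    # Toy linear backbones stand in for the vision transformers used in practice.
    student = nn.Linear(384, 256)
    teacher = nn.Linear(384, 256)
    teacher.load_state_dict(student.state_dict())  # teacher starts as a copy
    for p in teacher.parameters():
        p.requires_grad_(False)                    # teacher receives no gradients

    x_view1 = torch.randn(8, 384)                  # two augmented views of a batch
    x_view2 = torch.randn(8, 384)
    loss = distillation_loss(student(x_view1), teacher(x_view2))
    loss.backward()                                # gradients flow only to the student
    ema_update(student, teacher)                   # teacher tracks the student via EMA

    print(loss.item(), example_difficulty(teacher(x_view2)).shape)
```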
Source journal

Pattern Recognition Letters (Engineering and Technology - Computer Science: Artificial Intelligence)
CiteScore: 12.40
Self-citation rate: 5.90%
Articles per year: 287
Review time: 9.1 months
Journal scope: Pattern Recognition Letters aims at rapid publication of concise articles of a broad interest in pattern recognition. Subject areas include all the current fields of interest represented by the Technical Committees of the International Association of Pattern Recognition, and other developing themes involving learning and recognition.