Understanding the Dimensional Need of Noncontrastive Learning.

IF 9.4 · CAS Region 1 (Computer Science) · JCR Q1, Automation & Control Systems
Zhexiao Cao, Lei Huang, Tian Wang, Yinquan Wang, Jingang Shi, Aichun Zhu, Tianyun Shi, Hichem Snoussi
{"title":"Understanding the Dimensional Need of Noncontrastive Learning.","authors":"Zhexiao Cao, Lei Huang, Tian Wang, Yinquan Wang, Jingang Shi, Aichun Zhu, Tianyun Shi, Hichem Snoussi","doi":"10.1109/TCYB.2025.3577745","DOIUrl":null,"url":null,"abstract":"<p><p>Noncontrastive self-supervised learning methods offer an effective alternative to contrastive approaches by avoiding the need for negative samples to avoid representation collapse. Noncontrastive learning methods explicitly or implicitly optimize the representation space, yet they often require large representation dimensions, leading to dimensional inefficiency. To provide negative samples, contrastive learning methods often require large batch sizes, thus regarded as sample inefficient, while noncontrastive learning methods require large representation dimensions, thus regarded as dimension inefficient. Although we have some understanding of the noncontrastive learning method, theoretical analysis of such phenomenon still remains largely unexplored. We present a theoretical analysis of the dimensional need for noncontrastive learning. We investigate the transfer between upstream representation learning and downstream tasks' performance, demonstrating how noncontrastive methods implicitly increase interclass distances within the representation space and how the distance affects the model performance of evaluation performance. We prove that the performance of noncontrastive methods is affected by the output dimension and the number of latent classes, and illustrate why performance degrades significantly when the output dimension is substantially smaller than the number of latent classes. We demonstrate our findings through experiments on image classification experiments, and enrich the verification in audio, graph and text modalities. We also perform empirical evaluation for image models on extensive detection and segmentation tasks beyond classification that show satisfactory correspondence to our theorem.</p>","PeriodicalId":13112,"journal":{"name":"IEEE Transactions on Cybernetics","volume":"PP ","pages":""},"PeriodicalIF":9.4000,"publicationDate":"2025-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Cybernetics","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/TCYB.2025.3577745","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Noncontrastive self-supervised learning methods offer an effective alternative to contrastive approaches, preventing representation collapse without the need for negative samples. Noncontrastive methods explicitly or implicitly optimize the representation space, yet they often require large representation dimensions, leading to dimensional inefficiency. Contrastive methods typically require large batch sizes to supply negative samples and are therefore regarded as sample inefficient, whereas noncontrastive methods require large representation dimensions and are therefore regarded as dimension inefficient. Although noncontrastive learning is partially understood, a theoretical analysis of this phenomenon remains largely unexplored. We present a theoretical analysis of the dimensional need of noncontrastive learning. We investigate the transfer between upstream representation learning and downstream task performance, demonstrating how noncontrastive methods implicitly increase interclass distances in the representation space and how these distances affect downstream evaluation performance. We prove that the performance of noncontrastive methods is governed by the output dimension and the number of latent classes, and illustrate why performance degrades significantly when the output dimension is substantially smaller than the number of latent classes. We demonstrate our findings through image classification experiments and extend the verification to audio, graph, and text modalities. We also empirically evaluate image models on extensive detection and segmentation tasks beyond classification, which show satisfactory correspondence to our theorem.
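
The intuition behind the dimension-versus-classes claim can be illustrated with a minimal sketch that is not taken from the paper: it spreads C hypothetical class means on the unit sphere of a d-dimensional representation space by a simple repulsion procedure, then reports the smallest pairwise interclass distance. The function name `min_interclass_distance` and all hyperparameters (steps, learning rate) are assumptions made for illustration only; the qualitative trend, that the achievable minimum interclass distance shrinks sharply once d falls well below C, mirrors the abstract's statement that performance degrades when the output dimension is substantially smaller than the number of latent classes.

```python
# Illustrative sketch only (not the paper's experiment or code): spread C class
# means in a d-dimensional space and measure how well separated they can be.
import numpy as np


def min_interclass_distance(num_classes: int, dim: int,
                            steps: int = 500, lr: float = 0.1,
                            seed: int = 0) -> float:
    """Place `num_classes` unit vectors in `dim` dimensions, push them apart by
    gradient descent on the sum of squared cosine similarities, and return the
    smallest pairwise Euclidean distance that results."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(num_classes, dim))
    z /= np.linalg.norm(z, axis=1, keepdims=True)
    for _ in range(steps):
        sims = z @ z.T                   # pairwise cosine similarities
        np.fill_diagonal(sims, 0.0)      # ignore self-similarity
        grad = sims @ z                  # gradient of 0.5 * sum of squared similarities
        z -= lr * grad                   # repel each class mean from the others
        z /= np.linalg.norm(z, axis=1, keepdims=True)
    dists = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    return float(dists.min())


if __name__ == "__main__":
    num_classes = 100
    for dim in (8, 32, 128, 512):
        d_min = min_interclass_distance(num_classes, dim)
        print(f"C={num_classes:4d}  d={dim:4d}  min interclass distance={d_min:.3f}")
```

Running the sketch with C = 100 latent classes shows the minimum interclass distance approaching its sphere-packing limit when d is comparable to or larger than C, and collapsing toward small values when d is an order of magnitude smaller, which is the regime the theorem identifies as dimensionally insufficient.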

Source Journal

IEEE Transactions on Cybernetics
Categories: Computer Science, Artificial Intelligence; Computer Science, Cybernetics
CiteScore: 25.40
Self-citation rate: 11.00%
Articles per year: 1869
Journal description: The scope of the IEEE Transactions on Cybernetics includes computational approaches to the field of cybernetics. Specifically, the transactions welcomes papers on communication and control across machines, or between machines, humans, and organizations. The scope includes such areas as computational intelligence, computer vision, neural networks, genetic algorithms, machine learning, fuzzy systems, cognitive systems, decision making, and robotics, to the extent that they contribute to the theme of cybernetics or demonstrate an application of cybernetics principles.