Long-tail learning with rebalanced contrastive loss

IF 6.5 · CAS Tier 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Charika De Alvis, Dishanika Denipitiyage, Suranga Seneviratne
{"title":"具有再平衡对比损失的长尾学习","authors":"Charika De Alvis,&nbsp;Dishanika Denipitiyage,&nbsp;Suranga Seneviratne","doi":"10.1016/j.neucom.2025.131601","DOIUrl":null,"url":null,"abstract":"<div><div>Integrating supervised contrastive loss to cross entropy-based classification has recently been proposed as a solution to address the long-tail learning problem. However, when the class imbalance ratio is high, it requires adjusting the supervised contrastive loss to support the tail classes, as the conventional contrastive learning is biased towards head classes by default. To this end, we present Rebalanced Contrastive Learning (RCL), an efficient means to increase the long-tail classification accuracy by addressing three main aspects: 1. Feature space balancedness – Equal division of the feature space among all the classes 2. Intra-Class compactness – Reducing the distance between same-class embeddings 3. Regularization – Enforcing larger margins for tail classes to reduce overfitting. RCL adopts class frequency-based SoftMax loss balancing to supervised contrastive learning loss and exploits scalar multiplied features fed to the contrastive learning loss to enforce compactness. We implement RCL on the Balanced Contrastive Learning (BCL) Framework, which has the SOTA performance. Our experiments on three benchmark datasets CIFAR10-LT,CIFAR100-LT and ImageNet-LT demonstrate the richness of the learnt embeddings and increased top-1 balanced accuracy RCL provides to the BCL framework. We further demonstrate that the performance of RCL as a standalone loss also achieves state-of-the-art level accuracy.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"657 ","pages":"Article 131601"},"PeriodicalIF":6.5000,"publicationDate":"2025-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Long-tail learning with rebalanced contrastive loss\",\"authors\":\"Charika De Alvis,&nbsp;Dishanika Denipitiyage,&nbsp;Suranga Seneviratne\",\"doi\":\"10.1016/j.neucom.2025.131601\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Integrating supervised contrastive loss to cross entropy-based classification has recently been proposed as a solution to address the long-tail learning problem. However, when the class imbalance ratio is high, it requires adjusting the supervised contrastive loss to support the tail classes, as the conventional contrastive learning is biased towards head classes by default. To this end, we present Rebalanced Contrastive Learning (RCL), an efficient means to increase the long-tail classification accuracy by addressing three main aspects: 1. Feature space balancedness – Equal division of the feature space among all the classes 2. Intra-Class compactness – Reducing the distance between same-class embeddings 3. Regularization – Enforcing larger margins for tail classes to reduce overfitting. RCL adopts class frequency-based SoftMax loss balancing to supervised contrastive learning loss and exploits scalar multiplied features fed to the contrastive learning loss to enforce compactness. We implement RCL on the Balanced Contrastive Learning (BCL) Framework, which has the SOTA performance. Our experiments on three benchmark datasets CIFAR10-LT,CIFAR100-LT and ImageNet-LT demonstrate the richness of the learnt embeddings and increased top-1 balanced accuracy RCL provides to the BCL framework. 
We further demonstrate that the performance of RCL as a standalone loss also achieves state-of-the-art level accuracy.</div></div>\",\"PeriodicalId\":19268,\"journal\":{\"name\":\"Neurocomputing\",\"volume\":\"657 \",\"pages\":\"Article 131601\"},\"PeriodicalIF\":6.5000,\"publicationDate\":\"2025-09-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neurocomputing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0925231225022738\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231225022738","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Integrating a supervised contrastive loss into cross-entropy-based classification has recently been proposed as a solution to the long-tail learning problem. However, when the class imbalance ratio is high, the supervised contrastive loss must be adjusted to support the tail classes, since conventional contrastive learning is biased towards head classes by default. To this end, we present Rebalanced Contrastive Learning (RCL), an efficient means of increasing long-tail classification accuracy that addresses three main aspects: 1. feature-space balancedness – equal division of the feature space among all classes; 2. intra-class compactness – reducing the distance between same-class embeddings; 3. regularization – enforcing larger margins for tail classes to reduce overfitting. RCL applies class-frequency-based SoftMax loss balancing to the supervised contrastive loss and feeds scalar-multiplied features to the contrastive loss to enforce compactness. We implement RCL on the Balanced Contrastive Learning (BCL) framework, which has state-of-the-art performance. Our experiments on three benchmark datasets, CIFAR10-LT, CIFAR100-LT, and ImageNet-LT, demonstrate the richness of the learnt embeddings and the increase in top-1 balanced accuracy that RCL provides to the BCL framework. We further demonstrate that RCL as a standalone loss also achieves state-of-the-art accuracy.
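
To make the abstract's two mechanisms concrete, below is a minimal PyTorch sketch of a frequency-rebalanced supervised contrastive loss. It assumes the rebalancing takes the form of a balanced-SoftMax-style logit adjustment (adding log n_y for each candidate's class inside the softmax denominator) and that the compactness mechanism is a scalar multiplier applied to the normalised embeddings. The function name `rebalanced_supcon_loss`, the `alpha` parameter, and these exact formulations are illustrative assumptions, not the paper's verbatim method.

```python
import torch
import torch.nn.functional as F

def rebalanced_supcon_loss(features, labels, class_counts,
                           temperature=0.1, alpha=1.0):
    """Hypothetical sketch of a frequency-rebalanced supervised contrastive loss.

    features:     (B, D) raw embeddings from the encoder
    labels:       (B,)   integer class labels
    class_counts: (C,)   per-class sample counts of the training set
    alpha:        scalar applied to the normalised embeddings; a larger
                  alpha sharpens the similarity distribution, pulling
                  same-class embeddings together (assumed compactness
                  mechanism; the paper's exact operator may differ)
    """
    z = alpha * F.normalize(features, dim=1)      # scalar-multiplied features
    sim = z @ z.t() / temperature                 # (B, B) similarity logits

    # Balanced-SoftMax-style rebalancing (assumption): bias candidate j's
    # logit by log n_{y_j}, so head-class candidates are down-weighted
    # once the softmax normalises over the batch.
    log_prior = torch.log(class_counts.float().to(z.device))[labels]  # (B,)
    sim = sim + log_prior.unsqueeze(0)

    B = z.size(0)
    eye = torch.eye(B, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float('-inf'))     # exclude self-contrast

    # Positive mask: same label, excluding self
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # Average the positive-pair log-likelihood per anchor, skipping
    # anchors with no same-class partner in the batch.
    pos_count = pos.sum(dim=1)
    valid = pos_count > 0
    per_anchor = (log_prob.masked_fill(~pos, 0.0).sum(dim=1)[valid]
                  / pos_count[valid])
    return -per_anchor.mean()
```

In use, `class_counts` would be computed once from the long-tailed training labels, and `loss = rebalanced_supcon_loss(feats, labels, class_counts)` added to the cross-entropy term, mirroring the combined contrastive-plus-classification setup the abstract describes.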
Source journal
Neurocomputing (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 13.10
Self-citation rate: 10.00%
Articles published: 1382
Review time: 70 days
Journal description: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice, and applications are the essential topics covered.