SECL: Sampling enhanced contrastive learning

IF 1.4 · JCR Q4 · Computer Science, Artificial Intelligence (CAS Tier 4, Computer Science)
Yixin Tang, Hua Cheng, Yiquan Fang, Tao Cheng
DOI: 10.3233/aic-210234
Journal: AI Communications, vol. 34, no. 1, pp. 1–12
Published: 2022-09-22 (Journal Article)
Citations: 0

Abstract

Instance-level contrastive learning methods such as SimCLR have proven powerful for representation learning. However, SimCLR suffers from sampling bias, feature bias, and model collapse. This paper proposes SECL, a set-level Sampling Enhanced Contrastive Learning method built on SimCLR. A super-sampling method expands the augmented samples into a contrastive-positive set, which lets the model learn class features of the target sample and thereby reduces bias. The contrastive-positive set comprises Augmentations (the original augmented samples) and Neighbors (the super-sampled samples). A samples-correlation strategy is also introduced to prevent model collapse: a positive or negative correlation loss is computed to balance the model's Alignment and Uniformity. SECL reaches 94.14% classification precision on the SST-2 dataset and 89.25% on the ARSC dataset; on the multi-class AGNews dataset it achieves 90.99%. All of these are roughly 1% higher than the precision of SimCLR. Experiments further show that SECL converges faster during training and reduces the risk of bias and model collapse.
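The abstract does not give SECL's loss in detail, but the core idea it describes — replacing SimCLR's single positive pair with a contrastive-positive *set* (augmentations plus super-sampled neighbors) — can be sketched as an NT-Xent-style loss averaged over each anchor's positive set. The function name, mask convention, and temperature below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def multi_positive_nt_xent(z, pos_mask, temperature=0.5):
    """NT-Xent-style loss generalized to a set of positives per anchor.

    z        : (N, d) array of L2-normalized embeddings.
    pos_mask : (N, N) boolean array; pos_mask[i, j] is True when sample j
               belongs to anchor i's contrastive-positive set (its
               Augmentations plus its super-sampled Neighbors).
               The diagonal must be False (no self-pairs).
    """
    sim = (z @ z.T) / temperature          # pairwise scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)         # exclude self-comparison from softmax
    # Log-softmax over each row: log p(j | anchor i).
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # Negative log-likelihood averaged over each anchor's positive set.
    per_anchor = -np.where(pos_mask, log_prob, 0.0).sum(axis=1) / pos_mask.sum(axis=1)
    return per_anchor.mean()
```

With SimCLR, `pos_mask` would mark only the one other augmented view of each image; the set-level variant simply marks more columns per row, so neighbors sharing class features pull the anchor toward a class-level representation rather than a single instance.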
Source journal: AI Communications (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 2.30
Self-citation rate: 12.50%
Articles per year: 34
Review time: 4.5 months
Journal description: AI Communications is a journal on artificial intelligence (AI) which has a close relationship to EurAI (European Association for Artificial Intelligence, formerly ECCAI). It covers the whole AI community: scientific institutions as well as commercial and industrial companies. AI Communications aims to enhance contacts and information exchange between AI researchers and developers, and to provide supranational information to those concerned with AI and advanced information processing. AI Communications publishes refereed articles concerning scientific and technical AI procedures, provided they are of sufficient interest to a large readership of both scientific and practical background. In addition it contains high-level background material, both at the technical level as well as the level of opinions, policies and news.