AmbiBias Contrast: Enhancing debiasing networks via disentangled space from ambiguity-bias clusters

IF 6.0 | CAS Region 1, Computer Science | JCR Q1: Computer Science, Artificial Intelligence
Suneung Kim, Seong-Whan Lee
DOI: 10.1016/j.neunet.2024.106857
Journal: Neural Networks, Volume 181, Article 106857
Published: 2024-11-05 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0893608024007810
Citations: 0

Abstract

The goal of debiasing in classification tasks is to train models to be less sensitive to correlations between a sample's target attribute and frequently co-occurring contextual attributes, so as to achieve accurate classification. A prevalent method applies re-weighting techniques to lower the weight of bias-aligned samples that contribute to bias, thereby focusing training on bias-conflicting samples that deviate from the bias patterns. Our empirical analysis indicates that this approach is effective in datasets where bias-conflicting samples constitute a minority compared to bias-aligned samples, yet its effectiveness diminishes in datasets with similar proportions of both. This ineffectiveness across varied dataset compositions suggests that the traditional method is impractical in diverse environments, as it overlooks the dynamic nature of dataset-induced biases. To address this issue, we introduce a contrastive approach named "AmbiBias Contrast", which is robust across various dataset compositions. This method accounts for "ambiguity bias": the variable nature of bias elements across datasets, which cannot be clearly defined. Given the challenge of defining bias due to the fluctuating compositions of datasets, we designed a representation-learning method that accommodates this ambiguity. Our experiments across a range of dataset configurations verify the robustness of our method, delivering state-of-the-art performance.
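The re-weighting baseline the abstract critiques can be sketched as follows. This is a generic illustration of loss-based sample re-weighting (as in methods that use a biased auxiliary model to score sample difficulty), not the paper's AmbiBias Contrast objective; the function name and normalization choice are our own.

```python
import numpy as np

def reweight_by_bias_loss(bias_probs, labels, eps=1e-8):
    """Compute per-sample training weights from a biased auxiliary model.

    bias_probs: (n, k) class probabilities from a model trained to follow
    the dataset bias; labels: (n,) integer ground-truth classes.
    Bias-aligned samples are easy for the biased model (low cross-entropy),
    so they receive small weights; bias-conflicting samples are hard for it
    (high cross-entropy) and receive large weights.
    """
    n = labels.shape[0]
    ce = -np.log(bias_probs[np.arange(n), labels] + eps)  # per-sample CE
    return ce / (ce.sum() + eps)  # normalize weights to sum to 1
```

A main classifier would then multiply its per-sample loss by these weights, concentrating the gradient signal on the bias-conflicting minority; as the abstract notes, this helps only when that minority is actually small.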
Source journal: Neural Networks (Engineering/Technology - Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Articles per year: 425
Review time: 67 days
Journal introduction: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.