{"title":"AmbiBias Contrast: Enhancing debiasing networks via disentangled space from ambiguity-bias clusters","authors":"Suneung Kim, Seong-Whan Lee","doi":"10.1016/j.neunet.2024.106857","DOIUrl":null,"url":null,"abstract":"<div><div>The goal of debiasing in classification tasks is to train models to be less sensitive to correlations between a sample’s target attribution and periodically occurring contextual attributes to achieve accurate classification. A prevalent method involves applying re-weighing techniques to lower the weight of bias-aligned samples that contribute to bias, thereby focusing the training on bias-conflicting samples that deviate from the bias patterns. Our empirical analysis indicates that this approach is effective in datasets where bias-conflicting samples constitute a minority compared to bias-aligned samples, yet its effectiveness diminishes in datasets with similar proportions of both. This ineffectiveness in varied dataset compositions suggests that the traditional method cannot be practical in diverse environments as it overlooks the dynamic nature of dataset-induced biases. To address this issue, we introduce a contrastive approach named “AmbiBias Contrast”, which is robust across various dataset compositions. This method accounts for “ambiguity bias”— the variable nature of bias elements across datasets, which cannot be clearly defined. Given the challenge of defining bias due to the fluctuating compositions of datasets, we designed a method of representation learning that accommodates this ambiguity. Our experiments across a range of and dataset configurations verify the robustness of our method, delivering state-of-the-art performance.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"181 ","pages":"Article 106857"},"PeriodicalIF":6.0000,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608024007810","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
The goal of debiasing in classification tasks is to train models to be less sensitive to spurious correlations between a sample's target attribute and frequently co-occurring contextual attributes, so that classification remains accurate. A prevalent method applies re-weighting techniques to lower the weight of bias-aligned samples that contribute to bias, thereby focusing training on bias-conflicting samples that deviate from the bias patterns. Our empirical analysis indicates that this approach is effective on datasets where bias-conflicting samples constitute a small minority relative to bias-aligned samples, but its effectiveness diminishes on datasets where the two are present in similar proportions. This sensitivity to dataset composition suggests that the traditional method is impractical in diverse environments, as it overlooks the dynamic nature of dataset-induced biases. To address this issue, we introduce a contrastive approach named "AmbiBias Contrast", which is robust across various dataset compositions. The method accounts for "ambiguity bias": the variable, dataset-dependent nature of bias elements, which cannot be clearly defined in advance. Given the difficulty of defining bias under fluctuating dataset compositions, we design a representation-learning method that accommodates this ambiguity. Our experiments across a range of dataset configurations verify the robustness of our method, which delivers state-of-the-art performance.
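To make the re-weighting baseline discussed in the abstract concrete, the sketch below shows one common way such schemes are implemented in PyTorch: an auxiliary biased model is trained to absorb the bias, and its per-sample loss is used to down-weight presumed bias-aligned samples in the main model's objective. The weighting rule shown here (a loss ratio between the biased and main models) and all names are illustrative assumptions; this is not the paper's AmbiBias Contrast method.

```python
import torch
import torch.nn.functional as F

def reweighted_ce_loss(logits_main, logits_biased, targets, eps=1e-8):
    """Generic re-weighting baseline (illustrative, not AmbiBias Contrast).

    Samples that the auxiliary biased model fits easily (low loss) are
    presumed bias-aligned and receive a small weight; samples it struggles
    with are presumed bias-conflicting and receive a larger weight.
    """
    # Per-sample cross-entropy from the main (debiased) and biased models.
    ce_main = F.cross_entropy(logits_main, targets, reduction="none")
    ce_biased = F.cross_entropy(logits_biased, targets, reduction="none")

    # Relative-difficulty weight in [0, 1]: small when the biased model
    # already explains the sample (bias-aligned), large otherwise.
    weight = ce_biased / (ce_biased + ce_main + eps)

    # The weights come from model outputs, so detach them to avoid
    # backpropagating through the weighting rule itself.
    return (weight.detach() * ce_main).mean()
```

In a typical training loop, the biased auxiliary model would be fit with a bias-amplifying objective on the same batches, and the main model would be updated with this re-weighted loss; the abstract's point is that this style of scheme helps mainly when bias-conflicting samples are rare.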
About the journal:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.