Bipol: A novel multi-axes bias evaluation metric with explainability for NLP

Lama Alkhaled, Tosin Adewumi, Sana Sabah Sabry
Natural Language Processing Journal, Volume 4, Article 100030
DOI: 10.1016/j.nlp.2023.100030
Published: 2023-09-01
URL: https://www.sciencedirect.com/science/article/pii/S2949719123000274

Abstract

We introduce bipol, a new metric with explainability, for estimating social bias in text data. Harmful bias is prevalent in many online data sources used to train machine learning (ML) models. As a step toward addressing this challenge, we create a novel metric that involves a two-step process: corpus-level evaluation based on model classification and sentence-level evaluation based on (sensitive) term frequency (TF). After creating new models to classify bias using SotA architectures, we evaluate two popular NLP datasets (COPA and SQuADv2) and the WinoBias dataset. As an additional contribution, we create a large English dataset (with almost 2 million labeled samples) for training bias-classification models and make it publicly available. We also make our code public.
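The two-step process described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's exact formulation: the axis lexicons below are hypothetical two-term toy lists, `predictions` stands in for the output of the trained bias classifier, and the combination rule (corpus-level fraction times the mean sentence-level TF imbalance over flagged samples) is one plausible reading of the two-step design.

```python
from collections import Counter

# Hypothetical axis lexicons. The paper uses curated lists of sensitive
# terms per bias axis (e.g. gender, race); these tiny lists are
# illustrative only.
AXES = {
    "gender": (["he", "him", "man"], ["she", "her", "woman"]),
}

def corpus_score(predictions):
    """Corpus-level factor: fraction of samples the classifier flags as biased."""
    return sum(predictions) / len(predictions) if predictions else 0.0

def sentence_score(sentence, axes=AXES):
    """Sentence-level factor: normalized term-frequency imbalance between
    the two term groups of each axis, averaged over axes that occur."""
    tokens = Counter(sentence.lower().split())
    per_axis = []
    for group_a, group_b in axes.values():
        a = sum(tokens[t] for t in group_a)
        b = sum(tokens[t] for t in group_b)
        if a + b:
            per_axis.append(abs(a - b) / (a + b))
    return sum(per_axis) / len(per_axis) if per_axis else 0.0

def bipol_sketch(samples, predictions):
    """Combine both steps: corpus-level fraction times the mean
    sentence-level imbalance over the samples flagged as biased."""
    flagged = [s for s, p in zip(samples, predictions) if p]
    if not flagged:
        return 0.0
    mean_sent = sum(sentence_score(s) for s in flagged) / len(flagged)
    return corpus_score(predictions) * mean_sent
```

For example, with three samples of which the classifier flags two, and both flagged sentences mentioning only one gender group, the sketch yields a score of 2/3: full sentence-level imbalance scaled by the corpus-level fraction of biased samples.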
