A bias evaluation solution for multiple sensitive attribute speech recognition

IF 3.1 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Zigang Chen, Yuening Zhou, Zhen Wang, Fan Liu, Tao Leng, Haihua Zhu
{"title":"A bias evaluation solution for multiple sensitive attribute speech recognition","authors":"Zigang Chen ,&nbsp;Yuening Zhou ,&nbsp;Zhen Wang ,&nbsp;Fan Liu ,&nbsp;Tao Leng ,&nbsp;Haihua Zhu","doi":"10.1016/j.csl.2025.101787","DOIUrl":null,"url":null,"abstract":"<div><div>Speech recognition systems are a pervasive application in the field of <span><math><mrow><mi>A</mi><mi>I</mi></mrow></math></span> (Artificial Intelligence), bringing significant benefits to society. However, they also face significant fairness issues. When dealing with groups of people with different sensitive attributes, these systems tend to exhibit bias, which may lead to the misinterpretation or ignoring of the voice of a specific group of people. In order to address the fairness issue, it becomes crucial to comprehensively reveal the presence of bias in AI systems. To address the issues of limited categories and data imbalance in existing bias evaluation datasets, we propose a new method for constructing evaluation datasets. Given the unique characteristics of speech recognition systems, we find that existing AI bias evaluation methods may not be directly applicable. Therefore, we introduce a bias evaluation method for speech recognition systems based on <span><math><mrow><mi>W</mi><mi>E</mi><mi>R</mi></mrow></math></span> (Word Error Rate). To comprehensively quantify bias across different groups, we combine multiple evaluation metrics, including WER, fairness metrics, and <span><math><mrow><mi>C</mi><mi>M</mi><mi>B</mi><mi>M</mi></mrow></math></span> (confusion matrix-based metrics). To ensure a thorough evaluation, experiments were conducted on both single sensitive attributes and cross-sensitive attributes. The experimental results indicate that, for single sensitive attributes, the speech recognition system exhibits the most significant racial bias, while in the evaluation of cross-sensitive attributes, the system shows the greatest bias against white males and black males. Finally, through T-tests, we demonstrate that the WER differences between these two groups are statistically significant.</div></div>","PeriodicalId":50638,"journal":{"name":"Computer Speech and Language","volume":"93 ","pages":"Article 101787"},"PeriodicalIF":3.1000,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Speech and Language","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0885230825000129","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Speech recognition systems are a pervasive application of AI (Artificial Intelligence) and bring substantial benefits to society, but they also face significant fairness issues. When dealing with groups of people who differ in sensitive attributes, these systems tend to exhibit bias, which may lead to the voices of specific groups being misinterpreted or ignored. Addressing this fairness issue requires comprehensively revealing the presence of bias in AI systems. To address the limited category coverage and data imbalance of existing bias evaluation datasets, we propose a new method for constructing evaluation datasets. Given the unique characteristics of speech recognition systems, existing AI bias evaluation methods may not be directly applicable, so we introduce a bias evaluation method for speech recognition systems based on WER (Word Error Rate). To comprehensively quantify bias across different groups, we combine multiple evaluation metrics, including WER, fairness metrics, and CMBM (confusion-matrix-based metrics). To ensure a thorough evaluation, experiments were conducted on both single sensitive attributes and cross-sensitive attributes. The results indicate that, for single sensitive attributes, the speech recognition system exhibits the most pronounced racial bias, while in the cross-sensitive-attribute evaluation it shows the greatest bias against white males and black males. Finally, through T-tests, we demonstrate that the WER differences between these two groups are statistically significant.
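The abstract outlines the core of the proposed evaluation: compute WER separately for each sensitive-attribute group, compare groups with a fairness-style gap, and use a T-test to check whether the WER difference between groups is statistically significant. The sketch below is a minimal illustration of that pipeline, not the authors' implementation; the `records` data, the group labels, and the choice of `jiwer` for WER and `scipy` for a Welch t-test are all assumptions made for demonstration.

```python
# Minimal sketch of a WER-based group bias check (illustrative, not the paper's code).
from collections import defaultdict

import jiwer                       # assumed available: provides jiwer.wer(reference, hypothesis)
from scipy.stats import ttest_ind  # assumed available: Welch's t-test with equal_var=False

# Hypothetical evaluation records: (sensitive_attribute, reference transcript, ASR hypothesis).
records = [
    ("black_male", "turn the lights off please", "turn the light of please"),
    ("black_male", "call my sister now", "call my sister no"),
    ("white_male", "turn the lights off please", "turn the lights off please"),
    ("white_male", "call my sister now", "call my sister now please"),
]

# Per-utterance WER, grouped by sensitive attribute.
group_wers = defaultdict(list)
for group, ref, hyp in records:
    group_wers[group].append(jiwer.wer(ref, hyp))

# Group-level mean WER and a simple WER gap between the extreme groups as the fairness signal.
mean_wer = {g: sum(ws) / len(ws) for g, ws in group_wers.items()}
worst = max(mean_wer, key=mean_wer.get)
best = min(mean_wer, key=mean_wer.get)
print(f"mean WER per group: {mean_wer}")
print(f"WER gap ({worst} vs {best}): {mean_wer[worst] - mean_wer[best]:.3f}")

# T-test on per-utterance WERs of the two extreme groups, mirroring the paper's
# significance check on the WER difference between groups.
t_stat, p_value = ttest_ind(group_wers[worst], group_wers[best], equal_var=False)
print(f"Welch t-test: t = {t_stat:.3f}, p = {p_value:.3f}")
```

In the paper's setting, the same per-group error statistics would additionally feed the fairness metrics and the confusion-matrix-based metrics (CMBM); those are omitted from this sketch.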
Source journal

Computer Speech and Language (Engineering & Technology – Computer Science, Artificial Intelligence)
CiteScore: 11.30
Self-citation rate: 4.70%
Articles per year: 80
Review time: 22.9 weeks
Journal description: Computer Speech & Language publishes reports of original research related to the recognition, understanding, production, coding and mining of speech and language. The speech and language sciences have a long history, but it is only relatively recently that large-scale implementation of and experimentation with complex models of speech and language processing has become feasible. Such research is often carried out somewhat separately by practitioners of artificial intelligence, computer science, electronic engineering, information retrieval, linguistics, phonetics, or psychology.