Gender bias in transformers: A comprehensive review of detection and mitigation strategies

Praneeth Nemani, Yericherla Deepak Joel, Palla Vijay, Farhana Ferdouzi Liza
Natural Language Processing Journal, Volume 6, Article 100047. Published 2023-12-19. DOI: 10.1016/j.nlp.2023.100047

Abstract

Gender bias in artificial intelligence (AI) has emerged as a pressing concern with profound implications for individuals’ lives. This paper presents a comprehensive survey that explores gender bias in Transformer models from a linguistic perspective. While the existence of gender bias in language models has been acknowledged in previous studies, there remains a lack of consensus on how to measure and evaluate this bias effectively. Our survey critically examines the existing literature on gender bias in Transformers, shedding light on the diverse methodologies and metrics employed to assess bias. Several limitations in current approaches to measuring gender bias in Transformers are identified, encompassing the utilization of incomplete or flawed metrics, inadequate dataset sizes, and a dearth of standardization in evaluation methods. Furthermore, our survey delves into the potential ramifications of gender bias in Transformers for downstream applications, including dialogue systems and machine translation. We underscore the importance of fostering equity and fairness in these systems by emphasizing the need for heightened awareness and accountability in developing and deploying language technologies. This paper serves as a comprehensive overview of gender bias in Transformer models, providing novel insights and offering valuable directions for future research in this critical domain.
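Among the methodologies the survey covers, one widely used family of metrics quantifies bias as a differential association between target word sets (e.g., male vs. female terms) and attribute word sets (e.g., career vs. family terms) in embedding space, in the style of the Word Embedding Association Test (WEAT). The sketch below illustrates the effect-size computation on toy two-dimensional vectors; the vectors and word groupings are hypothetical placeholders, not real model embeddings.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def weat_effect_size(X, Y, A, B):
    """WEAT-style effect size: how much more strongly target set X
    associates with attribute set A (vs. B) compared to target set Y.
    X, Y, A, B are lists of embedding vectors; result lies in [-2, 2]."""
    def s(w):
        # Differential association of one word with the two attribute sets.
        return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])
    sx = [s(x) for x in X]
    sy = [s(y) for y in Y]
    pooled = np.std(sx + sy, ddof=1)  # pooled standard deviation over all targets
    return (np.mean(sx) - np.mean(sy)) / pooled

# Toy 2-d "embeddings" for illustration only (hypothetical values).
career = [np.array([1.0, 0.1]), np.array([0.9, 0.2])]     # attribute set A
family = [np.array([0.1, 1.0]), np.array([0.2, 0.9])]     # attribute set B
male   = [np.array([0.95, 0.15]), np.array([0.85, 0.25])] # target set X
female = [np.array([0.15, 0.95]), np.array([0.25, 0.85])] # target set Y

effect = weat_effect_size(male, female, career, family)
print(effect)  # positive: male targets lean toward career attributes
```

A positive effect size indicates the first target set associates more strongly with the first attribute set; swapping the target sets flips the sign. As the survey notes, such metrics are sensitive to the choice of word lists and dataset size, which is one source of the inconsistency across studies.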
