Systematic literature review on bias mitigation in generative AI

Juveria Afreen, Mahsa Mohaghegh, Maryam Doborjeh
DOI: 10.1007/s43681-025-00721-9
Journal: AI and Ethics, vol. 5, no. 5, pp. 4789–4841
Published: 2025-08-25
Article: https://link.springer.com/article/10.1007/s43681-025-00721-9
PDF: https://link.springer.com/content/pdf/10.1007/s43681-025-00721-9.pdf
Citations: 0

Abstract

In an era of rapid technological advancement, Artificial Intelligence (AI) has become a transformative force permeating diverse facets of society. As AI systems become integral to decision-making, however, concerns about bias have gained prominence. Bias can have significant and far-reaching consequences for individuals, groups, and society: bias in generative AI or machine learning systems can produce content that discriminates, perpetuates stereotypes, and contributes to inequality. AI systems may be deployed in sensitive settings, where they are tasked with judgements that profoundly affect people's lives; it is therefore important to establish safeguards that prevent these decisions from discriminating against specific groups or populations. This review surveys the landscape of bias in AI, distinguishing its different types, pinpointing underlying causes, and examining mitigation strategies. Investigating the roots of bias in AI reveals a complex interplay of historical legacies, societal imbalances, and algorithmic intricacies — including the unintentional reinforcement of existing biases, reliance on incomplete or biased training data, and the amplification of disparities when AI systems are deployed in diverse real-world scenarios. Significant advances in Generative Artificial Intelligence (GAI) were evidenced across domains including text, image, audio, and video, and the study examines the challenges and proliferation of biases from each of the perspectives it considers. Against this backdrop, the review turns to a proactive stance, surveying current mitigation strategies.
Diverse and inclusive datasets emerge as a cornerstone, ensuring representative input for AI models. Ethical considerations throughout the development lifecycle and ongoing monitoring mechanisms prove pivotal in mitigating biases that arise during training or deployment, and both technical and non-technical strategies contribute to the pursuit of fairness and equity in AI. The paper underscores the importance of interdisciplinary collaboration, emphasising that a collective effort spanning developers, ethicists, policymakers, and end-users is paramount for effective bias mitigation. As AI continues its ascent into various spheres of life, understanding, acknowledging, and addressing bias becomes imperative. This review seeks to contribute to that discourse, fostering a deeper comprehension of the challenges posed by bias in AI and inspiring a collective commitment to building equitable, trustworthy AI systems.
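To make the notion of "ongoing monitoring mechanisms" concrete, here is a minimal, illustrative sketch (not taken from the paper) of one widely used group-fairness metric, the demographic parity difference: the gap in positive-outcome rates that a model produces for two demographic groups. The function names and sample data below are hypothetical, chosen only for illustration.

```python
def positive_rate(predictions, groups, group):
    """Fraction of positive predictions (1) among members of `group`."""
    selected = [p for p, g in zip(predictions, groups) if g == group]
    return sum(selected) / len(selected) if selected else 0.0

def demographic_parity_difference(predictions, groups, group_a, group_b):
    """Absolute gap in positive-prediction rates between two groups.

    0.0 means the model assigns favourable outcomes to both groups at
    the same rate on this metric; larger values indicate disparity.
    """
    return abs(positive_rate(predictions, groups, group_a)
               - positive_rate(predictions, groups, group_b))

# Hypothetical binary predictions (1 = favourable outcome) and group labels.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, grps, "a", "b"))  # 0.5
```

A monitoring pipeline of the kind the review describes would compute such metrics continuously on deployed-model outputs and flag drift beyond an agreed threshold; libraries such as Fairlearn provide production-grade implementations of this and related metrics.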
