A Comprehensive Review of Bias in Deep Learning Models: Methods, Impacts, and Future Directions

IF 9.7 · CAS Zone 2 (Engineering & Technology) · Q1 Computer Science, Interdisciplinary Applications
Milind Shah, Nitesh Sureja
{"title":"A Comprehensive Review of Bias in Deep Learning Models: Methods, Impacts, and Future Directions","authors":"Milind Shah,&nbsp;Nitesh Sureja","doi":"10.1007/s11831-024-10134-2","DOIUrl":null,"url":null,"abstract":"<div><p>This comprehensive review and analysis delve into the intricate facets of bias within the realm of deep learning. As artificial intelligence and machine learning technologies become increasingly integrated into our lives, understanding and mitigating bias in these systems is of paramount importance. This paper scrutinizes the multifaceted nature of bias, encompassing data bias, algorithmic bias, and societal bias, and explores the interconnectedness among these dimensions. Through an exploration of existing literature and recent advancements in the field, this paper offers a critical assessment of various bias mitigation techniques. It examines the challenges faced in addressing bias and emphasizes the need for an intersectional and inclusive approach to effectively rectify disparities. Furthermore, this review underscores the importance of ethical considerations in the development and deployment of deep learning models. It highlights the necessity of diverse representation in data, fairness-aware algorithms, and interpretability as key elements in creating bias-free AI systems. By synthesizing existing research and providing a holistic overview of bias in deep learning, this paper aims to contribute to the ongoing discourse on mitigating bias and fostering equity in artificial intelligence systems. The insights presented herein can serve as a foundation for future research and as a guide for practitioners, policymakers, and stakeholders to navigate the complex landscape of bias in deep learning.</p></div>","PeriodicalId":55473,"journal":{"name":"Archives of Computational Methods in Engineering","volume":"32 1","pages":"255 - 267"},"PeriodicalIF":9.7000,"publicationDate":"2024-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Archives of Computational Methods in Engineering","FirstCategoryId":"5","ListUrlMain":"https://link.springer.com/article/10.1007/s11831-024-10134-2","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
引用次数: 0

Abstract

This comprehensive review and analysis delves into the intricate facets of bias within the realm of deep learning. As artificial intelligence and machine learning technologies become increasingly integrated into our lives, understanding and mitigating bias in these systems is of paramount importance. This paper scrutinizes the multifaceted nature of bias, encompassing data bias, algorithmic bias, and societal bias, and explores the interconnectedness among these dimensions. Through an exploration of existing literature and recent advancements in the field, this paper offers a critical assessment of various bias mitigation techniques. It examines the challenges faced in addressing bias and emphasizes the need for an intersectional and inclusive approach to effectively rectify disparities. Furthermore, this review underscores the importance of ethical considerations in the development and deployment of deep learning models. It highlights the necessity of diverse representation in data, fairness-aware algorithms, and interpretability as key elements in creating bias-free AI systems. By synthesizing existing research and providing a holistic overview of bias in deep learning, this paper aims to contribute to the ongoing discourse on mitigating bias and fostering equity in artificial intelligence systems. The insights presented herein can serve as a foundation for future research and as a guide for practitioners, policymakers, and stakeholders to navigate the complex landscape of bias in deep learning.
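To make the notion of a fairness-aware check concrete, the sketch below computes a demographic parity difference between two groups from binary model predictions. This is an illustrative example only, not a method taken from the paper; the function name, metric choice, and data are assumptions made purely for demonstration.

```python
# Minimal sketch of a fairness-aware check, assuming binary predictions and a
# binary protected attribute. Illustrative only; not a method from the paper.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive-prediction rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive-prediction rate for group 1
    return abs(rate_a - rate_b)

# Hypothetical predictions and group labels, purely for illustration.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))  # 0.5 -> large disparity
```

A gap near zero suggests the two groups receive positive predictions at similar rates; larger gaps flag a potential data or algorithmic bias of the kind the review discusses, to be investigated with the mitigation techniques it surveys.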


Source Journal
CiteScore: 19.80
Self-citation rate: 4.10%
Articles published per year: 153
Review time: >12 weeks
Journal Introduction: Archives of Computational Methods in Engineering
Aim and Scope: Archives of Computational Methods in Engineering serves as an active forum for disseminating research and advanced practices in computational engineering, with a particular focus on mechanics and related fields. The journal emphasizes extended state-of-the-art reviews in selected areas, a unique feature of its publication.
Review Format: Reviews published in the journal offer a survey of the current literature and a critical exposition of topics in their full complexity. By organizing the information in this manner, readers can quickly grasp the focus, coverage, and unique features of Archives of Computational Methods in Engineering.