Explainable AI: Interpretable Models for Transparent Decision-Making

Kamlesh Kumawat
International Journal of Food and Nutritional Sciences
DOI: 10.48047/ijfans/09/03/30
Published: 2024-03-01

Abstract

The quest for transparent and interpretable decision-making has become paramount in an era dominated by the widespread use of complex AI systems. Explainable AI (XAI) has emerged as a pivotal field addressing this critical need by developing models and techniques that shed light on the opaque reasoning behind AI-driven conclusions. This paper illuminates the Explainable AI landscape, defining its importance, methods, and applications across multiple domains. The discussion moves through the heart of XAI, elucidating its two facets: interpretable models and post-hoc explanations. For the former, it investigates models that are inherently designed to yield explicable results, such as decision trees or linear models. For the latter, it examines post-modelling techniques, such as feature importance or SHAP values, that decipher the underlying logic of black-box algorithms such as neural networks. Furthermore, it surveys current research efforts and forecasts future directions, envisioning a path in which XAI not only improves model transparency but also promotes human-AI collaboration. Explainable AI addresses the pressing need for accountability and trust by deciphering the intent behind AI decisions, while also charting a path toward understandable, ethical, and reliable AI systems, transforming the landscape of AI-driven decision-making.
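The two facets described in the abstract can be illustrated with a minimal, standard-library-only sketch. The function names (`explain_linear`, `permutation_importance`) are illustrative, not from any particular library: an inherently interpretable linear model admits an exact per-feature decomposition of its prediction (for linear models this coincides with their SHAP values), while permutation importance is a post-hoc, model-agnostic technique that treats the predictor as a black box and measures how much error grows when one feature's values are shuffled.

```python
import random

def linear_predict(weights, bias, x):
    # An inherently interpretable model: each coefficient states exactly
    # how much one unit of a feature moves the prediction.
    return bias + sum(w * xi for w, xi in zip(weights, x))

def explain_linear(weights, x, baseline):
    # Exact per-feature contributions relative to a baseline input;
    # for a linear model this decomposition equals its SHAP values.
    return [w * (xi - bi) for w, xi, bi in zip(weights, x, baseline)]

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    # Post-hoc explanation: importance of feature j = mean increase in
    # squared error after shuffling column j (breaking its link to y).
    rng = random.Random(seed)
    def mse(Xm):
        return sum((predict(x) - yi) ** 2 for x, yi in zip(Xm, y)) / len(y)
    base = mse(X)
    importances = []
    for j in range(len(X[0])):
        deltas = []
        for _ in range(n_repeats):
            col = [x[j] for x in X]
            rng.shuffle(col)
            Xp = [x[:j] + [c] + x[j + 1:] for x, c in zip(X, col)]
            deltas.append(mse(Xp) - base)
        importances.append(sum(deltas) / n_repeats)
    return importances

weights, bias = [2.0, -1.0, 0.5], 0.1
x, baseline = [3.0, 1.0, 4.0], [0.0, 0.0, 0.0]

# Interpretable-model view: contributions sum exactly to the prediction gap.
contribs = explain_linear(weights, x, baseline)

# Black-box view: toy dataset whose labels come from the same linear rule.
rng = random.Random(42)
X = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
y = [linear_predict(weights, bias, xi) for xi in X]
imps = permutation_importance(lambda xi: linear_predict(weights, bias, xi), X, y)
```

Here `contribs` is `[6.0, -1.0, 2.0]`, and the permutation importances rank the feature with weight 2.0 above the one with weight 0.5, recovering the model's internal logic without inspecting its parameters.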