{"title":"可解释的人工智能:透明决策的可解释模型","authors":"Kamlesh Kumawat","doi":"10.48047/ijfans/09/03/30","DOIUrl":null,"url":null,"abstract":"The quest for obvious and interpretable choice-making has turn out to be paramount in an technology ruled by way of the vast use of complicated AI structures. Explainable AI (XAI) emerges as a pivotal area addressing this vital want by using growing fashions and strategies that shed light at the enigmatic reasoning at the back of AI-pushed conclusions. This paper illuminates the Explainable AI landscape, defining its importance, strategies, and packages throughout a couple of domains. The dialogue moves through the coronary heart of XAI, elucidating its two factors: interpretable fashions and put up-hoc factors. In the previous, it investigates models which can be inherently designed for explicable results, such as decision timber or linear fashions. Meanwhile, the latter segment examines put up-modelling techniques which include function importance or SHAP values to decipher the underlying good judgment of black-box algorithms inclusive of neural networks. Furthermore, it surveys present day research efforts and forecasts future directions, imagining a path in which XAI now not best improves version transparency but additionally promotes human-AI collaboration. Explainable AI addresses the pressing need for accountability and believe by means of deciphering the intent behind AI decisions, at the same time as also charting a path towards understandable, ethical, and dependable AI structures, revolutionizing the landscape of AI-pushed decision-making.","PeriodicalId":290296,"journal":{"name":"International Journal of Food and Nutritional Sciences","volume":"42 30","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Explainable AI: Interpretable Models for Transparent Decision-Making\",\"authors\":\"Kamlesh Kumawat\",\"doi\":\"10.48047/ijfans/09/03/30\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The quest for obvious and interpretable choice-making has turn out to be paramount in an technology ruled by way of the vast use of complicated AI structures. Explainable AI (XAI) emerges as a pivotal area addressing this vital want by using growing fashions and strategies that shed light at the enigmatic reasoning at the back of AI-pushed conclusions. This paper illuminates the Explainable AI landscape, defining its importance, strategies, and packages throughout a couple of domains. The dialogue moves through the coronary heart of XAI, elucidating its two factors: interpretable fashions and put up-hoc factors. In the previous, it investigates models which can be inherently designed for explicable results, such as decision timber or linear fashions. Meanwhile, the latter segment examines put up-modelling techniques which include function importance or SHAP values to decipher the underlying good judgment of black-box algorithms inclusive of neural networks. Furthermore, it surveys present day research efforts and forecasts future directions, imagining a path in which XAI now not best improves version transparency but additionally promotes human-AI collaboration. 
Explainable AI addresses the pressing need for accountability and believe by means of deciphering the intent behind AI decisions, at the same time as also charting a path towards understandable, ethical, and dependable AI structures, revolutionizing the landscape of AI-pushed decision-making.\",\"PeriodicalId\":290296,\"journal\":{\"name\":\"International Journal of Food and Nutritional Sciences\",\"volume\":\"42 30\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Food and Nutritional Sciences\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.48047/ijfans/09/03/30\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Food and Nutritional Sciences","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48047/ijfans/09/03/30","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: The quest for transparent and interpretable decision-making has become paramount in an era dominated by the widespread use of complex AI systems. Explainable AI (XAI) has emerged as a pivotal field addressing this critical need by developing models and techniques that shed light on the opaque reasoning behind AI-driven conclusions. This paper illuminates the Explainable AI landscape, defining its importance, techniques, and applications across multiple domains. The discussion moves to the heart of XAI, elucidating its two facets: interpretable models and post-hoc explanations. For the former, it investigates models that are inherently designed to be explainable, such as decision trees or linear models. For the latter, it examines post-modelling techniques, such as feature importance or SHAP values, that decipher the underlying logic of black-box algorithms such as neural networks. Furthermore, it surveys current research efforts and forecasts future directions, envisioning a path in which XAI not only improves model transparency but also promotes human-AI collaboration. Explainable AI addresses the pressing need for accountability and trust by making the rationale behind AI decisions intelligible, while also charting a path towards understandable, ethical, and dependable AI systems, transforming the landscape of AI-driven decision-making.
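To make the abstract's two facets concrete, the following is a minimal illustrative sketch, not taken from the paper itself: it assumes scikit-learn and the iris dataset, contrasts an inherently interpretable decision tree (whose rules can be printed directly) with a post-hoc explanation of a black-box ensemble, and uses permutation feature importance as a stand-in for the SHAP values the abstract mentions.

```python
# Illustrative sketch only (not the paper's method); assumes scikit-learn is installed.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_iris(return_X_y=True)
feature_names = load_iris().feature_names

# Facet 1: an interpretable model -- the fitted tree can be printed as readable if/else rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))

# Facet 2: a post-hoc explanation of a "black-box" ensemble via permutation feature
# importance (SHAP values would serve the same role here).
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

The design point the sketch illustrates is the trade-off discussed in the paper: the decision tree is transparent by construction, while the random forest requires a separate post-hoc procedure to attribute its predictions to input features.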