Notion of Explainable Artificial Intelligence -- An Empirical Investigation from A Users Perspective
AKM Bahalul Haque, A. K. M. Najmul Islam, Patrick Mikalef
arXiv:2311.02102, arXiv - CS - Other Computer Science, 2023-11-01
Abstract
The growing attention to artificial intelligence-based applications has led to research interest in explainability issues. This emerging research attention on explainable AI (XAI) highlights the need to investigate end user-centric explainable AI. This study therefore investigates user-centric explainable AI, using recommendation systems as the study context. We conducted focus group interviews to collect qualitative data on the recommendation system. We asked participants about end users' comprehension of a recommended item, its probable explanation, and their opinions on making a recommendation explainable. Our findings reveal that end users want a non-technical, tailor-made explanation with on-demand supplementary information. We also observed that users require an explanation of personal data usage, detailed user feedback, and authentic and reliable explanations. Finally, we propose a synthesized framework that aims to involve the end user in the development process for requirements collection and validation.