A social evaluation of the perceived goodness of explainability in machine learning
Jonas Wanner, L. Herm, K. Heinrich, Christian Janiesch
Journal of Business Analytics, 25 July 2021. DOI: 10.1080/2573234X.2021.1952913
Citations: 6
Abstract
Machine learning models in decision support systems already outperform pre-existing statistical methods. However, their predictions face challenges, as the underlying calculations are often complex and not all model predictions are traceable. In fact, many well-performing models are black boxes to the user, who consequently cannot interpret and understand the rationale behind a model's prediction. Explainable artificial intelligence has emerged as a field of study to counteract this. However, current research often neglects the human factor. Against this backdrop, we derived and examined factors that influence the goodness of a model's explainability in a social evaluation with end users. We implemented six common ML algorithms for four different benchmark datasets in a two-factor factorial design and asked potential end users to rate different factors in a survey. Our results show that the perceived goodness of explainability is moderated by the problem type and strongly correlates with trustworthiness as the most important factor.
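For context, the study design compares several common ML algorithms across benchmark datasets. The snippet below is a minimal illustrative sketch of such a comparison, assuming scikit-learn and an arbitrary benchmark dataset; the specific algorithms, dataset, and settings are assumptions for illustration only and do not reflect the authors' actual implementation or survey procedure.

```python
# Illustrative sketch: train six common ML algorithms on one benchmark
# dataset and report test accuracy. Algorithm choice, dataset, and
# hyperparameters are assumptions, not the study's actual setup.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# One benchmark dataset (binary classification), split into train and test sets.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Six common ML algorithms, ranging from interpretable to black-box models.
models = {
    "Logistic Regression": LogisticRegression(max_iter=5000),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Random Forest": RandomForestClassifier(random_state=42),
    "SVM": SVC(),
    "k-Nearest Neighbours": KNeighborsClassifier(),
    "Neural Network": MLPClassifier(max_iter=2000, random_state=42),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: test accuracy = {acc:.3f}")
```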