{"title":"Build confidence and acceptance of AI-based decision support systems - Explainable and liable AI","authors":"C. Nicodeme","doi":"10.1109/HSI49210.2020.9142668","DOIUrl":null,"url":null,"abstract":"Artificial Intelligence has known an incredible development since 2012. It was due to the impressive improvement of sensors, data quality and quantity, storage and computing capacity, etc. The promises AI offered led many scientific domains to implement AI-based decision support tool. However, despite numerous amazing results, very serious failures have raised Human mistrust, fear and scorn against AI. In Industries, staff members cannot afford to use tools that might fail them. This is especially true for Transportation operators where security and safety are at risk. Then, the question that arises is how to build Human confidence and acceptance of AI-based decision support system. In this paper, we combine different points of view to propose a structured overview of Transparency, Explicability and Interpretability, with new definitions arising as a consequence. Then we discuss the need for understandable information from the AI system, to legitimate or refute the tool's proposal. To conclude we offer ethical reflexions and ideas to develop confidence in AI.","PeriodicalId":371828,"journal":{"name":"2020 13th International Conference on Human System Interaction (HSI)","volume":"77 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"13","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 13th International Conference on Human System Interaction (HSI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HSI49210.2020.9142668","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 13
Abstract
Artificial Intelligence has undergone incredible development since 2012, driven by impressive improvements in sensors, data quality and quantity, storage and computing capacity, and more. The promises of AI have led many scientific domains to implement AI-based decision support tools. However, despite numerous remarkable results, some very serious failures have raised human mistrust, fear, and scorn toward AI. In industry, staff members cannot afford to use tools that might fail them. This is especially true for transportation operators, where security and safety are at risk. The question that arises, then, is how to build human confidence in and acceptance of AI-based decision support systems. In this paper, we combine different points of view to propose a structured overview of Transparency, Explicability, and Interpretability, with new definitions arising as a consequence. We then discuss the need for understandable information from the AI system in order to validate or refute the tool's proposal. To conclude, we offer ethical reflections and ideas for developing confidence in AI.