The need for quantification of uncertainty in artificial intelligence for clinical data analysis: increasing the level of trust in the decision-making process
Moloud Abdar, A. Khosravi, Sheikh Mohammed Shariful Islam, Usha R. Acharya, A. Vasilakos
{"title":"The need for quantification of uncertainty in artificial intelligence for clinical data analysis: increasing the level of trust in the decision-making process","authors":"Moloud Abdar, A. Khosravi, Sheikh Mohammed Shariful Islam, Usha R. Acharya, A. Vasilakos","doi":"10.1109/msmc.2022.3150144","DOIUrl":null,"url":null,"abstract":"Different terms such as trust, certainty, and uncertainty are of great importance in the real world and play a critical role in artificial intelligence (AI) applications. The implied assumption is that the level of trust in AI can be measured in different ways. This principle can be achieved by distinguishing uncertainties in predicting AI methods used in medical studies. Hence, it is necessary to propose effective uncertainty quantification (UQ) and measurement methods to have trustworthy AI (TAI) clinical decision support systems (CDSSs). In this study, we present practical guidelines for developing and using UQ methods while applying various AI techniques for medical data analysis.","PeriodicalId":43649,"journal":{"name":"IEEE Systems Man and Cybernetics Magazine","volume":"3 1","pages":"28-40"},"PeriodicalIF":1.9000,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"14","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Systems Man and Cybernetics Magazine","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/msmc.2022.3150144","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, CYBERNETICS","Score":null,"Total":0}
引用次数: 14
Abstract
Terms such as trust, certainty, and uncertainty are of great importance in the real world and play a critical role in artificial intelligence (AI) applications. The implied assumption is that the level of trust in AI can be measured in different ways. This can be achieved by quantifying the uncertainty in the predictions of AI methods used in medical studies. Hence, effective uncertainty quantification (UQ) and measurement methods are needed to build trustworthy AI (TAI) clinical decision support systems (CDSSs). In this study, we present practical guidelines for developing and using UQ methods while applying various AI techniques for medical data analysis.
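The abstract does not specify which UQ methods the guidelines cover. As a purely illustrative sketch of the kind of predictive-uncertainty estimate such guidelines address, the example below uses Monte Carlo dropout, one common UQ technique; the model, placeholder data, and function names are assumptions for demonstration and are not taken from the paper.

```python
# Illustrative sketch only: Monte Carlo dropout as one common UQ approach.
# The paper's own guidelines are not reproduced here; the model, the data,
# and the choice of MC dropout are assumptions for demonstration purposes.
import torch
import torch.nn as nn


class MCDropoutClassifier(nn.Module):
    """Small classifier whose dropout is kept active at prediction time."""

    def __init__(self, in_features: int, n_classes: int, p: float = 0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 64),
            nn.ReLU(),
            nn.Dropout(p),  # remains stochastic during prediction for UQ
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def predict_with_uncertainty(model: nn.Module, x: torch.Tensor, n_samples: int = 50):
    """Run repeated stochastic forward passes and return the mean class
    probabilities plus predictive entropy as a simple uncertainty score."""
    model.train()  # keep dropout layers stochastic (Monte Carlo dropout)
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy


# Example usage on random "patient feature" vectors (placeholder data).
model = MCDropoutClassifier(in_features=10, n_classes=2)
x = torch.randn(4, 10)
mean_probs, uncertainty = predict_with_uncertainty(model, x)
print(mean_probs, uncertainty)
```

In a clinical decision support setting, a high predictive entropy on a given case could be used to flag that prediction for human review rather than acting on it automatically, which is one way a UQ score can raise trust in the overall decision-making process.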