{"title":"Explaining Machine Learning Predictions: A Case Study","authors":"Prarthana Dutta, Naresh Babu Muppalaneni","doi":"10.1109/TEECCON54414.2022.9854821","DOIUrl":null,"url":null,"abstract":"The growing trends and demands for Artificial Intelligence in various domains due to their excellent performance and generalization ability are known to all. These decisions affect the population in general as they usually deal with sensitive tasks in various fields such as healthcare, education, transportation, etc. Hence, understanding these learned representations would add more descriptive knowledge to better interpret the decisions with the ground truth. The European General Data Protection Regulation reserves the right to receive an explanation against a model producing an automated decision. Understanding the decisions would validate the model behavior, ensure trust, and deal with the risk associated with the model. Upon analyzing the relevant features, we can decide whether the model predictions could be trusted or not in the future. We can further try to reduce the misclassification rate by rectifying the features (of the misclassified instances) if needed. In this way, we can peek into the black-box and gain insight into a model’s prediction, thus understanding the learned representations. In pursuit of this objective, a common approach would be to devise an explanatory model that would explain the predictions made by a model and further analyze those predictions with the ground truth information. We initiated a case study on a diabetes risk prediction dataset by understanding local predictions made by five different Machine Learning models and trying to provide explanations for the misclassified instances.","PeriodicalId":251455,"journal":{"name":"2022 Trends in Electrical, Electronics, Computer Engineering Conference (TEECCON)","volume":"11 2","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 Trends in Electrical, Electronics, Computer Engineering Conference (TEECCON)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TEECCON54414.2022.9854821","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Artificial Intelligence is increasingly in demand across domains owing to its strong performance and generalization ability. The decisions these models produce affect the population at large, as they often involve sensitive tasks in fields such as healthcare, education, and transportation. Understanding the learned representations therefore adds descriptive knowledge that helps interpret model decisions against the ground truth. The European General Data Protection Regulation grants individuals the right to an explanation of decisions produced by automated systems. Understanding these decisions validates model behavior, builds trust, and helps manage the risk associated with the model. By analyzing the relevant features, we can judge whether the model's predictions can be trusted in the future, and, where needed, reduce the misclassification rate by rectifying the features of the misclassified instances. In this way, we peek into the black box and gain insight into a model's predictions, thereby understanding the learned representations. A common approach to this end is to devise an explanatory model that explains the predictions of the underlying model and analyzes them against the ground truth. We present a case study on a diabetes risk prediction dataset in which we examine the local predictions made by five different Machine Learning models and provide explanations for the misclassified instances.
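To make the workflow concrete, the following is a minimal sketch of explaining misclassified instances locally. The abstract does not name the explanation method, dataset, or models, so the LIME tabular explainer, a synthetic binary-classification dataset, and a random forest classifier are used here purely as illustrative stand-ins, not as the authors' actual setup.

```python
# Minimal sketch: locally explain misclassified test instances.
# Assumptions (not from the paper): LIME as the local explainer,
# make_classification as a stand-in for the diabetes risk dataset,
# RandomForestClassifier as one of the five models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Stand-in tabular dataset: 500 "patients", 8 features, binary risk label.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
y_pred = model.predict(X_test)

# Local explainer built on the training distribution.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["low risk", "high risk"],
    mode="classification",
)

# For each misclassified test instance, list the features that drove
# the (wrong) prediction, so they can be inspected against ground truth.
for idx in np.where(y_pred != y_test)[0][:3]:  # first few misclassifications
    exp = explainer.explain_instance(
        X_test[idx], model.predict_proba, num_features=4
    )
    print(f"instance {idx}: predicted {y_pred[idx]}, true {y_test[idx]}")
    for feature, weight in exp.as_list():
        print(f"  {feature}: {weight:+.3f}")
```

In practice the same loop would be repeated for each of the five trained models, and the per-instance feature weights compared with the ground-truth labels to decide whether a misclassification stems from misleading feature values that could be rectified.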