Siniša M. Arsić, Marko M. Mihić, Dejan Petrović, Zorica M. Mitrović, S. Kostić, O. Mihic
{"title":"Review of measures for improving ML model interpretability: Empowering decision makers with transparent insights","authors":"Siniša M. Arsić, Marko M. Mihić, Dejan Petrović, Zorica M. Mitrović, S. Kostić, O. Mihic","doi":"10.1109/ACDSA59508.2024.10467907","DOIUrl":null,"url":null,"abstract":"This paper investigates actionable measures to enhance the interpretability of machine learning models, addressing the critical need for transparency in decision-making processes. By proposing and briefly comparing specific measures, this paper aims to empower common knowledge with clearer insights into model predictions, fostering trust and understanding. Theoretical findings and overall discussion encompass techniques for model explanation, feature importance, and interpretability tools, offering a comprehensive guide for practitioners seeking to clarify the black box nature of machine learning outputs. Findings suggest three methods for improving model interpretability. The outlined approaches prioritize real-world applicability, enabling managers to make informed decisions with confidence.","PeriodicalId":518964,"journal":{"name":"2024 International Conference on Artificial Intelligence, Computer, Data Sciences and Applications (ACDSA)","volume":"798 1","pages":"1-5"},"PeriodicalIF":0.0000,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2024 International Conference on Artificial Intelligence, Computer, Data Sciences and Applications (ACDSA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ACDSA59508.2024.10467907","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
This paper investigates actionable measures to enhance the interpretability of machine learning models, addressing the critical need for transparency in decision-making processes. By proposing and briefly comparing specific measures, it aims to give decision makers clearer insight into model predictions, fostering trust and understanding. The theoretical findings and discussion cover techniques for model explanation, feature importance, and interpretability tools, offering a practical guide for practitioners seeking to open up the black-box nature of machine learning outputs. The findings suggest three methods for improving model interpretability. The outlined approaches prioritize real-world applicability, enabling managers to make informed decisions with confidence.
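The abstract does not specify concrete tooling, so the snippet below is only a minimal, hedged sketch of one common feature-importance technique the text alludes to: permutation importance computed with scikit-learn. The dataset, model, and library choice are illustrative assumptions, not the measures proposed in the paper.

```python
# Hedged sketch: permutation feature importance as one interpretability measure.
# Dataset, model, and scikit-learn usage are illustrative assumptions,
# not the specific methods evaluated in the paper.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in score;
# larger drops indicate features the model relies on more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five most influential features.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f} "
          f"+/- {result.importances_std[idx]:.3f}")
```

A summary like this can be handed directly to decision makers, since it ranks inputs by their measured effect on predictive performance rather than by internal model parameters.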