Taxonomy and Survey of Interpretable Machine Learning Method

Saikat Das, Ph.D., Namita Agarwal, D. Venugopal, Frederick T. Sheldon, S. Shiva
DOI: 10.1109/SSCI47803.2020.9308404
Venue: 2020 IEEE Symposium Series on Computational Intelligence (SSCI)
Published: 2020-12-01
Citations: 6

Abstract

Since traditional machine learning (ML) techniques use black-box models, the internal operation of the classifier is unknown to humans. Due to this black-box nature of ML classifiers, the trustworthiness of their predictions is sometimes questionable. Interpretable machine learning (IML) is a way of dissecting ML classifiers to overcome this shortcoming and provide a more reasoned explanation of model predictions. In this paper, we explore several IML methods and their applications in various domains. Moreover, a detailed survey of IML methods, along with the essential building blocks of a black-box model, is presented here. We identify and describe the requirements of IML methods and, for completeness, propose a taxonomy of IML methods that classifies each into distinct groups or sub-categories. The goal, therefore, is to describe the state of the art in IML methods and explain them in more concrete and understandable ways by providing a better basis of knowledge for those building blocks and our associated requirements analysis.
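To make the idea of "dissecting" a black-box classifier concrete, the following is a minimal, illustrative sketch of one model-agnostic IML technique, permutation feature importance: shuffle one input feature at a time and measure how much the model's error grows. The toy `black_box` scorer and all names here are assumptions for illustration, not the paper's own method or data.

```python
import random

# Toy "black-box" model: a fixed linear scorer over three features.
# In the survey's terms, this stands in for any opaque ML classifier.
def black_box(x):
    return 3.0 * x[0] + 0.5 * x[1] + 0.0 * x[2]

def mse(model, X, y):
    """Mean squared error of the model's predictions against targets y."""
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(model, X, y, seed=0):
    """Model-agnostic importance: shuffle one feature column at a time
    and report how much the error increases over the baseline."""
    rng = random.Random(seed)
    base = mse(model, X, y)
    scores = []
    for j in range(len(X[0])):
        col = [x[j] for x in X]
        rng.shuffle(col)  # break the link between feature j and the target
        X_perm = [x[:j] + [col[i]] + x[j + 1:] for i, x in enumerate(X)]
        scores.append(mse(model, X_perm, y) - base)
    return scores

X = [[float(i), float(i % 5), float(i % 2)] for i in range(50)]
y = [black_box(x) for x in X]  # targets generated by the model itself
imp = permutation_importance(black_box, X, y)
# Feature 0 dominates the importance scores; feature 2 (zero weight)
# contributes nothing, so permuting it leaves the error unchanged.
```

The technique treats the model purely as a function from inputs to outputs, which is exactly what makes such post-hoc methods applicable to any black-box classifier the survey covers.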