{"title":"评估机器学习中的解释能力:一个关键的评论","authors":"Maissae Haddouchi, A. Berrado","doi":"10.1145/3289402.3289549","DOIUrl":null,"url":null,"abstract":"Interpretability of Machine Learning (ML) methods and models is a fundamental issue that concerns a wide range of data mining research. This topic is not only an academic concern, but a crucial aspect for public acceptance of ML in practical contexts as well. Indeed, one should know that the lack of interpretability can be a real drawback for various application areas, such as in healthcare, biology, sociology and industrial decision support systems. In fact, an algorithm, which does not give enough information about the learner process and the learned model would be merely discarded in favor of less accurate and more interpretable approaches. Several papers have been proposed to interpret efficient models, such as Neural Networks and Random Forest, but there is still no consensus about what interpretability refers to. Interestingly, the term has been associated with different notions depending on the point of view of each author, as well as the nature of the issue being treated and the users concerned by the explanation. Therefore, this paper primarily aims to provide a painstaking overview of the aspects related to interpretability of ML learning process and resulting models, as reported by the literature, and to organize the aforementioned aspects into metrics that can be used for ML Interpretability scoring.","PeriodicalId":199959,"journal":{"name":"Proceedings of the 12th International Conference on Intelligent Systems: Theories and Applications","volume":"279 3","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Assessing interpretation capacity in Machine Learning: A critical review\",\"authors\":\"Maissae Haddouchi, A. Berrado\",\"doi\":\"10.1145/3289402.3289549\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Interpretability of Machine Learning (ML) methods and models is a fundamental issue that concerns a wide range of data mining research. This topic is not only an academic concern, but a crucial aspect for public acceptance of ML in practical contexts as well. Indeed, one should know that the lack of interpretability can be a real drawback for various application areas, such as in healthcare, biology, sociology and industrial decision support systems. In fact, an algorithm, which does not give enough information about the learner process and the learned model would be merely discarded in favor of less accurate and more interpretable approaches. Several papers have been proposed to interpret efficient models, such as Neural Networks and Random Forest, but there is still no consensus about what interpretability refers to. Interestingly, the term has been associated with different notions depending on the point of view of each author, as well as the nature of the issue being treated and the users concerned by the explanation. 
Therefore, this paper primarily aims to provide a painstaking overview of the aspects related to interpretability of ML learning process and resulting models, as reported by the literature, and to organize the aforementioned aspects into metrics that can be used for ML Interpretability scoring.\",\"PeriodicalId\":199959,\"journal\":{\"name\":\"Proceedings of the 12th International Conference on Intelligent Systems: Theories and Applications\",\"volume\":\"279 3\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-10-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 12th International Conference on Intelligent Systems: Theories and Applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3289402.3289549\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 12th International Conference on Intelligent Systems: Theories and Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3289402.3289549","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Assessing interpretation capacity in Machine Learning: A critical review
Interpretability of Machine Learning (ML) methods and models is a fundamental issue that concerns a wide range of data mining research. This topic is not only an academic concern, but also a crucial factor in the public acceptance of ML in practical contexts. Indeed, a lack of interpretability can be a real drawback in many application areas, such as healthcare, biology, sociology, and industrial decision support systems. An algorithm that does not provide enough insight into the learning process and the learned model may simply be discarded in favor of less accurate but more interpretable approaches. Several papers have proposed ways to interpret high-performing models such as Neural Networks and Random Forests, but there is still no consensus on what interpretability refers to. Interestingly, the term has been associated with different notions depending on each author's point of view, the nature of the problem being addressed, and the users the explanation is intended for. Therefore, this paper primarily aims to provide a thorough overview of the aspects related to the interpretability of ML learning processes and the resulting models, as reported in the literature, and to organize these aspects into metrics that can be used for ML interpretability scoring.
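The abstract's closing idea, organizing interpretability aspects into metrics that feed an overall interpretability score, can be pictured with a minimal sketch. The criteria names, weights, and the weighted-average aggregation below are illustrative assumptions for the sake of concreteness, not the scoring scheme defined in the paper:

```python
# Illustrative interpretability scoring rubric.
# Criteria, weights, and the aggregation rule are hypothetical,
# not taken from Haddouchi & Berrado (2018).

CRITERIA_WEIGHTS = {
    "simulatability": 0.3,            # can a user mentally trace the model?
    "decomposability": 0.3,           # are individual components meaningful?
    "post_hoc_explainability": 0.2,   # quality of after-the-fact explanations
    "fidelity_of_explanations": 0.2,  # do explanations match model behavior?
}

def interpretability_score(ratings: dict) -> float:
    """Aggregate per-criterion ratings (each in [0, 1]) into a weighted score."""
    for name, value in ratings.items():
        if name not in CRITERIA_WEIGHTS:
            raise ValueError(f"Unknown criterion: {name}")
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"Rating for {name} must be in [0, 1]")
    # Missing criteria default to 0, i.e. "not interpretable on this aspect".
    return sum(CRITERIA_WEIGHTS[name] * ratings.get(name, 0.0)
               for name in CRITERIA_WEIGHTS)

# Example: a Random Forest judged moderately decomposable but hard to simulate.
print(interpretability_score({
    "simulatability": 0.2,
    "decomposability": 0.6,
    "post_hoc_explainability": 0.7,
    "fidelity_of_explanations": 0.5,
}))  # -> 0.48
```

Any real rubric derived from the paper would substitute the authors' own criteria and weighting; the point of the sketch is only that qualitative interpretability aspects can be turned into comparable numeric scores.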