Explaining Black-box Predictions by Generating Local Meaningful Perturbations
Tejaswani Verma, Christoph Lingenfelder, D. Klakow
2021 Third International Conference on Transdisciplinary AI (TransAI), September 2021. DOI: 10.1109/TransAI51903.2021.00030
Generating explanations of predictions made by machine learning models is a difficult task, especially for black-box models. One possible way to explain an individual decision or recommendation for a given instance is to build an interpretable local surrogate for the underlying black-box model in the vicinity of that instance. This approach has been adopted by many algorithms, for example LIME and LEAFAGE. These algorithms, however, rely on strict assumptions and prerequisites that not only limit their applicability but also reduce the fidelity of their local approximations to the black box. We present ways to overcome these shortcomings, including a better definition of the neighborhood, the removal of prerequisites, and dropping the assumption of linearity in the local model. The main contribution of this paper is a novel algorithm (LEMP) that explains a given instance by building a surrogate model trained on perturbations generated in the neighborhood of that instance. Experiments show that our approach is more widely applicable and generates interpretable models with better fidelity to the underlying black-box model than previous algorithms.
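The abstract describes LEMP only at a high level. For orientation, here is a minimal sketch of the generic local-surrogate recipe that LIME popularized and that LEMP builds on: perturb the instance, label the perturbations with the black box, and fit an interpretable model on that local data. The Gaussian perturbation scheme, the proximity kernel, and the ridge surrogate below are illustrative assumptions, not the paper's method; LEMP in particular generates meaningful perturbations rather than plain noise and does not assume a linear local model.

```python
# Sketch of a generic local-surrogate explainer (LIME-style baseline),
# not the LEMP implementation from the paper.
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(black_box_predict, instance, scale=0.1, n_samples=500, seed=0):
    """Fit a linear surrogate to black_box_predict around `instance`.

    `instance` is a 1-D feature vector; `scale` controls the neighborhood
    width. Gaussian perturbations and a ridge surrogate are illustrative
    choices only.
    """
    rng = np.random.default_rng(seed)
    # Generate perturbations in the neighborhood of the instance.
    X_local = instance + rng.normal(0.0, scale, size=(n_samples, instance.shape[0]))
    # Query the black box for labels (e.g., the probability of one class).
    y_local = black_box_predict(X_local)
    # Weight samples by proximity to the instance so the fit stays local.
    dists = np.linalg.norm(X_local - instance, axis=1)
    weights = np.exp(-(dists ** 2) / (2 * scale ** 2))
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(X_local, y_local, sample_weight=weights)
    return surrogate  # surrogate.coef_ gives local feature attributions

# Usage (hypothetical black box): explanation = local_surrogate(f, x0)
```

The shortcomings the paper targets are visible in this sketch: the explanation quality hinges on how the neighborhood is defined (here, an arbitrary Gaussian `scale`) and on the linearity of the surrogate, both of which LEMP is designed to address.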