Agnostic Local Explanation for Time Series Classification
Maël Guillemé, Véronique Masson, L. Rozé, A. Termier
2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI), November 2019
DOI: 10.1109/ICTAI.2019.00067
Recent advances in Machine Learning (such as Deep Learning) have brought tremendous gains in classification accuracy. However, these approaches build complex non-linear models, making the resulting predictions difficult for humans to interpret. The field of model interpretability has therefore recently emerged, aiming to address this issue by designing methods that explain a posteriori the predictions of complex learners. Interpretability frameworks such as LIME and SHAP have been proposed for tabular, image and text data. Nowadays, with the advent of the Internet of Things and of pervasive monitoring, time series have become ubiquitous and their classification is a crucial task in many application domains. As in other data domains, state-of-the-art time-series classifiers rely on complex models and typically do not provide intuitive and easily interpretable outputs, yet no interpretability framework has so far been proposed for this type of data. In this paper, we propose the first agnostic Local Explainer For TIme Series classificaTion (LEFTIST). LEFTIST provides explanations for predictions made by any time series classifier. Our thorough experiments on synthetic and real-world datasets show that the explanations provided by LEFTIST are at once faithful to the classification model and understandable by human users.
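
The abstract does not detail LEFTIST's internals, so the following is only a minimal sketch of the general LIME-style recipe that such model-agnostic local explainers build on, adapted to time series: split the series into contiguous segments, generate perturbed neighbors by masking segments, query the black-box classifier, and fit a weighted linear surrogate whose coefficients act as segment importances. The helper explain_series, the mean-value replacement strategy, and all parameter choices below are illustrative assumptions, not the method described in the paper.

    # Hedged sketch of a LIME-style local explainer for a time series classifier.
    # explain_series is a hypothetical helper; LEFTIST's actual neighbor
    # generation and replacement strategies may differ.
    import numpy as np
    from sklearn.linear_model import Ridge

    def explain_series(predict_proba, series, n_segments=8, n_samples=500,
                       target_class=None, kernel_width=0.25, seed=None):
        """Return one importance score per segment of a 1-D numpy series."""
        rng = np.random.default_rng(seed)
        segments = np.array_split(np.arange(len(series)), n_segments)

        # Binary masks over segments: 1 keeps a segment, 0 replaces it with
        # the series mean (one of several possible replacement strategies).
        masks = rng.integers(0, 2, size=(n_samples, n_segments))
        masks[0] = 1  # include the unperturbed instance itself

        neighbors = np.tile(series, (n_samples, 1)).astype(float)
        fill_value = series.mean()
        for i, mask in enumerate(masks):
            for s, idx in enumerate(segments):
                if mask[s] == 0:
                    neighbors[i, idx] = fill_value

        # Query the black-box classifier on the perturbed neighbors.
        probs = predict_proba(neighbors)  # expected shape: (n_samples, n_classes)
        if target_class is None:
            target_class = int(np.argmax(probs[0]))

        # Weight neighbors by proximity to the original instance in mask space,
        # so the surrogate is faithful locally rather than globally.
        distance = 1.0 - masks.mean(axis=1)  # fraction of segments removed
        weights = np.exp(-(distance ** 2) / kernel_width ** 2)

        # Fit an interpretable linear surrogate on the binary segment features;
        # its coefficients are the explanation.
        surrogate = Ridge(alpha=1.0)
        surrogate.fit(masks, probs[:, target_class], sample_weight=weights)
        return surrogate.coef_

Because the explainer only calls predict_proba on perturbed inputs, it treats the classifier as a black box, which is what makes this family of methods agnostic to the underlying model.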