A Feature Structure Based Interpretability Evaluation Approach for Deep Learning
X. Li, Xiaoguang Gao, Chenfeng Wang, Qianglong Wang
2023 8th International Conference on Control and Robotics Engineering (ICCRE), published 2023-04-21
DOI: 10.1109/ICCRE57112.2023.10155583 (https://doi.org/10.1109/ICCRE57112.2023.10155583)
Abstract
Deep learning's shortcomings in interpretability make it difficult to trust such complex black-box models in high-stakes decision problems. Research on deep learning interpretability has yet to produce a breakthrough, and the inner workings of these models remain largely opaque. At the same time, there is no reliable, universal standard for evaluating the interpretability of deep learning models. This paper therefore proposes an interpretability evaluation method based on the feature structure of deep learning models. First, a trustworthiness evaluation is performed to confirm the robustness of the model with the help of Layer-wise Relevance Propagation (LRP). On this basis, the interpretability of the feature structure is measured from the relevance between features and outputs. Experiments show that the method can effectively compare the interpretability of different models.
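The abstract names Layer-wise Relevance Propagation as the tool for the trustworthiness step but does not give its formulation. The sketch below is a minimal LRP-ε backward pass for a small fully connected ReLU network in NumPy; the `lrp_epsilon` function, the toy two-layer network, and the `eps` stabilizer are illustrative assumptions, not the authors' implementation.

```python
# Minimal LRP-epsilon sketch for a fully connected ReLU network (NumPy).
# Assumed for illustration only; the paper's exact method is not specified.
import numpy as np

def lrp_epsilon(weights, biases, x, eps=1e-6):
    """Propagate output relevance back to the input features.

    weights/biases: per-layer parameters of a ReLU MLP (linear output layer).
    x: 1-D input vector. Returns one relevance score per input feature.
    """
    # Forward pass, storing each layer's activations.
    activations = [x]
    a = x
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = W @ a + b
        a = np.maximum(z, 0.0) if i < len(weights) - 1 else z
        activations.append(a)

    # Initialize relevance with the score of the predicted class only.
    relevance = np.zeros_like(a)
    k = np.argmax(a)
    relevance[k] = a[k]

    # Backward pass: redistribute relevance in proportion to each input
    # neuron's contribution a_i * w_ij, with an eps-stabilized denominator.
    for i in reversed(range(len(weights))):
        W, b = weights[i], biases[i]
        a_prev = activations[i]
        z = W @ a_prev + b
        s = relevance / (z + eps * np.where(z >= 0, 1.0, -1.0))
        relevance = a_prev * (W.T @ s)
    return relevance

# Toy usage: random two-layer network, relevance of four input features.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 4)), rng.standard_normal((3, 8))]
biases = [np.zeros(8), np.zeros(3)]
print(lrp_epsilon(weights, biases, rng.standard_normal(4)))
```

Scores like these, aggregated over a dataset, would give the feature-to-output relevance on which the paper's feature-structure interpretability measure is built.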