Counterfactual explanations for deep learning-based traffic forecasting

Rushan Wang, Yanan Xin, Yatao Zhang, Fernando Perez-Cruz, Martin Raubal

Communications in Transportation Research, Volume 5 (2025), Article 100176. DOI: 10.1016/j.commtr.2025.100176. Published 2025-04-09. Available at https://www.sciencedirect.com/science/article/pii/S2772424725000162
Deep learning models are widely used in traffic forecasting and have achieved state-of-the-art prediction accuracy. However, their black-box nature presents challenges for interpretability and usability, particularly when predictions are significantly influenced by complex urban contextual features. This study leverages an explainable artificial intelligence (AI) approach, counterfactual explanations, to enhance the explainability of deep learning-based traffic forecasting models and elucidate how their predictions relate to various contextual features. We present a comprehensive framework that generates counterfactual explanations for traffic forecasting. The study first implements a graph convolutional network (GCN) to predict traffic speed based on historical traffic data and contextual variables. Counterfactual explanations are then generated through a multi-objective optimization process with four objectives: validity, proximity, sparsity, and plausibility, each emphasizing a different aspect of the optimization. We investigate the impact of contextual features on traffic speed prediction under varying spatial and temporal conditions. The scenario-driven counterfactual explanations integrate two types of user-defined constraints, directional constraints and weighting constraints, to tailor the search for counterfactual explanations to specific use cases. These tailored explanations benefit machine learning practitioners who aim to understand the model's learning mechanisms and traffic domain experts who seek insights into which factors must change to alter traffic conditions. The results showcase the effectiveness of counterfactual explanations in revealing traffic patterns learned by deep learning models and explaining the relationship between traffic prediction and contextual features, demonstrating their potential for interpreting black-box deep learning models.
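To make the optimization concrete, the sketch below illustrates how the four objectives (validity, proximity, sparsity, plausibility) and the two user-defined constraint types (directional and weighting) could be combined in a counterfactual search. This is a minimal illustration, not the authors' implementation: it uses a simple random hill-climbing search and a stand-in linear scorer in place of the paper's GCN, and all feature names, weights, and the plausibility proxy are assumptions.

```python
# Hypothetical sketch of a multi-objective counterfactual search.
# The model, features, and weights are illustrative assumptions.
import numpy as np

def counterfactual_loss(x_cf, x_orig, model, target_speed, weights=(1.0, 1.0, 1.0, 1.0)):
    """Weighted sum of the four objectives; `weights` encodes weighting constraints."""
    w_valid, w_prox, w_sparse, w_plaus = weights
    pred = model(x_cf)
    validity = (pred - target_speed) ** 2                      # reach the desired prediction
    proximity = float(np.sum((x_cf - x_orig) ** 2))            # stay close to the original input
    sparsity = float(np.count_nonzero(~np.isclose(x_cf, x_orig)))  # change few features
    # Plausibility: assumed proxy that penalizes values outside a normalized [0, 1] range.
    plausibility = float(np.sum(np.clip(-x_cf, 0.0, None) + np.clip(x_cf - 1.0, 0.0, None)))
    return (w_valid * validity + w_prox * proximity
            + w_sparse * sparsity + w_plaus * plausibility)

def search_counterfactual(x_orig, model, target_speed, mutable, direction,
                          weights=(1.0, 1.0, 1.0, 1.0), n_iter=5000, step=0.05, seed=0):
    """Random hill-climbing minimization of the combined objective.

    `mutable` masks which contextual features may change at all; `direction`
    holds +1 (may only increase), -1 (may only decrease), or 0 (free) per
    feature, implementing the directional constraints.
    """
    rng = np.random.default_rng(seed)
    best = x_orig.copy()
    best_loss = counterfactual_loss(best, x_orig, model, target_speed, weights)
    for _ in range(n_iter):
        delta = rng.normal(0.0, step, size=x_orig.shape) * mutable
        # Zero out any perturbation that violates a directional constraint.
        delta = np.where(direction * delta < 0, 0.0, delta)
        cand = best + delta
        loss = counterfactual_loss(cand, x_orig, model, target_speed, weights)
        if loss < best_loss:
            best, best_loss = cand, loss
    return best

# Toy usage with a stand-in linear "model" (not the paper's GCN).
model = lambda x: float(x @ np.array([0.6, -0.3, 0.2, 0.5]))
x = np.array([0.4, 0.7, 0.2, 0.5])          # normalized contextual features (assumed)
mutable = np.array([1.0, 1.0, 0.0, 1.0])    # third feature is immutable
direction = np.array([1, -1, 0, 0])         # e.g., first feature may only increase
x_cf = search_counterfactual(x, model, target_speed=0.6, mutable=mutable, direction=direction)
print("counterfactual:", np.round(x_cf, 3), "prediction:", round(model(x_cf), 3))
```

The scalarized weighted sum shown here is only one way to handle the four objectives; a dedicated multi-objective optimizer (e.g., an evolutionary method returning a Pareto front) would likely be closer in spirit to the framework the abstract describes.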