Interpreting Predictive Models through Causality: A Query-Driven Methodology
Mahdi Hadj Ali, Yann Le Biannic, Pierre-Henri Wuillemin
The International FLAIRS Conference Proceedings, published 2023-05-08. DOI: 10.32473/flairs.36.133387
Citations: 0
Abstract
Machine learning algorithms have been widely adopted in recent years due to their efficiency and versatility across many fields. However, the complexity of predictive models has led to a lack of interpretability in automatic decision-making. Recent works have improved general interpretability by estimating the contributions of input features to the predictions of a pre-trained model. Despite these advancements, practitioners still seek causal insights into the underlying data-generating mechanisms. To this end, some works have attempted to integrate causal knowledge into interpretability, as non-causal techniques can lead to paradoxical explanations. These efforts have provided answers to various queries, but relying on a single pre-trained model may result in quantification problems. In this paper, we argue that each causal query requires its own reasoning; thus, a single predictive model is not suited to all questions. Instead, we propose a new framework that prioritizes the query of interest and then derives a query-driven methodology according to the structure of the causal model. This yields a predictive model tailored to the query, together with a matching interpretability technique. Specifically, it provides a numerical estimate of causal effects, which allows for accurate answers to explanatory questions when the causal structure is known.
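To illustrate the final point, the following is a minimal, hypothetical sketch (not taken from the paper) of how a known causal structure enables a numerical causal-effect estimate that a structure-agnostic model would get wrong. It assumes a toy linear structural causal model with confounder Z, where Z causes both X and Y, and adjusts for Z via the backdoor criterion:

```python
# Hypothetical illustration: with a known structure Z -> X, Z -> Y, X -> Y,
# the causal effect of X on Y is identified by adjusting for the confounder Z.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy linear SCM; the true causal effect of X on Y is 2.0.
Z = rng.normal(size=n)
X = 1.5 * Z + rng.normal(size=n)
Y = 2.0 * X + 3.0 * Z + rng.normal(size=n)

# Naive regression of Y on X alone ignores the structure and is confounded.
naive = np.linalg.lstsq(np.c_[X, np.ones(n)], Y, rcond=None)[0][0]

# Regression adjusting for the backdoor set {Z} recovers the causal effect.
adjusted = np.linalg.lstsq(np.c_[X, Z, np.ones(n)], Y, rcond=None)[0][0]

print(f"naive estimate:    {naive:.2f}")     # biased away from 2.0
print(f"adjusted estimate: {adjusted:.2f}")  # close to 2.0
```

The gap between the two estimates is the kind of paradoxical, non-causal explanation the abstract warns about: both numbers come from valid regressions on the same data, and only knowledge of the causal structure tells us which one answers the interventional query.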