{"title":"Reinforced model-agnostic counterfactual explanations for recommender systems","authors":"Ao Chang, Qingxian Wang","doi":"10.1117/12.2682249","DOIUrl":null,"url":null,"abstract":"Explanation is an important requirement for transparent and trustworthy recommender systems. When the recommendation model itself is not explainable, an explanation must be generated post-hoc. In contrast to traditional post-hoc explanation methods, counterfactual methods can provide scrutable and actionable explanations with high fidelity. Existing counterfactual explanation methods for recommender systems are either not generalizable or face a huge search space. In this work, we propose a reinforcement learning counterfactual explanation method MACER (Model-Agnostic Counterfactual Explanations for Recommender Systems) which generates item-based explanations for recommender systems. We embed the discrete action space into a continuous space, making it possible to use the process of finding counterfactual explanations as a task of reinforcement learning. This method treats the recommender system as a black box (model-agnostic) and has no requirement on the type of recommender system, and thus is applicable to all recommendation systems.","PeriodicalId":177416,"journal":{"name":"Conference on Electronic Information Engineering and Data Processing","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Conference on Electronic Information Engineering and Data Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2682249","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Explanation is an important requirement for transparent and trustworthy recommender systems. When the recommendation model itself is not explainable, explanations must be generated post hoc. In contrast to traditional post-hoc explanation methods, counterfactual methods provide scrutable and actionable explanations with high fidelity. Existing counterfactual explanation methods for recommender systems are either not generalizable or face a huge search space. In this work, we propose MACER (Model-Agnostic Counterfactual Explanations for Recommender Systems), a reinforcement learning method that generates item-based counterfactual explanations for recommender systems. By embedding the discrete action space into a continuous space, we formulate the search for counterfactual explanations as a reinforcement learning task. Because the method treats the recommender system as a black box (model-agnostic), it places no requirements on the type of recommender model and is therefore applicable to any recommender system.
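To make the black-box, discrete-to-continuous action idea concrete, below is a minimal, hypothetical Python sketch (not the authors' MACER implementation). It assumes a toy stand-in recommender (`black_box_recommend`), illustrative item embeddings, and a simple environment whose continuous "proto-actions" are mapped to the nearest discrete "remove this item" action before querying the black box; an actor-critic agent would normally be trained on this environment, but here a random policy is used purely for demonstration.

```python
# Hypothetical sketch: counterfactual-explanation environment for a black-box
# recommender. Discrete "remove item i" actions are embedded in a continuous
# space and recovered by nearest-neighbour lookup. Names, the toy recommender,
# and the reward shaping are illustrative assumptions, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
N_ITEMS, DIM = 50, 8
item_emb = rng.normal(size=(N_ITEMS, DIM))  # continuous embeddings of the discrete actions


def black_box_recommend(history):
    """Stand-in recommender: scores items by similarity to the mean embedding
    of the user's history. Only its outputs are observed (model-agnostic)."""
    if not history:
        return None
    profile = item_emb[list(history)].mean(axis=0)
    scores = item_emb @ profile
    scores[list(history)] = -np.inf  # do not re-recommend already-seen items
    return int(np.argmax(scores))


class CounterfactualEnv:
    """One episode = trying to flip the user's top recommendation by
    removing items from their interaction history."""

    def __init__(self, history):
        self.full_history = set(int(i) for i in history)
        self.reset()

    def reset(self):
        self.history = set(self.full_history)
        self.target = black_box_recommend(sorted(self.history))
        return self.target

    def step(self, proto_action):
        # Map the continuous proto-action to the nearest removable item.
        candidates = sorted(self.history)
        dists = np.linalg.norm(item_emb[candidates] - proto_action, axis=1)
        item = candidates[int(np.argmin(dists))]
        self.history.discard(item)

        new_top = black_box_recommend(sorted(self.history))
        flipped = new_top != self.target
        removed = len(self.full_history) - len(self.history)
        # Reward flipping the recommendation, penalise large explanations.
        reward = (10.0 if flipped else -1.0) - 0.5 * removed
        done = flipped or not self.history
        explanation = sorted(self.full_history - self.history)
        return reward, done, explanation


# Demo with a random continuous policy; a trained actor would emit the
# proto-actions instead of rng.normal.
env = CounterfactualEnv(history=rng.choice(N_ITEMS, size=10, replace=False))
env.reset()
done, explanation = False, []
while not done:
    reward, done, explanation = env.step(rng.normal(size=DIM))
print("counterfactual explanation (items whose removal flips the recommendation):", explanation)
```

The nearest-neighbour mapping is what keeps the agent's action space continuous while the underlying choice (which item to remove) stays discrete; the recommender is only ever queried through `black_box_recommend`, so no gradients or model internals are required.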