Towards a Research Community in Interpretable Reinforcement Learning: the InterpPol Workshop

Hector Kohler, Quentin Delfosse, Paul Festor, Philippe Preux

arXiv:2404.10906 · arXiv - CS - Symbolic Computation · Published 2024-04-16
Abstract
Embracing the pursuit of intrinsically explainable reinforcement learning raises crucial questions: What distinguishes explainability from interpretability? Should explainable and interpretable agents be developed outside of domains where transparency is imperative? What advantages do interpretable policies offer over neural networks? How can we rigorously define and measure interpretability in policies without user studies? Which reinforcement learning paradigms are best suited to developing interpretable agents? Can Markov Decision Processes integrate interpretable state representations? Beyond motivating an Interpretable RL community centered on these questions, we propose the first venue dedicated to Interpretable RL: the InterpPol Workshop.