Explaining Decisions in ML Models: a Parameterized Complexity Analysis

Sebastian Ordyniak, Giacomo Paesani, Mateusz Rychlicki, Stefan Szeider

arXiv:2407.15780 (arXiv - CS - Computational Complexity), published 2024-07-22
Abstract
This paper presents a comprehensive theoretical investigation into the parameterized complexity of explanation problems in various machine learning (ML) models. Contrary to the prevalent black-box perception, our study focuses on models with transparent internal mechanisms. We address two principal types of explanation problems: abductive and contrastive, both in their local and global variants. Our analysis encompasses diverse ML models, including Decision Trees, Decision Sets, Decision Lists, Ordered Binary Decision Diagrams, Random Forests, and Boolean Circuits, and ensembles thereof, each offering unique explanatory challenges. This research fills a significant gap in explainable AI (XAI) by providing a foundational understanding of the complexities of generating explanations for these models. This work provides insights vital for further research in the domain of XAI, contributing to the broader discourse on the necessity of transparency and accountability in AI systems.
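To illustrate the kind of problem the abstract refers to: a *local abductive explanation* for an instance is a (typically minimum-size) subset of features whose values alone already force the model's prediction on that instance. The following is a minimal brute-force sketch on a hypothetical three-feature decision-tree-style classifier (the function `f` and the instance are illustrative assumptions, not taken from the paper); it is the exponential search whose parameterized complexity the paper studies, not an algorithm proposed by the authors.

```python
from itertools import combinations, product

def f(x):
    # Hypothetical decision-tree-style classifier over three Boolean features:
    # predict 1 iff x[0] == 1 and x[2] == 1.
    if x[0] == 1:
        return 1 if x[2] == 1 else 0
    return 0

def is_abductive(instance, subset):
    """True iff fixing the features in `subset` to their values in `instance`
    forces f to output f(instance) on every completion of the free features."""
    target = f(instance)
    free = [i for i in range(len(instance)) if i not in subset]
    for values in product([0, 1], repeat=len(free)):
        x = list(instance)
        for i, v in zip(free, values):
            x[i] = v
        if f(x) != target:
            return False
    return True

def minimal_abductive_explanation(instance):
    # Brute force: try feature subsets in order of increasing size.
    n = len(instance)
    for size in range(n + 1):
        for subset in combinations(range(n), size):
            if is_abductive(instance, set(subset)):
                return set(subset)

print(minimal_abductive_explanation((1, 0, 1)))  # {0, 2}
```

For the instance (1, 0, 1) the minimal explanation is {0, 2}: once features 0 and 2 are fixed to 1, the prediction is 1 regardless of feature 1. A contrastive explanation asks the dual question: which features must be *changed* to flip the prediction.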