From human explanations to explainable AI: Insights from constrained optimization

Inga Ibs, Claire Ott, Frank Jäkel, Constantin A. Rothkopf

DOI: 10.1016/j.cogsys.2024.101297
URL: https://www.sciencedirect.com/science/article/pii/S1389041724000913
Published: 2024-10-18
Abstract
Many complex decision-making scenarios encountered in the real world, including energy systems and infrastructure planning, can be formulated as constrained optimization problems. Solutions for these problems are often obtained using white-box solvers based on linear program representations. Even though these algorithms are well understood and the optimality of the solution is guaranteed, explanations for the solutions are still necessary to build trust and ensure the implementation of policies. Solution algorithms represent the problem in a high-dimensional abstract space, which does not translate well to intuitive explanations for lay people. Here, we report three studies in which we pose constrained optimization problems in the form of a computer game to participants. In the game, called Furniture Factory, participants manage a company that produces furniture. In two qualitative studies, we first elicit representations and heuristics with concurrent explanations and validate their use in post-hoc explanations. We analyze the complexity of the explanations given by participants to gain a deeper understanding of how complex cognitively adequate explanations should be. Based on insights from the analysis of the two qualitative studies, we formalize strategies that in combination can act as descriptors for participants' behavior and optimal solutions. We match the strategies to decisions in a large behavioral dataset (>150 participants) gathered in a third study, and compare the complexity of strategy combinations to the complexity featured in participants' explanations. Based on the analyses from these three studies, we discuss how these insights can inform the automatic generation of cognitively adequate explanations in future AI systems.
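To make the kind of problem concrete, the following is a minimal sketch of a production-planning linear program in the spirit of the Furniture Factory game. The products, resource limits, and profit values are hypothetical illustrations, not taken from the paper; the sketch only shows the general white-box LP formulation the abstract refers to, here solved with scipy.optimize.linprog.

```python
# Hypothetical example: a tiny furniture-production LP.
# All numbers below are illustrative, not from the Furniture Factory game itself.
from scipy.optimize import linprog

# Decision variables: x[0] = number of desks, x[1] = number of chairs.
# Maximize profit 30*desks + 20*chairs, i.e. minimize the negated objective.
c = [-30, -20]

# Resource constraints (A_ub @ x <= b_ub):
#   wood:  6 units per desk, 2 per chair, 60 units available
#   labor: 4 hours per desk, 3 per chair, 48 hours available
A_ub = [[6, 2],
        [4, 3]]
b_ub = [60, 48]

# Production quantities cannot be negative.
bounds = [(0, None), (0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("optimal plan (desks, chairs):", res.x)
print("maximal profit:", -res.fun)
```

The solver returns the optimal production mix and profit, but, as the abstract notes, nothing in this output explains why the plan is optimal in terms a lay decision-maker would use, such as which resource is the bottleneck or which trade-off drives the mix.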