A METHOD FOR EVALUATING EXPLANATIONS IN AN ARTIFICIAL INTELLIGENCE SYSTEM USING POSSIBILITY THEORY

S. Chalyi, V. Leshchynskyi
{"title":"A METHOD FOR EVALUATING EXPLANATIONS IN AN ARTIFICIAL INTELLIGENCE SYSTEM USING POSSIBILITY THEORY","authors":"S. Chalyi, V. Leshchynskyi","doi":"10.20998/2079-0023.2023.02.14","DOIUrl":null,"url":null,"abstract":"The subject of the research is the process of generating explanations for the decision of an artificial intelligence system. Explanations are used to help the user understand the process of reaching the result and to be able to use an intelligent information system more effectively to make practical decisions for him or her. The purpose of this paper is to develop a method for evaluating explanations taking into account differences in input data and the corresponding decision of an artificial intelligence system. The solution of this problem makes it possible to evaluate the relevance of the explanation for the internal decision-making mechanism in an intelligent information system, regardless of the user's level of knowledge about the peculiarities of making and using such a decision. To achieve this goal, the following tasks are solved: structuring the evaluation of explanations depending on their level of detail, taking into account their compliance with the decision-making process in an intelligent system and the level of perception of the user of such a system; developing a method for evaluating explanations based on their compliance with the decision-making process in an intelligent system. Conclusions. The article structures the evaluation of explanations according to their level of detail. The levels of associative dependencies, precedents, causal dependencies and interactive dependencies are identified, which determine different levels of detail of explanations. It is shown that the associative and causal levels of detail of explanations can be assessed using numerical, probabilistic, or possibilistic indicators. The precedent and interactive levels require a subjective assessment based on a survey of users of the artificial intelligence system. The article develops a method for the possible assessment of the relevance of explanations for the decision-making process in an intelligent system, taking into account the dependencies between the input data and the decision of the intelligent system. The method includes the stages of assessing the sensitivity, correctness and complexity of the explanation based on a comparison of the values and quantity of the input data used in the explanation. The method makes it possible to comprehensively evaluate the explanation in terms of resistance to insignificant changes in the input data, relevance of the explanation to the result obtained, and complexity of the explanation calculation. In terms of practical application, the method makes it possible to minimize the number of input variables for the explanation while satisfying the sensitivity constraint of the explanation, which creates conditions for more efficient formation of the interpretation based on the use of a subset of key input variables that have a significant impact on the decision obtained by the intelligent system.","PeriodicalId":391969,"journal":{"name":"Bulletin of National Technical University \"KhPI\". 
Series: System Analysis, Control and Information Technologies","volume":" 1287","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Bulletin of National Technical University \"KhPI\". Series: System Analysis, Control and Information Technologies","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.20998/2079-0023.2023.02.14","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The subject of this research is the process of generating explanations for the decisions of an artificial intelligence system. Explanations help the user understand how a result was reached and use an intelligent information system more effectively when making practical decisions. The purpose of this paper is to develop a method for evaluating explanations that takes into account differences in the input data and the corresponding decision of an artificial intelligence system. Solving this problem makes it possible to evaluate the relevance of an explanation to the internal decision-making mechanism of an intelligent information system, regardless of the user's level of knowledge about how such a decision is made and used. To achieve this goal, the following tasks are solved: structuring the evaluation of explanations by their level of detail, taking into account their correspondence to the decision-making process in an intelligent system and the level of perception of the system's user; and developing a method for evaluating explanations based on their correspondence to the decision-making process in an intelligent system. Conclusions. The article structures the evaluation of explanations according to their level of detail. Four levels are identified: associative dependencies, precedents, causal dependencies, and interactive dependencies; these determine different levels of detail of explanations. It is shown that the associative and causal levels of detail can be assessed using numerical, probabilistic, or possibilistic indicators, whereas the precedent and interactive levels require a subjective assessment based on a survey of the users of the artificial intelligence system. The article develops a method for the possibilistic assessment of the relevance of explanations to the decision-making process in an intelligent system, taking into account the dependencies between the input data and the system's decision. The method comprises stages that assess the sensitivity, correctness, and complexity of an explanation by comparing the values and the number of input data items used in the explanation. The method makes it possible to evaluate an explanation comprehensively in terms of its resistance to insignificant changes in the input data, its relevance to the result obtained, and the complexity of computing it. In practical terms, the method minimizes the number of input variables used in an explanation while satisfying the explanation's sensitivity constraint, which creates the conditions for forming interpretations more efficiently from a subset of key input variables that have a significant impact on the decision obtained by the intelligent system.
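To make the three evaluation stages named in the abstract concrete, the following Python sketch illustrates one plausible reading of them. It is a minimal, hypothetical illustration, not the authors' published algorithm: all function names, the toy decision model, the ±5% perturbation size, and the 0.9 sensitivity threshold are assumptions introduced here. The `possibility` and `necessity` helpers implement the standard possibility-theory measures that the abstract's "possibilistic indicators" presumably refer to.

```python
import random
from typing import Callable, Dict, Iterable, List

Input = Dict[str, float]
Model = Callable[[Input], int]

# --- Possibilistic indicators (standard possibility-theory definitions) ---

def possibility(pi: Dict[str, float], event: Iterable[str]) -> float:
    """Possibility of an event = max of the possibility distribution pi
    over the event's elements."""
    return max(pi[e] for e in event)

def necessity(pi: Dict[str, float], event: Iterable[str]) -> float:
    """Necessity of an event = 1 - possibility of its complement."""
    rest = [e for e in pi if e not in set(event)]
    return 1.0 - (max(pi[e] for e in rest) if rest else 0.0)

# --- The three evaluation stages named in the abstract (assumed reading) ---

def sensitivity(model: Model, vars_: List[str], x: Input,
                epsilon: float = 0.05, trials: int = 50) -> float:
    """Share of small (±epsilon) perturbations of the explanation's
    variables that leave the decision unchanged: resistance of the
    explanation to insignificant changes in the input data."""
    baseline = model(x)
    stable = sum(
        model({k: v * (1 + random.uniform(-epsilon, epsilon))
                  if k in vars_ else v
               for k, v in x.items()}) == baseline
        for _ in range(trials))
    return stable / trials

def correctness(model: Model, vars_: List[str], x: Input,
                default: float = 0.0) -> float:
    """1.0 if the explanation's variables alone reproduce the decision,
    i.e. the explanation is relevant to the result obtained."""
    reduced = {k: v if k in vars_ else default for k, v in x.items()}
    return 1.0 if model(reduced) == model(x) else 0.0

def complexity(vars_: List[str], x: Input) -> float:
    """Fraction of input variables the explanation uses; lower is simpler."""
    return len(vars_) / len(x)

# --- Minimize the explanation subject to the sensitivity constraint ---

def minimize_explanation(model: Model, x: Input,
                         min_sensitivity: float = 0.9) -> List[str]:
    """Greedily drop input variables while correctness holds and the
    sensitivity constraint stays satisfied, leaving a subset of key
    variables with significant impact on the decision."""
    vars_ = list(x)
    for v in list(x):
        candidate = [u for u in vars_ if u != v]
        if candidate and correctness(model, candidate, x) == 1.0 \
                and sensitivity(model, candidate, x) >= min_sensitivity:
            vars_ = candidate
    return vars_

if __name__ == "__main__":
    # Toy decision rule: only two of the four inputs actually matter.
    def model(x: Input) -> int:
        return int(x["age"] > 30 and x["income"] > 50.0)

    x = {"age": 42.0, "income": 80.0, "height": 1.8, "zip_code": 7.0}
    key = minimize_explanation(model, x)
    print("key variables:", key)                         # ['age', 'income']
    print("sensitivity :", sensitivity(model, key, x))   # 1.0 on this toy
    print("correctness :", correctness(model, key, x))   # 1.0
    print("complexity  :", complexity(key, x))           # 0.5
```

On this toy model the greedy loop discards `height` and `zip_code` because the decision is reproduced and remains stable without them, leaving the two key variables: a small-scale instance of the abstract's claim that the method minimizes input variables while satisfying the sensitivity constraint.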