Interpretable logical-probabilistic approximation of neural networks

Evgenii Vityaev, Alexey Korolev
{"title":"Interpretable logical-probabilistic approximation of neural networks","authors":"Evgenii Vityaev ,&nbsp;Alexey Korolev","doi":"10.1016/j.cogsys.2024.101301","DOIUrl":null,"url":null,"abstract":"<div><div>The paper proposes the approximation of DNNs by replacing each neuron by the corresponding logical-probabilistic neuron. Logical-probabilistic neurons learn their behavior based on the responses of initial neurons on incoming signals and discover all logical-probabilistic causal relationships between the input and output. These logical-probabilistic causal relationships are, in a certain sense, most precise – it was proved in the previous works that they are theoretically (when probability is known) can predict without contradictions. The resulting logical-probabilistic neurons are interconnected by the same connections as the initial neurons after replacing their signals on true/false. The resulting logical-probabilistic neural network produces its own predictions that approximate the predictions of the original DNN. Thus, we obtain an interpretable approximation of DNN, which also allows tracing of DNN by tracing its excitations through the causal relationships. This approximation of DNN is a Distillation method such as Model Translation, which train alternative smaller interpretable models that mimics the total input/output behavior of DNN. It is also locally interpretable and explains every particular prediction. It explains the sequences of logical probabilistic causal relationships that infer that prediction and also show all features that took part in this prediction with the statistical estimation of their significance. Experimental results on approximation accuracy of all intermedia neurons, output neurons and softmax output of DNN are presented, as well as the accuracy of obtained logical-probabilistic neural network. From the practical point of view, interpretable transformation of neural networks is very important for the hybrid artificial intelligent systems, where neural networks are integrated with the symbolic methods of AI. As a practical application we consider smart city.</div></div>","PeriodicalId":2,"journal":{"name":"ACS Applied Bio Materials","volume":null,"pages":null},"PeriodicalIF":4.6000,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACS Applied Bio Materials","FirstCategoryId":"102","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1389041724000950","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATERIALS SCIENCE, BIOMATERIALS","Score":null,"Total":0}

Abstract

The paper proposes the approximation of DNNs by replacing each neuron with a corresponding logical-probabilistic neuron. Logical-probabilistic neurons learn their behavior from the responses of the initial neurons to incoming signals and discover all logical-probabilistic causal relationships between their inputs and outputs. These logical-probabilistic causal relationships are, in a certain sense, the most precise: it was proved in previous works that, theoretically (when the probabilities are known), they can predict without contradictions. The resulting logical-probabilistic neurons are interconnected by the same connections as the initial neurons, after their signals are replaced with true/false values. The resulting logical-probabilistic neural network produces its own predictions, which approximate the predictions of the original DNN. Thus, we obtain an interpretable approximation of the DNN, which also allows tracing the DNN by following its excitations through the causal relationships. This approximation of the DNN is a distillation method of the Model Translation type, which trains an alternative, smaller interpretable model that mimics the total input/output behavior of the DNN. It is also locally interpretable and explains every particular prediction: it exhibits the sequences of logical-probabilistic causal relationships that infer the prediction and shows all features that took part in it, together with statistical estimates of their significance. Experimental results on the approximation accuracy of all intermediate neurons, output neurons, and the softmax output of the DNN are presented, as well as the accuracy of the obtained logical-probabilistic neural network. From a practical point of view, the interpretable transformation of neural networks is very important for hybrid artificial intelligence systems, where neural networks are integrated with symbolic AI methods. As a practical application, we consider smart city scenarios.
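The abstract describes the per-neuron construction only in words. The Python sketch below illustrates one plausible reading of it: record a neuron's input and output activations on sample data, binarize them into true/false signals, and search for conjunctive premises that predict the output with high conditional probability. The median thresholding, the exhaustive search over short conjunctions, and the names binarize and mine_rules are illustrative assumptions, not the paper's actual rule-discovery procedure.

```python
import itertools
import numpy as np

# Illustrative sketch (not the paper's exact algorithm): approximate one neuron
# of a trained DNN by probabilistic if-then rules over binarized signals.

def binarize(values, thresholds=None):
    """Turn real-valued activations into True/False signals (median split here)."""
    values = np.asarray(values)
    if thresholds is None:
        thresholds = np.median(values, axis=0)
    return values > thresholds

def mine_rules(inputs_bin, output_bin, max_len=2, min_conf=0.9, min_support=10):
    """Search short conjunctions of input literals that predict the output
    with high conditional probability (a stand-in for causal relationships)."""
    n = inputs_bin.shape[1]
    literals = [(j, v) for j in range(n) for v in (True, False)]
    rules = []
    for length in range(1, max_len + 1):
        for premise in itertools.combinations(literals, length):
            if len({j for j, _ in premise}) < length:    # skip contradictory premises
                continue
            mask = np.ones(len(inputs_bin), dtype=bool)
            for j, v in premise:
                mask &= inputs_bin[:, j] == v
            support = int(mask.sum())
            if support < min_support:
                continue
            conf = float(output_bin[mask].mean())        # estimate of P(output | premise)
            if conf >= min_conf:
                rules.append((premise, conf, support))
    return rules

# Usage: X holds the neuron's input activations, y its output, both recorded
# from the original network on sample data (synthetic stand-ins here).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y_bin = X[:, 0] + X[:, 1] > 0            # stand-in for the neuron firing
X_bin = binarize(X)
for premise, conf, support in mine_rules(X_bin, y_bin):
    print(premise, f"conf={conf:.2f}", f"support={support}")
```

In the full construction described in the abstract, the logical-probabilistic neurons obtained for each original neuron are wired together with the same connections as in the original network, so predictions can be traced end to end through the discovered causal relationships.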