{"title":"神经网络的可解释逻辑概率逼近","authors":"Evgenii Vityaev , Alexey Korolev","doi":"10.1016/j.cogsys.2024.101301","DOIUrl":null,"url":null,"abstract":"<div><div>The paper proposes the approximation of DNNs by replacing each neuron by the corresponding logical-probabilistic neuron. Logical-probabilistic neurons learn their behavior based on the responses of initial neurons on incoming signals and discover all logical-probabilistic causal relationships between the input and output. These logical-probabilistic causal relationships are, in a certain sense, most precise – it was proved in the previous works that they are theoretically (when probability is known) can predict without contradictions. The resulting logical-probabilistic neurons are interconnected by the same connections as the initial neurons after replacing their signals on true/false. The resulting logical-probabilistic neural network produces its own predictions that approximate the predictions of the original DNN. Thus, we obtain an interpretable approximation of DNN, which also allows tracing of DNN by tracing its excitations through the causal relationships. This approximation of DNN is a Distillation method such as Model Translation, which train alternative smaller interpretable models that mimics the total input/output behavior of DNN. It is also locally interpretable and explains every particular prediction. It explains the sequences of logical probabilistic causal relationships that infer that prediction and also show all features that took part in this prediction with the statistical estimation of their significance. Experimental results on approximation accuracy of all intermedia neurons, output neurons and softmax output of DNN are presented, as well as the accuracy of obtained logical-probabilistic neural network. From the practical point of view, interpretable transformation of neural networks is very important for the hybrid artificial intelligent systems, where neural networks are integrated with the symbolic methods of AI. As a practical application we consider smart city.</div></div>","PeriodicalId":2,"journal":{"name":"ACS Applied Bio Materials","volume":null,"pages":null},"PeriodicalIF":4.6000,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Interpretable logical-probabilistic approximation of neural networks\",\"authors\":\"Evgenii Vityaev , Alexey Korolev\",\"doi\":\"10.1016/j.cogsys.2024.101301\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The paper proposes the approximation of DNNs by replacing each neuron by the corresponding logical-probabilistic neuron. Logical-probabilistic neurons learn their behavior based on the responses of initial neurons on incoming signals and discover all logical-probabilistic causal relationships between the input and output. These logical-probabilistic causal relationships are, in a certain sense, most precise – it was proved in the previous works that they are theoretically (when probability is known) can predict without contradictions. The resulting logical-probabilistic neurons are interconnected by the same connections as the initial neurons after replacing their signals on true/false. The resulting logical-probabilistic neural network produces its own predictions that approximate the predictions of the original DNN. 
Thus, we obtain an interpretable approximation of DNN, which also allows tracing of DNN by tracing its excitations through the causal relationships. This approximation of DNN is a Distillation method such as Model Translation, which train alternative smaller interpretable models that mimics the total input/output behavior of DNN. It is also locally interpretable and explains every particular prediction. It explains the sequences of logical probabilistic causal relationships that infer that prediction and also show all features that took part in this prediction with the statistical estimation of their significance. Experimental results on approximation accuracy of all intermedia neurons, output neurons and softmax output of DNN are presented, as well as the accuracy of obtained logical-probabilistic neural network. From the practical point of view, interpretable transformation of neural networks is very important for the hybrid artificial intelligent systems, where neural networks are integrated with the symbolic methods of AI. As a practical application we consider smart city.</div></div>\",\"PeriodicalId\":2,\"journal\":{\"name\":\"ACS Applied Bio Materials\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2024-10-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACS Applied Bio Materials\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1389041724000950\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MATERIALS SCIENCE, BIOMATERIALS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACS Applied Bio Materials","FirstCategoryId":"102","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1389041724000950","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATERIALS SCIENCE, BIOMATERIALS","Score":null,"Total":0}
Interpretable logical-probabilistic approximation of neural networks
The paper proposes an approximation of DNNs obtained by replacing each neuron with a corresponding logical-probabilistic neuron. Logical-probabilistic neurons learn their behavior from the responses of the original neurons to incoming signals and discover all logical-probabilistic causal relationships between the input and the output. These logical-probabilistic causal relationships are, in a certain sense, the most precise: previous work proved that, theoretically (when the probabilities are known), they predict without contradictions. The resulting logical-probabilistic neurons are interconnected by the same connections as the original neurons, once the neurons' signals are replaced with true/false values. The resulting logical-probabilistic neural network produces its own predictions, which approximate the predictions of the original DNN. Thus, we obtain an interpretable approximation of the DNN, which also allows tracing the DNN by following its excitations through the causal relationships. This approximation is a distillation method of the Model Translation type, which trains an alternative, smaller, interpretable model that mimics the total input/output behavior of the DNN. It is also locally interpretable and explains every particular prediction: it exhibits the sequences of logical-probabilistic causal relationships that infer the prediction and shows all features that took part in it, together with statistical estimates of their significance. Experimental results are presented on the approximation accuracy of all intermediate neurons, output neurons, and the softmax output of the DNN, as well as on the accuracy of the resulting logical-probabilistic neural network. From a practical point of view, interpretable transformations of neural networks are very important for hybrid artificial intelligence systems, in which neural networks are integrated with symbolic AI methods. As a practical application, we consider smart cities.
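To make the construction concrete, below is a minimal Python sketch, not the authors' implementation, of how a single logical-probabilistic neuron could be fitted: the original neuron's inputs and output are binarized to true/false, conjunctive rules with high conditional probability are mined from the recorded responses, and a prediction is explained by the rule that fired. All names (Rule, fit_lp_neuron, predict_lp) and the rule-selection heuristic are assumptions; the paper's "most precise" causal relationships additionally require each rule to improve the conditional probability of all of its sub-rules, a check omitted here for brevity.

```python
# Hypothetical sketch, not the authors' code: approximate one neuron's
# binarized behavior by conjunctive probabilistic rules.
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Rule:
    premise: tuple      # ((feature_index, required_value), ...)
    conclusion: int     # predicted binary response of the neuron
    confidence: float   # estimated P(conclusion | premise)

def matches(premise, x):
    return all(x[i] == v for i, v in premise)

def fit_lp_neuron(X_bin, y_bin, max_len=2, min_conf=0.9):
    """Mine rules 'premise -> conclusion' from the binarized responses
    (X_bin, y_bin) of the original neuron, keeping those whose estimated
    conditional probability is at least min_conf. (The paper additionally
    requires each rule to strictly improve on all of its sub-rules; that
    refinement check is omitted here.)"""
    n_features = len(X_bin[0])
    literals = [(i, v) for i in range(n_features) for v in (0, 1)]
    rules = []
    for k in range(1, max_len + 1):
        for premise in combinations(literals, k):
            # skip premises that mention the same feature twice
            if len({i for i, _ in premise}) < k:
                continue
            covered = [y for x, y in zip(X_bin, y_bin) if matches(premise, x)]
            if not covered:
                continue
            for target in (0, 1):
                conf = sum(1 for y in covered if y == target) / len(covered)
                if conf >= min_conf:
                    rules.append(Rule(premise, target, conf))
    return rules

def predict_lp(rules, x, default=0):
    """Predict with the applicable rule of highest conditional probability;
    the fired rule is returned as the local explanation."""
    applicable = [r for r in rules if matches(r.premise, x)]
    if not applicable:
        return default, None
    best = max(applicable, key=lambda r: r.confidence)
    return best.conclusion, best

# Toy usage: an AND-like neuron on two binarized inputs.
X = [(0, 0), (0, 1), (1, 0), (1, 1), (1, 1)]
y = [0, 0, 0, 1, 1]
rules = fit_lp_neuron(X, y)
pred, fired = predict_lp(rules, (1, 1))
print(pred, fired)  # 1, plus the rule that explains the prediction
```

Replacing every neuron of the DNN with such a rule-based unit, wired by the original connections over true/false signals, yields the interpretable network described above; a particular prediction can then be traced by following the chain of fired rules from layer to layer.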