{"title":"Extracting rules from neural networks as decision diagrams.","authors":"Jan Chorowski, Jacek M Zurada","doi":"10.1109/TNN.2011.2106163","DOIUrl":null,"url":null,"abstract":"<p><p>Rule extraction from neural networks (NNs) solves two fundamental problems: it gives insight into the logic behind the network and in many cases, it improves the network's ability to generalize the acquired knowledge. This paper presents a novel eclectic approach to rule extraction from NNs, named LOcal Rule Extraction (LORE), suited for multilayer perceptron networks with discrete (logical or categorical) inputs. The extracted rules mimic network behavior on the training set and relax this condition on the remaining input space. First, a multilayer perceptron network is trained under standard regime. It is then transformed into an equivalent form, returning the same numerical result as the original network, yet being able to produce rules generalizing the network output for cases similar to a given input. The partial rules extracted for every training set sample are then merged to form a decision diagram (DD) from which logic rules can be extracted. A rule format explicitly separating subsets of inputs for which an answer is known from those with an undetermined answer is presented. A special data structure, the decision diagram, allowing efficient partial rule merging is introduced. With regard to rules' complexity and generalization abilities, LORE gives results comparable to those reported previously. An algorithm transforming DDs into interpretable boolean expressions is described. Experimental running times of rule extraction are proportional to the network's training time.</p>","PeriodicalId":13434,"journal":{"name":"IEEE transactions on neural networks","volume":"22 12","pages":"2435-46"},"PeriodicalIF":0.0000,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TNN.2011.2106163","citationCount":"52","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TNN.2011.2106163","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2011/2/17 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 52
Abstract
Rule extraction from neural networks (NNs) solves two fundamental problems: it gives insight into the logic behind the network and, in many cases, it improves the network's ability to generalize the acquired knowledge. This paper presents a novel eclectic approach to rule extraction from NNs, named LOcal Rule Extraction (LORE), suited for multilayer perceptron networks with discrete (logical or categorical) inputs. The extracted rules mimic network behavior on the training set and relax this condition on the remaining input space. First, a multilayer perceptron network is trained under a standard regime. It is then transformed into an equivalent form that returns the same numerical result as the original network, yet is able to produce rules generalizing the network output to cases similar to a given input. The partial rules extracted for every training set sample are then merged to form a decision diagram (DD) from which logic rules can be extracted. A rule format is presented that explicitly separates the subsets of inputs for which an answer is known from those whose answer is undetermined. A special data structure, the decision diagram, which allows efficient merging of partial rules, is introduced. With regard to rule complexity and generalization ability, LORE gives results comparable to those reported previously. An algorithm transforming DDs into interpretable Boolean expressions is described. Experimental running times of rule extraction are proportional to the network's training time.
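To make the described pipeline concrete, below is a minimal, illustrative sketch of how partial rules over discrete inputs might be merged into a shared decision-diagram structure and then read back as conjunctive rules. All names here (DDNode, merge_rule, to_rules) are hypothetical and the handling of unconstrained variables and diagram reduction is simplified; the paper's actual data structures and the LORE algorithm are specified in the article itself, not here.

```python
# Hypothetical sketch (not the paper's implementation): merging partial rules
# over discrete inputs into a shared decision-diagram-like structure.

from dataclasses import dataclass, field
from typing import Dict, Iterator, Optional, Tuple

@dataclass
class DDNode:
    """A diagram node testing one discrete variable; a leaf carries a class
    label, or None for an undetermined answer."""
    var: Optional[str] = None                      # None for a leaf
    children: Dict[str, "DDNode"] = field(default_factory=dict)
    label: Optional[str] = None                    # set only at leaves

def merge_rule(root: DDNode, rule: Dict[str, str], label: str, order) -> None:
    """Insert one partial rule (a conjunction of variable=value tests) into the
    diagram, following a fixed variable order so rules with identical prefixes
    share nodes. Variables a rule leaves unconstrained are simply skipped here,
    a simplification of how a full decision diagram would handle them."""
    node = root
    for var in order:
        if var not in rule:
            continue
        if node.var is None:
            node.var = var
        node = node.children.setdefault(rule[var], DDNode())
    node.label = label                             # record the answer at the leaf

def to_rules(node: DDNode, path: Tuple = ()) -> Iterator[Tuple[dict, str]]:
    """Read conjunctive rules back off the diagram by depth-first traversal."""
    if node.label is not None:
        yield dict(path), node.label
    for value, child in node.children.items():
        yield from to_rules(child, path + ((node.var, value),))

if __name__ == "__main__":
    order = ["color", "shape"]                     # fixed variable order
    root = DDNode()
    merge_rule(root, {"color": "red", "shape": "round"}, "class_A", order)
    merge_rule(root, {"color": "red", "shape": "square"}, "class_B", order)
    for conditions, answer in to_rules(root):
        print(conditions, "->", answer)
```

The fixed variable order is what lets rules with identical prefixes share nodes, which is the general property that makes merging efficient in ordered decision diagrams.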