{"title":"Inserting rules into recurrent neural networks","authors":"Colin Giles, C. Omlin","doi":"10.1109/NNSP.1992.253712","DOIUrl":null,"url":null,"abstract":"The authors present a method that incorporates a priori knowledge in the training of recurrent neural networks. This a priori knowledge can be interpreted as hints about the problem to be learned and these hints are encoded as rules which are then inserted into the neural network. The authors demonstrate the approach by training recurrent neural networks with inserted rules to learn to recognize regular languages from grammatical string examples. Because the recurrent networks have second-order connections, rule-insertion is a straightforward mapping of rules into weights and neurons. Simulations show that training recurrent networks with different amounts of partial knowledge to recognize simple grammers improves the training times by orders of magnitude, even when only a small fraction of all transitions are inserted as rules. In addition, there appears to be no loss in generalization performance.<<ETX>>","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"129 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"41","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NNSP.1992.253712","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 41
Abstract
The authors present a method that incorporates a priori knowledge into the training of recurrent neural networks. This a priori knowledge can be interpreted as hints about the problem to be learned, and these hints are encoded as rules which are then inserted into the neural network. The authors demonstrate the approach by training recurrent neural networks with inserted rules to learn to recognize regular languages from grammatical string examples. Because the recurrent networks have second-order connections, rule insertion is a straightforward mapping of rules into weights and neurons. Simulations show that training recurrent networks with different amounts of partial knowledge to recognize simple grammars improves training times by orders of magnitude, even when only a small fraction of all transitions are inserted as rules. In addition, there appears to be no loss in generalization performance.
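To make the idea of mapping rules into weights and neurons concrete, the following is a minimal sketch (not the paper's exact construction): a second-order recurrent network whose next state depends on products of state and input activations, plus an illustrative rule-insertion step that encodes a known automaton transition by biasing the corresponding second-order weight. The hint strength `H`, the one-neuron-per-state encoding, and all numeric settings are assumptions made here for illustration.

```python
import numpy as np

# Sketch of a second-order recurrent network with state update
#   S_j(t+1) = sigmoid( sum_{i,k} W[j, i, k] * S_i(t) * I_k(t) + b_j )
# and a rule-insertion step that encodes a DFA-style transition
# delta(state_src, symbol) = state_dst as a weight hint.
# H and the initialization scale are illustrative assumptions.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SecondOrderRNN:
    def __init__(self, n_states, n_symbols, rng=None):
        rng = rng or np.random.default_rng(0)
        # Small random initial weights; rule hints are added on top of these.
        self.W = rng.normal(scale=0.1, size=(n_states, n_states, n_symbols))
        self.b = np.zeros(n_states)

    def insert_rule(self, src, symbol, dst, H=4.0):
        """Encode the transition delta(src, symbol) = dst into the weights."""
        self.W[:, src, symbol] -= H        # discourage all other target states
        self.W[dst, src, symbol] += 2 * H  # strongly favour the rule's target

    def step(self, state, symbol_onehot):
        # Second-order update: weights multiply state-input products.
        pre = np.einsum('jik,i,k->j', self.W, state, symbol_onehot) + self.b
        return sigmoid(pre)

# Usage: insert the rule delta(q0, 'a') = q1, then run one step from q0.
net = SecondOrderRNN(n_states=3, n_symbols=2)
net.insert_rule(src=0, symbol=0, dst=1)
state = np.array([1.0, 0.0, 0.0])   # start in q0 (one neuron per state)
a = np.array([1.0, 0.0])            # one-hot code for input symbol 'a'
print(net.step(state, a))           # neuron 1 should now dominate
```

Under these assumptions, partial knowledge corresponds to calling `insert_rule` only for the transitions that are known in advance; the remaining weights stay near their random initialization and are learned from the string examples.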