Combining Neural Networks with Logic Rules
Author: Lujiang Zhang
Journal: International Journal of Computational Intelligence and Applications
DOI: 10.1142/s1469026823500153
Publication date: 2023-03-27 (Journal Article)
Impact factor: 0.8; JCR: Q4, Computer Science, Artificial Intelligence

Abstract:
How to utilize symbolic knowledge in deep learning is an important problem. Deep neural networks are flexible and powerful, while symbolic knowledge has the virtues of interpretability and intuitiveness. It is therefore desirable to combine the two by injecting symbolic knowledge into neural networks. We propose a novel approach to combining neural networks with logic rules. In this approach, task-specific supervised learning and policy-based reinforcement learning are performed alternately until the neural model converges. The basic idea is to use supervised learning to train a deep model, and to use reinforcement learning to steer the model toward satisfying the logic rules. During policy-gradient reinforcement learning, the deep model receives a positive reward if a predicted output satisfies all of the logic rules, and a negative reward otherwise. By maximizing the expected reward, the deep model is gradually adjusted to meet the logical constraints. We conduct experiments on named entity recognition tasks, and the results demonstrate the effectiveness of our method.
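The reward scheme described in the abstract can be sketched with a minimal REINFORCE-style loop. This is an illustrative toy, not the paper's implementation: the linear softmax tagger, the tag set, and the single transition rule ("tag 2 may only follow tag 1 or 2", mimicking a BIO constraint in named entity recognition) are all assumptions. Sampled taggings that satisfy the rule earn reward +1, violations earn -1, and the policy gradient nudges the model toward rule-consistent outputs; the paper alternates this with ordinary supervised training, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (shapes and names are illustrative): a linear softmax
# "tagger" assigns one of 3 tags to each of 4 tokens with fixed features.
NUM_TOKENS, NUM_TAGS, DIM = 4, 3, 5
W = rng.normal(scale=0.1, size=(DIM, NUM_TAGS))
X = rng.normal(size=(NUM_TOKENS, DIM))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def satisfies_rules(tags):
    # Hypothetical logic rule: tag 2 ("I-ENT") may only follow
    # tag 1 ("B-ENT") or tag 2 ("I-ENT"), as in BIO tagging.
    for prev, cur in zip(tags, tags[1:]):
        if cur == 2 and prev not in (1, 2):
            return False
    return True

def reinforce_step(W, lr=0.5):
    probs = softmax(X @ W)                        # (tokens, tags)
    tags = [rng.choice(NUM_TAGS, p=p) for p in probs]
    # Rule-based reward: +1 if the sampled tagging meets all rules.
    reward = 1.0 if satisfies_rules(tags) else -1.0
    # Gradient of sum_t log p(tag_t) for a softmax: (one_hot - probs).
    grad = np.zeros_like(W)
    for t, tag in enumerate(tags):
        one_hot = np.eye(NUM_TAGS)[tag]
        grad += np.outer(X[t], one_hot - probs[t])
    # Ascend the expected reward.
    return W + lr * reward * grad

for _ in range(300):
    W = reinforce_step(W)

# After training, greedy decoding should usually satisfy the rule.
greedy = list(softmax(X @ W).argmax(axis=-1))
print(greedy, satisfies_rules(greedy))
```

In a full system the reward would be computed over the model's actual task outputs, and these policy-gradient updates would be interleaved with supervised cross-entropy updates on labeled data, as the abstract describes.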
Journal description:
The International Journal of Computational Intelligence and Applications (IJCIA) is a refereed journal dedicated to the theory and applications of computational intelligence (artificial neural networks, fuzzy systems, evolutionary computation, and hybrid systems). The main goal of this journal is to provide the scientific community and industry with a vehicle for discussing ideas that combine two or more conventional and computational intelligence-based techniques. The IJCIA welcomes original work in areas such as neural networks, fuzzy logic, evolutionary computation, pattern recognition, hybrid intelligent systems, symbolic machine learning, statistical models, and image/audio/video compression and retrieval.