Reinforcement learning multi-hop reasoning method with GAN network

Zhicai Gao, Xiaoze Gong, Yongli Wang
DOI: 10.1117/12.2671176
Venue: International Conference on Artificial Intelligence and Computer Engineering (ICAICE 2022)
Published: 2023-04-28
Citations: 0

Abstract

The academic community has carried out research on knowledge reasoning with Reinforcement Learning (RL) and achieved good results in multi-hop reasoning. However, these methods often require a manually designed reward function tailored to a specific dataset: for each new dataset, the reward function must be re-tuned by hand to obtain good performance. To solve this problem, an agent training model combined with Generative Adversarial Networks (GAN) is proposed. The model consists of two modules: a generative adversarial inference engine and a sampler. The sampler uses a policy-based bidirectional breadth-first search method to find demonstration paths, and the agent uses a reward that incorporates information from neighborhood entities as its initial reward function. After sufficient adversarial training between the agent and the discriminator, the policy-based agent can find evidence paths that match the demonstration distribution and synthesize these evidence paths to make predictions. Experiments show that the model achieves better results in both fact prediction and link prediction tasks.
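The abstract describes the sampler only at a high level. As a rough illustration of the bidirectional breadth-first search it names, the sketch below expands frontiers from both the head and tail entity of a triple until they meet, then reconstructs one demonstration path. The toy graph, entity names, and the convention that inverse relations are materialized as ordinary edges are all assumptions for illustration, not details from the paper.

```python
from collections import deque


def _expand(graph, frontier, parents, other_parents):
    """Expand one BFS layer; return the meeting entity if the frontiers touch."""
    for _ in range(len(frontier)):
        node = frontier.popleft()
        for relation, neighbour in graph.get(node, []):
            if neighbour in parents:
                continue  # already reached from this side
            parents[neighbour] = (node, relation)
            if neighbour in other_parents:
                return neighbour  # the two searches have met
            frontier.append(neighbour)
    return None


def _trace(node, parents):
    """Walk back to this search's root, collecting (relation, entity) steps."""
    steps = []
    while parents[node] is not None:
        pred, rel = parents[node]
        steps.append((rel, node))
        node = pred
    steps.append((None, node))  # the root itself, with no incoming relation
    return steps


def find_demonstration_path(graph, source, target):
    """Return an alternating entity/relation path source -> ... -> target, or None.

    `graph` maps an entity to a list of (relation, neighbour) edges; inverse
    relations are assumed to be present as ordinary edges.
    """
    if source == target:
        return [source]
    fwd, bwd = {source: None}, {target: None}
    fq, bq = deque([source]), deque([target])
    while fq and bq:
        # Expand the smaller frontier first to keep the two searches balanced.
        if len(fq) <= len(bq):
            meet = _expand(graph, fq, fwd, bwd)
        else:
            meet = _expand(graph, bq, bwd, fwd)
        if meet is not None:
            path = []
            for rel, node in reversed(_trace(meet, fwd)):  # source ... meet
                if rel is not None:
                    path.append(rel)
                path.append(node)
            bwd_steps = _trace(meet, bwd)  # meet ... target
            for i in range(len(bwd_steps) - 1):
                path.append(bwd_steps[i][0])
                path.append(bwd_steps[i + 1][1])
            return path
    return None
```

Expanding the smaller frontier first is a standard bidirectional-search heuristic; the paper may well use a learned policy to rank expansions instead of this uniform order.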
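The adversarial reward is likewise only sketched in the abstract. One minimal, hypothetical GAIL-style reading is: a discriminator is trained to separate demonstration paths from agent-generated paths, and the agent is rewarded for paths the discriminator mistakes for demonstrations. The logistic discriminator and the bag-of-relations path features below are invented for illustration; the paper's actual discriminator architecture is not specified here.

```python
import math


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


def featurise(path, relations):
    """Bag-of-relations feature vector for a path (a hypothetical choice)."""
    return [path.count(r) for r in relations]


def d_score(weights, feats):
    """Discriminator's probability that a path is a demonstration."""
    return sigmoid(sum(w * f for w, f in zip(weights, feats)))


def train_discriminator(weights, demos, agent_paths, relations, lr=0.1):
    """One logistic-regression pass: push demonstration paths toward label 1
    and agent-generated paths toward label 0."""
    examples = [(p, 1.0) for p in demos] + [(p, 0.0) for p in agent_paths]
    for path, label in examples:
        feats = featurise(path, relations)
        err = label - d_score(weights, feats)
        for i, f in enumerate(feats):
            weights[i] += lr * err * f
    return weights


def adversarial_reward(weights, path, relations, eps=1e-8):
    """GAIL-style reward: high when the discriminator mistakes the agent's
    path for a demonstration (score near 1)."""
    s = d_score(weights, featurise(path, relations))
    return -math.log(max(1.0 - s, eps))
```

In a full training loop this reward would replace the hand-tuned reward after the agent's initial, neighborhood-based warm-up phase, and the discriminator and policy would be updated in alternation.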