Natural Question Generation using Transformers and Reinforcement Learning

Dipshikha Biswas, Suneel Nadipalli, B. Sneha, Deepa Gupta, J. Amudha
DOI: 10.1109/OCIT56763.2022.00061
Published in: 2022 OITS International Conference on Information Technology (OCIT)
Publication date: 2022-12-01
Citations: 0

Abstract

Natural Question Generation (NQG) is among the most popular open research problems in Natural Language Processing (NLP), alongside Neural Machine Translation, Open-Domain Chatbots, and others. Among the many approaches taken to solve this problem, neural networks are considered the benchmark in this research area. This paper adopts a generator-evaluator framework in a neural network architecture to allow additional focus on the context of the content used for framing a question. The generator uses transformer architectures such as T5 to generate a question from a given context, while the evaluator uses Reinforcement Learning (RL) to check the correctness of the generated question. The involvement of RL improved the results (as shown in Table 2) and increased computational efficiency, as the training is coupled with the RL policy. This framing turns the problem into a reinforcement learning task and allows a wide range of questions to be generated for the same context-answer pair. The algorithm is tested on the benchmark dataset SQuAD, with BLEU score as the evaluation metric.
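The generator-evaluator loop described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the generator (a T5 transformer in the paper) is left out, and the evaluator is reduced to a simplified BLEU-style n-gram reward that weights a REINFORCE-style update. All function names here are hypothetical.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu_reward(candidate, reference, max_n=2):
    """Simplified BLEU: geometric mean of clipped n-gram precisions
    times a brevity penalty. Illustrative stand-in for the paper's
    BLEU-based evaluation, not a full BLEU implementation."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        c_counts = Counter(ngrams(cand, n))
        r_counts = Counter(ngrams(ref, n))
        overlap = sum(min(c, r_counts[g]) for g, c in c_counts.items())
        total = max(sum(c_counts.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # avoid log(0)
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

def reinforce_loss(log_prob, reward, baseline=0.0):
    """REINFORCE objective: raise the log-probability of sampled
    questions whose reward beats the baseline, lower the rest."""
    return -(reward - baseline) * log_prob

reference = "what is the capital of france"
good = "what is the capital of france"   # question sampled from the generator
bad = "name a city in europe"

r_good = bleu_reward(good, reference)
r_bad = bleu_reward(bad, reference)
# A higher-reward sample contributes a stronger positive update.
loss = reinforce_loss(log_prob=-2.0, reward=r_good)
```

In the paper's setup the reward would come from the evaluator scoring a T5-generated question, and `log_prob` would be the generator's log-likelihood of that sample, so the policy gradient favors questions the evaluator judges correct.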