Dipshikha Biswas, Suneel Nadipalli, B. Sneha, Deepa Gupta, J. Amudha
2022 OITS International Conference on Information Technology (OCIT), December 2022. DOI: 10.1109/OCIT56763.2022.00061
Natural Question Generation using Transformers and Reinforcement Learning
Natural Question Generation (NQG) is among the most popular open research problems in Natural Language Processing (NLP), alongside Neural Machine Translation, Open-Domain Chatbots, etc. Among the many approaches taken to solve this problem, neural networks are considered the benchmark in this research area. This paper adopts a generator-evaluator framework within a neural network architecture to place additional focus on the context of the content used for framing a question. The generator uses transformer architectures such as T5 to generate a question given a context, while the evaluator uses Reinforcement Learning (RL) to check the correctness of the generated question. The involvement of RL improved the results (as shown in Table 2) and increased computational efficiency, as the training is coupled with the RL policy. This formulation turns the problem into a reinforcement learning task and allows a wide range of questions to be generated for the same context-answer pair. The proposed algorithm is tested on the SQuAD benchmark dataset with BLEU score as the evaluation metric.
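The kind of reward signal the evaluator relies on can be illustrated with a simplified sentence-level BLEU computation. The sketch below is a minimal, hypothetical example in plain Python (whitespace tokenization, add-one smoothing); it is not the paper's implementation, which couples such a score with the RL policy during training.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of clipped
    n-gram precisions (n = 1..max_n, add-one smoothed) times a
    brevity penalty. Illustrative only, not the paper's exact code."""
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        clipped = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        # add-one smoothing avoids log(0) for short or disjoint sentences
        log_precisions.append(math.log((clipped + 1) / (total + 1)))
    brevity = min(1.0, math.exp(1 - len(ref) / max(len(cand), 1)))
    return brevity * math.exp(sum(log_precisions) / max_n)

# A generated question scored against a reference question;
# the evaluator could use this value directly as the RL reward.
reward = bleu("what is the capital of france",
              "what is the capital city of france")
```

In a generator-evaluator loop of this shape, a higher BLEU reward for a generated question would reinforce the generation policy that produced it, which is what lets the same context-answer pair yield a variety of acceptable questions.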