Improving Multi-Document Summarization with GRU-BERT Network

Ehtesham Sana, N. Akhtar
{"title":"用GRU-BERT网络改进多文档摘要","authors":"Ehtesham Sana, N. Akhtar","doi":"10.1109/REEDCON57544.2023.10151372","DOIUrl":null,"url":null,"abstract":"Multi-document summarization has been a challenging task due to the difficulties in capturing essential information from multiple sources and generating coherent and non-redundant summaries. In this proposed model, we address these challenges by leveraging the power of two popular natural language processing techniques, Bidirectional Encoder Representations from Transformers (BERT) and Gated Recurrent Unit (GRU). The Document Understanding Conference (DUC) dataset, a widely recognized benchmark dataset for multi-document summarization, was used to train and evaluate the model. By using BERT to generate contextual embeddings and GRU to capture sequence information, the proposed method outperforms previous methods in terms of summarization quality metrics such as ROUGE (RecallOriented Understudy for Gisting Evaluation). The proposed model has significant potential for use in various applications, such as news summarization, document summarization, and automated content creation. This study demonstrates that combining BERT and GRU models can effectively capture the contextual and sequential information in multi-document summarization, leading to high-quality summaries that overcome the limitations of previous methods.","PeriodicalId":429116,"journal":{"name":"2023 International Conference on Recent Advances in Electrical, Electronics & Digital Healthcare Technologies (REEDCON)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Improving Multi-Document Summarization with GRU-BERT Network\",\"authors\":\"Ehtesham Sana, N. Akhtar\",\"doi\":\"10.1109/REEDCON57544.2023.10151372\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Multi-document summarization has been a challenging task due to the difficulties in capturing essential information from multiple sources and generating coherent and non-redundant summaries. In this proposed model, we address these challenges by leveraging the power of two popular natural language processing techniques, Bidirectional Encoder Representations from Transformers (BERT) and Gated Recurrent Unit (GRU). The Document Understanding Conference (DUC) dataset, a widely recognized benchmark dataset for multi-document summarization, was used to train and evaluate the model. By using BERT to generate contextual embeddings and GRU to capture sequence information, the proposed method outperforms previous methods in terms of summarization quality metrics such as ROUGE (RecallOriented Understudy for Gisting Evaluation). The proposed model has significant potential for use in various applications, such as news summarization, document summarization, and automated content creation. 
This study demonstrates that combining BERT and GRU models can effectively capture the contextual and sequential information in multi-document summarization, leading to high-quality summaries that overcome the limitations of previous methods.\",\"PeriodicalId\":429116,\"journal\":{\"name\":\"2023 International Conference on Recent Advances in Electrical, Electronics & Digital Healthcare Technologies (REEDCON)\",\"volume\":\"24 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 International Conference on Recent Advances in Electrical, Electronics & Digital Healthcare Technologies (REEDCON)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/REEDCON57544.2023.10151372\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 International Conference on Recent Advances in Electrical, Electronics & Digital Healthcare Technologies (REEDCON)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/REEDCON57544.2023.10151372","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Multi-document summarization has been a challenging task due to the difficulties in capturing essential information from multiple sources and generating coherent, non-redundant summaries. The proposed model addresses these challenges by leveraging two popular natural language processing techniques: Bidirectional Encoder Representations from Transformers (BERT) and the Gated Recurrent Unit (GRU). The Document Understanding Conference (DUC) dataset, a widely recognized benchmark for multi-document summarization, was used to train and evaluate the model. By using BERT to generate contextual embeddings and a GRU to capture sequence information, the proposed method outperforms previous methods on summarization quality metrics such as ROUGE (Recall-Oriented Understudy for Gisting Evaluation). The proposed model has significant potential for applications such as news summarization, document summarization, and automated content creation. This study demonstrates that combining BERT and GRU models can effectively capture the contextual and sequential information in multi-document summarization, leading to high-quality summaries that overcome the limitations of previous methods.
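
The abstract describes the pipeline only at a high level: BERT produces a contextual embedding for each sentence, a GRU models the order of sentences across the pooled documents, and salient sentences are selected for the summary. The sketch below illustrates one way such a GRU-over-BERT extractive scorer could be wired up; the model name, the frozen BERT encoder, the hidden size, and the sigmoid-threshold selection step are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: a GRU layered over BERT sentence embeddings for
# extractive multi-document summarization. Names and hyperparameters are
# assumptions for demonstration, not the authors' implementation.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class GruBertScorer(nn.Module):
    def __init__(self, bert_name="bert-base-uncased", gru_hidden=256):
        super().__init__()
        self.tokenizer = BertTokenizer.from_pretrained(bert_name)
        self.bert = BertModel.from_pretrained(bert_name)
        for p in self.bert.parameters():           # freeze BERT (assumption)
            p.requires_grad = False
        self.gru = nn.GRU(
            input_size=self.bert.config.hidden_size,
            hidden_size=gru_hidden,
            batch_first=True,
            bidirectional=True,
        )
        self.score = nn.Linear(2 * gru_hidden, 1)  # salience score per sentence

    def forward(self, sentences):
        # 1) BERT: one contextual embedding ([CLS] vector) per sentence.
        enc = self.tokenizer(sentences, padding=True, truncation=True,
                             max_length=128, return_tensors="pt")
        with torch.no_grad():
            cls = self.bert(**enc).last_hidden_state[:, 0, :]    # (n_sent, 768)
        # 2) GRU: model cross-sentence order over the concatenated documents.
        seq_out, _ = self.gru(cls.unsqueeze(0))                  # (1, n_sent, 2H)
        # 3) Linear head: salience logits; top-scored sentences form the summary.
        return self.score(seq_out).squeeze(-1).squeeze(0)        # (n_sent,)

sentences = ["First sentence from document A.", "Second sentence from document B."]
logits = GruBertScorer()(sentences)
summary = [s for s, l in zip(sentences, logits) if torch.sigmoid(l) > 0.5]
```

Summaries produced this way would then be compared against the DUC reference summaries with ROUGE. A minimal check using the rouge-score package (an assumption; the paper does not name a specific implementation, and the strings here are placeholders rather than DUC data) could look like:

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
print(scorer.score("the reference summary", "the generated summary"))
```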