Neural Abstractive Text Summarization with Sequence-to-Sequence Models

Tian Shi, Yaser Keneshloo, Naren Ramakrishnan, C. Reddy
{"title":"基于序列到序列模型的神经抽象文本摘要","authors":"Tian Shi, Yaser Keneshloo, Naren Ramakrishnan, C. Reddy","doi":"10.1145/3419106","DOIUrl":null,"url":null,"abstract":"In the past few years, neural abstractive text summarization with sequence-to-sequence (seq2seq) models have gained a lot of popularity. Many interesting techniques have been proposed to improve seq2seq models, making them capable of handling different challenges, such as saliency, fluency and human readability, and generate high-quality summaries. Generally speaking, most of these techniques differ in one of these three categories: network structure, parameter inference, and decoding/generation. There are also other concerns, such as efficiency and parallelism for training a model. In this article, we provide a comprehensive literature survey on different seq2seq models for abstractive text summarization from the viewpoint of network structures, training strategies, and summary generation algorithms. Several models were first proposed for language modeling and generation tasks, such as machine translation, and later applied to abstractive text summarization. Hence, we also provide a brief review of these models. As part of this survey, we also develop an open source library, namely, Neural Abstractive Text Summarizer (NATS) toolkit, for the abstractive text summarization. An extensive set of experiments have been conducted on the widely used CNN/Daily Mail dataset to examine the effectiveness of several different neural network components. Finally, we benchmark two models implemented in NATS on the two recently released datasets, namely, Newsroom and Bytecup.","PeriodicalId":93404,"journal":{"name":"ACM/IMS transactions on data science","volume":"1197 1","pages":"1 - 37"},"PeriodicalIF":0.0000,"publicationDate":"2018-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"166","resultStr":"{\"title\":\"Neural Abstractive Text Summarization with Sequence-to-Sequence Models\",\"authors\":\"Tian Shi, Yaser Keneshloo, Naren Ramakrishnan, C. Reddy\",\"doi\":\"10.1145/3419106\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the past few years, neural abstractive text summarization with sequence-to-sequence (seq2seq) models have gained a lot of popularity. Many interesting techniques have been proposed to improve seq2seq models, making them capable of handling different challenges, such as saliency, fluency and human readability, and generate high-quality summaries. Generally speaking, most of these techniques differ in one of these three categories: network structure, parameter inference, and decoding/generation. There are also other concerns, such as efficiency and parallelism for training a model. In this article, we provide a comprehensive literature survey on different seq2seq models for abstractive text summarization from the viewpoint of network structures, training strategies, and summary generation algorithms. Several models were first proposed for language modeling and generation tasks, such as machine translation, and later applied to abstractive text summarization. Hence, we also provide a brief review of these models. As part of this survey, we also develop an open source library, namely, Neural Abstractive Text Summarizer (NATS) toolkit, for the abstractive text summarization. An extensive set of experiments have been conducted on the widely used CNN/Daily Mail dataset to examine the effectiveness of several different neural network components. 
Finally, we benchmark two models implemented in NATS on the two recently released datasets, namely, Newsroom and Bytecup.\",\"PeriodicalId\":93404,\"journal\":{\"name\":\"ACM/IMS transactions on data science\",\"volume\":\"1197 1\",\"pages\":\"1 - 37\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-12-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"166\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM/IMS transactions on data science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3419106\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM/IMS transactions on data science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3419106","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 166

Abstract

In the past few years, neural abstractive text summarization with sequence-to-sequence (seq2seq) models has gained a lot of popularity. Many interesting techniques have been proposed to improve seq2seq models, making them capable of handling different challenges, such as saliency, fluency, and human readability, and of generating high-quality summaries. Generally speaking, most of these techniques differ in one of three categories: network structure, parameter inference, and decoding/generation. There are also other concerns, such as efficiency and parallelism in model training. In this article, we provide a comprehensive literature survey of seq2seq models for abstractive text summarization from the viewpoint of network structures, training strategies, and summary-generation algorithms. Several of these models were first proposed for language modeling and generation tasks, such as machine translation, and later applied to abstractive text summarization; hence, we also provide a brief review of those models. As part of this survey, we also develop an open-source library, the Neural Abstractive Text Summarizer (NATS) toolkit, for abstractive text summarization. An extensive set of experiments has been conducted on the widely used CNN/Daily Mail dataset to examine the effectiveness of several different neural network components. Finally, we benchmark two models implemented in NATS on two recently released datasets, Newsroom and Bytecup.
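The first of the survey's three axes is network structure. As a concrete reference point, below is a minimal PyTorch sketch of the attention-based encoder-decoder that most surveyed seq2seq summarizers build on. All class names, layer choices, and dimensions here are illustrative assumptions, not the actual NATS toolkit API.

```python
# Minimal attention-based encoder-decoder sketch (illustrative, not NATS's API).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Seq2SeqAttention(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Bidirectional encoder reads the source article.
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True,
                               bidirectional=True)
        # Unidirectional decoder generates the summary token by token
        # (initializing it from the encoder's final state is omitted for brevity).
        self.decoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.attn = nn.Linear(2 * hid_dim + hid_dim, 1)   # additive-style score
        self.out = nn.Linear(hid_dim + 2 * hid_dim, vocab_size)

    def forward(self, src, tgt):
        enc_out, _ = self.encoder(self.embed(src))         # (B, S, 2H)
        dec_out, _ = self.decoder(self.embed(tgt))         # (B, T, H)
        logits = []
        for t in range(dec_out.size(1)):
            h_t = dec_out[:, t:t+1, :]                     # (B, 1, H)
            # Attention scores over all source positions.
            score = self.attn(torch.cat(
                [enc_out, h_t.expand(-1, enc_out.size(1), -1)], dim=-1))
            alpha = F.softmax(score, dim=1)                # (B, S, 1)
            context = (alpha * enc_out).sum(dim=1, keepdim=True)  # (B, 1, 2H)
            logits.append(self.out(torch.cat([h_t, context], dim=-1)))
        return torch.cat(logits, dim=1)                    # (B, T, V)

# Teacher-forced training step on random toy data:
model = Seq2SeqAttention()
src = torch.randint(0, 10000, (2, 40))   # two source articles, 40 tokens each
tgt = torch.randint(0, 10000, (2, 12))   # two summary prefixes, 12 tokens each
logits = model(src, tgt)                 # (2, 12, 10000)
loss = F.cross_entropy(logits[:, :-1].reshape(-1, 10000),
                       tgt[:, 1:].reshape(-1))
```

Training with teacher forcing, as above, corresponds to the survey's "parameter inference" axis; the surveyed extensions (pointer/copy mechanisms, coverage, reinforcement-learning objectives) layer on top of this skeleton.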
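The decoding/generation axis is typically realized with beam search at test time. The sketch below assumes a hypothetical `step_fn(prefix)` returning a list of log-probabilities over the vocabulary for the next token; production decoders additionally batch beams on the GPU and apply coverage or length penalties, which are omitted here.

```python
# Minimal beam-search sketch (illustrative; `step_fn` is a hypothetical callable).
import math

def beam_search(step_fn, bos_id, eos_id, beam_width=4, max_len=20):
    # Each hypothesis is (cumulative log-probability, token list).
    beams = [(0.0, [bos_id])]
    finished = []
    for _ in range(max_len):
        candidates = []
        for score, seq in beams:
            if seq[-1] == eos_id:            # keep completed hypotheses
                finished.append((score, seq))
                continue
            log_probs = step_fn(seq)         # list: token id -> log p(next token)
            # Expand with the beam_width best next tokens.
            top = sorted(range(len(log_probs)),
                         key=lambda i: log_probs[i], reverse=True)[:beam_width]
            for tok in top:
                candidates.append((score + log_probs[tok], seq + [tok]))
        if not candidates:
            break
        beams = sorted(candidates, reverse=True)[:beam_width]
    finished.extend(beams)
    # Length-normalize so longer summaries are not unfairly penalized.
    return max(finished, key=lambda x: x[0] / len(x[1]))[1]
```

Setting `beam_width=1` recovers greedy decoding; larger widths trade decoding time for summary quality.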