Joint Hierarchical Semantic Clipping and Sentence Extraction for Document Summarization

Wanying Yan, Junjun Guo
{"title":"Joint Hierarchical Semantic Clipping and Sentence Extraction for Document Summarization","authors":"Wanying Yan, Junjun Guo","doi":"10.3745/JIPS.04.0181","DOIUrl":null,"url":null,"abstract":"Extractive document summarization aims to select a few sentences while preserving its main information on a given document, but the current extractive methods do not consider the sentence-information repeat problem especially for news document summarization. In view of the importance and redundancy of news text information, in this paper, we propose a neural extractive summarization approach with joint sentence semantic clipping and selection, which can effectively solve the problem of news text summary sentence repetition. Specifically, a hierarchical selective encoding network is constructed for both sentence-level and documentlevel document representations, and data containing important information is extracted on news text; a sentence extractor strategy is then adopted for joint scoring and redundant information clipping. This way, our model strikes a balance between important information extraction and redundant information filtering. Experimental results on both CNN/Daily Mail dataset and Court Public Opinion News dataset we built are presented to show the effectiveness of our proposed approach in terms of ROUGE metrics, especially for redundant information filtering.","PeriodicalId":415161,"journal":{"name":"J. Inf. Process. Syst.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"J. Inf. Process. Syst.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3745/JIPS.04.0181","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Extractive document summarization aims to select a few sentences from a given document while preserving its main information, but current extractive methods do not address the problem of repeated sentence information, especially in news document summarization. Given that news text contains both important and redundant information, we propose a neural extractive summarization approach that jointly performs sentence semantic clipping and sentence selection, which effectively reduces repetition among summary sentences. Specifically, a hierarchical selective encoding network is constructed to produce both sentence-level and document-level representations and to extract the information-bearing content of the news text; a sentence-extractor strategy then performs joint scoring and redundant-information clipping. In this way, our model strikes a balance between extracting important information and filtering out redundant information. Experimental results on the CNN/Daily Mail dataset and on a Court Public Opinion News dataset we built demonstrate the effectiveness of the proposed approach in terms of ROUGE metrics, especially for redundant-information filtering.
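The abstract names two components: a hierarchical selective encoder and a sentence extractor that jointly scores and clips redundancy. Below is a minimal PyTorch sketch of those two ideas, not the paper's actual architecture: all names, dimensions, the mean-pooled document vector, and the MMR-style greedy scoring are our assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HierarchicalSelectiveEncoder(nn.Module):
    """Two-level encoder: a word-level BiGRU builds one vector per sentence,
    a sentence-level BiGRU contextualizes those vectors across the document,
    and a selective gate filters each sentence state by the document vector."""

    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.word_rnn = nn.GRU(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        self.sent_rnn = nn.GRU(2 * hid_dim, hid_dim, bidirectional=True, batch_first=True)
        self.gate = nn.Linear(4 * hid_dim, 2 * hid_dim)

    def forward(self, doc):                              # doc: (n_sents, n_words) token ids
        words = self.embed(doc)                          # (n_sents, n_words, emb_dim)
        _, h = self.word_rnn(words)                      # h: (2, n_sents, hid_dim)
        sent_vecs = torch.cat([h[0], h[1]], -1)[None]    # (1, n_sents, 2*hid_dim)
        sent_states, _ = self.sent_rnn(sent_vecs)        # document-aware sentence states
        doc_vec = sent_states.mean(dim=1, keepdim=True)  # assumed document representation
        g = torch.sigmoid(self.gate(torch.cat(
            [sent_states, doc_vec.expand_as(sent_states)], -1)))
        return sent_states * g, doc_vec                  # gated states keep salient content


def extract_summary(sent_states, doc_vec, k=3, lam=0.6):
    """Greedy joint scoring: salience (cosine similarity to the document
    vector) minus a redundancy penalty against already-picked sentences."""
    states = F.normalize(sent_states.squeeze(0), dim=-1)   # (n_sents, 2*hid_dim)
    doc = F.normalize(doc_vec.reshape(-1), dim=-1)         # (2*hid_dim,)
    salience = states @ doc                                # (n_sents,)
    picked = []
    for _ in range(min(k, states.size(0))):
        if picked:
            redundancy = (states @ states[picked].T).max(dim=-1).values
        else:
            redundancy = torch.zeros_like(salience)
        score = lam * salience - (1 - lam) * redundancy    # clip repeated content
        score[picked] = float("-inf")                      # never re-pick a sentence
        picked.append(int(score.argmax()))
    return sorted(picked)                                  # original sentence order


if __name__ == "__main__":
    enc = HierarchicalSelectiveEncoder(vocab_size=10_000)
    doc = torch.randint(0, 10_000, (8, 20))    # 8 sentences, 20 tokens each
    states, doc_vec = enc(doc)
    print(extract_summary(states, doc_vec, k=3))
```

The trade-off parameter `lam` plays the role the abstract describes for balancing important-information extraction against redundant-information filtering: higher values favor salience, lower values penalize overlap with already-selected sentences more strongly.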