A novel method for document summarization using Word2Vec

Zhibo Wang, Long Ma, Yanqing Zhang
{"title":"A novel method for document summarization using Word2Vec","authors":"Zhibo Wang, Long Ma, Yanqing Zhang","doi":"10.1109/ICCI-CC.2016.7862087","DOIUrl":null,"url":null,"abstract":"Texting mining is a process to extract useful patterns and information from large volume of unstructured text data. Unlike other quantitative data, unstructured text data cannot be directly utilized in machine learning models. Hence, data pre-processing is an essential step to remove vague or redundant data such as punctuations, stop-words, low-frequency words in the corpus, and re-organize the data in a format that computers can understand. Though existing approaches are able to eliminate some symbols and stop-words during the pre-processing step, a portion of words are not used to describe the documents' topics. These irrelevant words not only waste the storage that lessen the efficiency of computing, but also lead to confounding results. In this paper, we propose an optimization method to further remove these irrelevant words which are not highly correlated to the documents' topics. Experimental results indicate that our proposed method significantly compresses the documents, while the resulting documents remain a high discrimination in classification tasks; additionally, storage is greatly reduced according to various criteria.","PeriodicalId":135701,"journal":{"name":"2016 IEEE 15th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE 15th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCI-CC.2016.7862087","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

Text mining is the process of extracting useful patterns and information from large volumes of unstructured text data. Unlike other quantitative data, unstructured text cannot be fed directly into machine learning models. Data pre-processing is therefore an essential step: it removes vague or redundant elements such as punctuation, stop-words, and low-frequency words in the corpus, and re-organizes the data into a format computers can process. Although existing approaches eliminate some symbols and stop-words during pre-processing, a portion of the remaining words still do not describe the documents' topics. These irrelevant words not only waste storage and reduce computational efficiency, but also lead to confounding results. In this paper, we propose an optimization method that further removes words that are not highly correlated with the documents' topics. Experimental results indicate that our method significantly compresses the documents while the resulting documents retain high discriminative power in classification tasks; in addition, storage is greatly reduced according to various criteria.
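The abstract describes a two-stage pipeline: standard pre-processing (punctuation, stop-word, and low-frequency-word removal) followed by Word2Vec-based filtering of words weakly related to a document's topic. The paper's exact correlation criterion is not given in the abstract, so the sketch below is only a plausible illustration: it assumes the relevance of a word is its mean cosine similarity to a set of topic keywords, and both the `topic_keywords` list and the `THRESHOLD` value are hypothetical choices, not the authors' method.

```python
# Hedged sketch of pre-processing plus Word2Vec-based filtering of words
# weakly related to a document's topic. The mean-cosine-similarity criterion,
# the topic keywords, and the threshold are assumptions for illustration only.
import re
from gensim.models import Word2Vec

# Tiny illustrative stop-word list; a real pipeline would use a fuller one.
STOP_WORDS = {"the", "a", "an", "is", "are", "to", "of", "in", "and", "was"}

def preprocess(text):
    """Lowercase, strip punctuation, and drop stop-words."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [t for t in tokens if t not in STOP_WORDS]

corpus = [
    "Neural networks learn distributed word representations from text",
    "Word embeddings capture semantic similarity between words",
    "The weather today is sunny and the picnic was fun",
]
docs = [preprocess(d) for d in corpus]

# Train Word2Vec on the (toy) corpus; min_count=1 only because it is tiny.
model = Word2Vec(sentences=docs, vector_size=50, window=3,
                 min_count=1, seed=42, workers=1)

# Hypothetical topic keywords and threshold -- both are assumptions.
topic_keywords = ["word", "embeddings", "semantic"]
THRESHOLD = 0.0  # similarities on a toy corpus hover near zero

def filter_irrelevant(tokens, keywords, threshold):
    """Keep tokens whose mean cosine similarity to the topic keywords
    exceeds the threshold; discard the rest as topic-irrelevant."""
    kept = []
    for t in tokens:
        if t not in model.wv:
            continue
        sims = [model.wv.similarity(t, k) for k in keywords if k in model.wv]
        if sims and sum(sims) / len(sims) > threshold:
            kept.append(t)
    return kept

for tokens in docs:
    print(filter_irrelevant(tokens, topic_keywords, THRESHOLD))
```

On a realistic corpus the threshold would be tuned so that the compressed documents still separate well in a downstream classifier, which is the trade-off the abstract's experiments evaluate.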