Approaches to improve preprocessing for Latent Dirichlet Allocation topic modeling

IF 6.7 · CAS Tier 1 (Computer Science) · JCR Q1, Computer Science, Artificial Intelligence
{"title":"改进潜在德里希勒分配主题建模预处理的方法","authors":"","doi":"10.1016/j.dss.2024.114310","DOIUrl":null,"url":null,"abstract":"<div><p>As a part of natural language processing (NLP), the intent of topic modeling is to identify topics in textual corpora with limited human input. Current topic modeling techniques, like Latent Dirichlet Allocation (LDA), are limited in the pre-processing steps and currently require human judgement, increasing analysis time and opportunities for error. The purpose of this research is to allay some of those limitations by introducing new approaches to improve coherence without adding computational complexity and provide an objective method for determining the number of topics within a corpus. First, we identify a requirement for a more robust stop words list and introduce a new dimensionality-reduction heuristic that exploits the number of words within a document to infer importance to word choice. Second, we develop an eigenvalue technique to determine the number of topics within a corpus. Third, we combine all of these techniques into the Zimm Approach, which produces higher quality results than LDA in determining the number of topics within a corpus. The Zimm Approach, when tested against various subsets of the 20newsgroup dataset, produced the correct number of topics in 7 of 9 subsets vs. 0 of 9 using highest coherence value produced by LDA.</p></div>","PeriodicalId":55181,"journal":{"name":"Decision Support Systems","volume":null,"pages":null},"PeriodicalIF":6.7000,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Approaches to improve preprocessing for Latent Dirichlet Allocation topic modeling\",\"authors\":\"\",\"doi\":\"10.1016/j.dss.2024.114310\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>As a part of natural language processing (NLP), the intent of topic modeling is to identify topics in textual corpora with limited human input. Current topic modeling techniques, like Latent Dirichlet Allocation (LDA), are limited in the pre-processing steps and currently require human judgement, increasing analysis time and opportunities for error. The purpose of this research is to allay some of those limitations by introducing new approaches to improve coherence without adding computational complexity and provide an objective method for determining the number of topics within a corpus. First, we identify a requirement for a more robust stop words list and introduce a new dimensionality-reduction heuristic that exploits the number of words within a document to infer importance to word choice. Second, we develop an eigenvalue technique to determine the number of topics within a corpus. Third, we combine all of these techniques into the Zimm Approach, which produces higher quality results than LDA in determining the number of topics within a corpus. The Zimm Approach, when tested against various subsets of the 20newsgroup dataset, produced the correct number of topics in 7 of 9 subsets vs. 
0 of 9 using highest coherence value produced by LDA.</p></div>\",\"PeriodicalId\":55181,\"journal\":{\"name\":\"Decision Support Systems\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":6.7000,\"publicationDate\":\"2024-08-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Decision Support Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S016792362400143X\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Decision Support Systems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S016792362400143X","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract


As a part of natural language processing (NLP), the intent of topic modeling is to identify topics in textual corpora with limited human input. Current topic modeling techniques, such as Latent Dirichlet Allocation (LDA), are limited in their pre-processing steps and currently require human judgement, increasing analysis time and opportunities for error. The purpose of this research is to allay some of those limitations by introducing new approaches that improve coherence without adding computational complexity and provide an objective method for determining the number of topics within a corpus. First, we identify the need for a more robust stop-word list and introduce a new dimensionality-reduction heuristic that exploits the number of words within a document to infer the importance of word choice. Second, we develop an eigenvalue technique to determine the number of topics within a corpus. Third, we combine all of these techniques into the Zimm Approach, which produces higher-quality results than LDA in determining the number of topics within a corpus. When tested against various subsets of the 20newsgroup dataset, the Zimm Approach produced the correct number of topics in 7 of 9 subsets, versus 0 of 9 when selecting the topic count by the highest coherence value produced by LDA.
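The abstract does not describe the Zimm Approach or its eigenvalue technique in enough detail to reproduce them, but the baseline it is compared against (fitting LDA at several candidate topic counts and picking the count with the highest coherence) is standard. The following is a minimal sketch of that baseline only, assuming gensim is installed and that `docs` is a list of tokenized documents; the function name and the candidate range are illustrative, not from the paper.

```python
# Baseline from the abstract: fit LDA at several topic counts and select
# the count with the highest c_v coherence. This is NOT the paper's Zimm
# Approach, which is not specified in the abstract.
from gensim.corpora import Dictionary
from gensim.models import LdaModel, CoherenceModel

def pick_topic_count_by_coherence(docs, k_range=range(2, 21)):
    """Return (k, score) for the topic count with the highest coherence."""
    dictionary = Dictionary(docs)
    corpus = [dictionary.doc2bow(doc) for doc in docs]
    best_k, best_score = None, float("-inf")
    for k in k_range:
        lda = LdaModel(corpus=corpus, id2word=dictionary,
                       num_topics=k, random_state=0, passes=5)
        score = CoherenceModel(model=lda, texts=docs,
                               dictionary=dictionary,
                               coherence="c_v").get_coherence()
        if score > best_score:
            best_k, best_score = k, score
    return best_k, best_score
```

On the 20newsgroup subsets, the abstract reports that this highest-coherence selection recovered the correct topic count in 0 of 9 cases, which is the failure mode the eigenvalue-based Zimm Approach is designed to address.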

Source journal
Decision Support Systems (Engineering/Technology · Computer Science: Artificial Intelligence)
CiteScore: 14.70
Self-citation rate: 6.70%
Annual publications: 119
Review time: 13 months
Journal description: The common thread of articles published in Decision Support Systems is their relevance to theoretical and technical issues in the support of enhanced decision making. The areas addressed may include foundations, functionality, interfaces, implementation, impacts, and evaluation of decision support systems (DSSs).