Some Methods for Posterior Inference in Topic Models

Xuan Bui, Tu Vu, Khoat Than
DOI: 10.32913/RD-ICT.VOL2.NO15.687
Journal: Research and Development on Information and Communication Technology
Published: 2018-08-11 (Journal Article)
Citations: 0

Abstract

The problem of posterior inference for individual documents is particularly important in topic models. However, it is often intractable in practice. Many existing methods for posterior inference, such as variational Bayes, collapsed variational Bayes, and collapsed Gibbs sampling, do not have any guarantee on either the quality or the rate of convergence. The online maximum a posteriori estimation (OPE) algorithm has more attractive properties than other inference approaches. In this paper, we introduce four algorithms that improve OPE (namely, OPE1, OPE2, OPE3, and OPE4) by combining two stochastic bounds. Our new algorithms not only preserve the key advantages of OPE but can also sometimes perform significantly better than OPE. These algorithms were employed to develop new, effective methods for learning topic models from massive/streaming text collections. Empirical results show that our approaches were often more efficient than state-of-the-art methods.
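The abstract's starting point, OPE, performs MAP inference of a single document's topic mixture by maximizing the log posterior over the topic simplex, stochastically alternating between the log-likelihood and log-prior parts of the objective and taking Frank-Wolfe-style vertex steps. The sketch below is an illustrative OPE-style update under standard LDA notation, not the paper's OPE1–OPE4 variants; the function name `ope_infer`, the step-size schedule, and the 50/50 choice between the two objective parts are assumptions for illustration.

```python
import numpy as np

def ope_infer(d_counts, beta, alpha=0.01, iters=100, seed=0):
    """Illustrative OPE-style MAP inference of one document's topic mixture.

    d_counts: (V,) word counts of the document
    beta:     (K, V) topic-word distributions (each row sums to 1, entries > 0)
    alpha:    Dirichlet prior parameter (alpha < 1 encourages sparse mixtures)
    """
    rng = np.random.default_rng(seed)
    K = beta.shape[0]
    theta = np.full(K, 1.0 / K)  # start at the centre of the simplex
    for t in range(1, iters + 1):
        # Stochastically pick one of the two parts of the log posterior.
        if rng.random() < 0.5:
            # Gradient of the likelihood term: sum_j d_j * log(theta @ beta[:, j])
            grad = beta @ (d_counts / np.maximum(theta @ beta, 1e-12))
        else:
            # Gradient of the prior term: (alpha - 1) * sum_k log(theta_k)
            grad = (alpha - 1.0) / np.maximum(theta, 1e-12)
        i = int(np.argmax(grad))      # Frank-Wolfe: move toward the best vertex
        step = 2.0 / (t + 2)          # diminishing step keeps theta in the simplex
        theta = (1.0 - step) * theta + step * np.eye(K)[i]
    return theta
```

Because every update is a convex combination of the current point and a simplex vertex, the iterate stays a valid probability vector throughout, with no projection step needed.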