{"title":"Partitioning algorithms for improving efficiency of topic modeling parallelization","authors":"Hung Nghiep Tran, A. Takasu","doi":"10.1109/PACRIM.2015.7334854","DOIUrl":null,"url":null,"abstract":"Topic modeling is a very powerful technique in data analysis and data mining but it is generally slow. Many parallelization approaches have been proposed to speed up the learning process. However, they are usually not very efficient because of the many kinds of overhead, especially the load-balancing problem. We address this problem by proposing three partitioning algorithms, which either run more quickly or achieve better load balance than current partitioning algorithms. These algorithms can easily be extended to improve parallelization efficiency on other topic models similar to LDA, e.g., Bag of Timestamps, which is an extension of LDA with time information. We evaluate these algorithms on two popular datasets, NIPS and NYTimes. We also build a dataset containing over 1,000,000 scientific publications in the computer science domain from 1951 to 2010 to experiment with Bag of Timestamps parallelization, which we design to demonstrate the proposed algorithms' extensibility. The results strongly confirm the advantages of these algorithms.","PeriodicalId":350052,"journal":{"name":"2015 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PACRIM.2015.7334854","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Topic modeling is a powerful technique for data analysis and data mining, but it is generally slow. Many parallelization approaches have been proposed to speed up the learning process. However, they are usually not very efficient because of various kinds of overhead, especially load imbalance. We address this problem by proposing three partitioning algorithms, which either run more quickly or achieve better load balance than current partitioning algorithms. These algorithms can easily be extended to improve parallelization efficiency for other topic models similar to LDA, e.g., Bag of Timestamps, an extension of LDA with time information. We evaluate these algorithms on two popular datasets, NIPS and NYTimes. We also build a dataset of over 1,000,000 scientific publications in the computer science domain from 1951 to 2010 to experiment with Bag of Timestamps parallelization, which we designed to demonstrate the proposed algorithms' extensibility. The results strongly confirm the advantages of these algorithms.
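The abstract does not spell out the three partitioning algorithms themselves. As a rough illustration of the load-balancing problem they target, the sketch below (an assumption, not the paper's method) shows a simple greedy longest-processing-time partitioner that spreads documents across workers so that per-worker token totals, which dominate Gibbs-sampling cost in LDA-style models, stay roughly equal. The function name `greedy_partition` and the toy inputs are hypothetical.

```python
import heapq

def greedy_partition(doc_token_counts, num_workers):
    """Assign documents to workers so per-worker token totals are balanced.

    Longest-processing-time (LPT) greedy heuristic: sort documents by token
    count (descending) and always give the next document to the currently
    lightest worker. Illustrative sketch only; not the paper's algorithms.
    """
    # Min-heap of (current_load, worker_id); the lightest worker is always on top.
    heap = [(0, w) for w in range(num_workers)]
    heapq.heapify(heap)
    assignment = {}
    for doc_id, tokens in sorted(
        enumerate(doc_token_counts), key=lambda x: x[1], reverse=True
    ):
        load, worker = heapq.heappop(heap)
        assignment[doc_id] = worker
        heapq.heappush(heap, (load + tokens, worker))
    return assignment

if __name__ == "__main__":
    # Toy example: per-document token counts distributed over 3 workers.
    counts = [120, 300, 45, 220, 90, 150, 60]
    print(greedy_partition(counts, 3))
```

A heuristic like this trades partition quality for speed; the paper's contribution lies in partitioning schemes that improve that trade-off for parallel LDA-style samplers.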