{"title":"Evaluating topic quality using model clustering","authors":"V. Mehta, R. Caceres, K. Carter","doi":"10.1109/CIDM.2014.7008665","DOIUrl":null,"url":null,"abstract":"Topic modeling continues to grow as a popular technique for finding hidden patterns, as well as grouping collections of new types of text and non-text data. Recent years have witnessed a growing body of work in developing metrics and techniques for evaluating the quality of topic models and the topics they generate. This is particularly true for text data where significant attention has been given to the semantic interpretability of topics using measures such as coherence. It has been shown however that topic assessments based on coherence metrics do not always align well with human judgment. Other efforts have examined the utility of information-theoretic distance metrics for evaluating topic quality in connection with semantic interpretability. Although there has been progress in evaluating interpretability of topics, the existing intrinsic evaluation metrics do not address some of the other aspects of concern in topic modeling such as: the number of topics to select, the ability to align topics from different models, and assessing the quality of training data. Here we propose an alternative metric for characterizing topic quality that addresses all three aforementioned issues. Our approach is based on clustering topics, and using the silhouette measure, a popular clustering index, for characterizing the quality of topics. We illustrate the utility of this approach in addressing the other topic modeling concerns noted above. Since this metric is not focused on interpretability, we believe it can be applied more broadly to text as well as non-text data. In this paper however we focus on the application of this metric to archival and non-archival text data.","PeriodicalId":117542,"journal":{"name":"2014 IEEE Symposium on Computational Intelligence and Data Mining (CIDM)","volume":"485 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"15","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 IEEE Symposium on Computational Intelligence and Data Mining (CIDM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CIDM.2014.7008665","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 15
Abstract
Topic modeling continues to grow as a popular technique for finding hidden patterns and for grouping collections of new types of text and non-text data. Recent years have seen a growing body of work on metrics and techniques for evaluating the quality of topic models and the topics they generate. This is particularly true for text data, where significant attention has been given to the semantic interpretability of topics using measures such as coherence. It has been shown, however, that topic assessments based on coherence metrics do not always align well with human judgment. Other efforts have examined the utility of information-theoretic distance metrics for evaluating topic quality in connection with semantic interpretability. Although there has been progress in evaluating the interpretability of topics, existing intrinsic evaluation metrics do not address some of the other concerns in topic modeling, such as selecting the number of topics, aligning topics from different models, and assessing the quality of training data. Here we propose an alternative metric for characterizing topic quality that addresses all three of these issues. Our approach clusters topics and uses the silhouette measure, a popular clustering index, to characterize topic quality. We illustrate the utility of this approach in addressing the other topic modeling concerns noted above. Since this metric is not focused on interpretability, we believe it can be applied more broadly to text as well as non-text data. In this paper, however, we focus on the application of this metric to archival and non-archival text data.
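To make the abstract's central idea concrete, below is a minimal sketch that clusters topic-word distributions and scores the result with the mean silhouette value. The specific choices here, Jensen-Shannon distance, average-linkage agglomerative clustering, and synthetic Dirichlet-sampled "topics", are illustrative assumptions; the abstract does not specify the authors' exact distance measure or clustering algorithm.

```python
# Sketch: score a set of topics by clustering them and computing the
# silhouette index. Assumes scikit-learn >= 1.2 (for metric="precomputed").
import numpy as np
from scipy.spatial.distance import jensenshannon
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

# Stand-in for topics pooled from several model runs: each row is a topic's
# probability distribution over a shared vocabulary of 50 terms.
topics = rng.dirichlet(alpha=np.full(50, 0.1), size=30)

# Pairwise Jensen-Shannon distances between topic distributions
# (an information-theoretic distance, as mentioned in the abstract).
n = len(topics)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = jensenshannon(topics[i], topics[j])

# Cluster topics on the precomputed distance matrix; the mean silhouette
# value then serves as an overall topic-quality score.
labels = AgglomerativeClustering(
    n_clusters=5, metric="precomputed", linkage="average"
).fit_predict(dist)
score = silhouette_score(dist, labels, metric="precomputed")
print(f"mean silhouette: {score:.3f}")
```

Sweeping `n_clusters` and comparing the resulting silhouette scores is one plausible way such a measure could inform the choice of topic count, one of the three concerns the abstract raises; aligning topics across models and flagging low-quality training data would follow from inspecting cluster membership and per-topic silhouette values.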