Semantic Topic Extraction from Bangla News Corpus Using LDA and BERT-LDA
P. Paul, Md Shihab Uddin, M. T. Ahmed, Mohammed Moshiul Hoque, Maqsudur Rahman
2022 25th International Conference on Computer and Information Technology (ICCIT), 17 December 2022. DOI: 10.1109/ICCIT57492.2022.10055173
Topic modeling techniques are extensively employed in Natural Language Processing to infer topics from unstructured text data. Latent Dirichlet Allocation (LDA), a popular topic modeling technique, can automatically identify topics from a large collection of textual documents. LDA-based topic models, however, may not always yield good results on their own. Clustering, one of the most efficient unsupervised machine learning methods, is often employed in applications such as topic modeling and information extraction from unstructured text. In this study, a hybrid clustering-based approach that combines Bidirectional Encoder Representations from Transformers (BERT) with LDA is thoroughly investigated on a large Bangla textual dataset. BERT supplies contextual embeddings that are combined with the LDA topic representations. Experiments with this hybrid model demonstrate its effectiveness in clustering similar topics from a novel dataset of Bangla news articles. The results show that clustering with the BERT-LDA model helps infer more coherent topics: on our dataset, LDA achieves a maximum coherence value of 0.63, while the BERT-LDA model reaches 0.66.
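To make the general idea concrete, the following is a minimal Python sketch of a BERT-LDA hybrid of the kind the abstract describes: LDA document-topic vectors are concatenated with BERT contextual embeddings and the combined vectors are clustered, with topic coherence measured via gensim. The toy Bangla documents, the multilingual SentenceTransformer model, the KMeans clusterer, and the weighting factor gamma are illustrative assumptions, not the paper's exact pipeline or settings.

# Hedged sketch of a BERT-LDA hybrid: LDA topic vectors + BERT embeddings, then clustering.
# Assumptions (not from the paper): toy corpus, multilingual MiniLM model, KMeans, gamma weight.
import numpy as np
from gensim import corpora
from gensim.models import LdaModel, CoherenceModel
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# Toy "corpus": each document is a list of pre-processed Bangla tokens.
docs = [
    ["নির্বাচন", "ভোট", "প্রার্থী", "সরকার"],
    ["সরকার", "নির্বাচন", "ভোট", "দল"],
    ["ক্রিকেট", "ম্যাচ", "রান", "উইকেট"],
    ["ম্যাচ", "ক্রিকেট", "উইকেট", "খেলা"],
    ["বাজার", "দাম", "টাকা", "পণ্য"],
    ["দাম", "বাজার", "পণ্য", "টাকা"],
]

# --- LDA document-topic vectors ---
dictionary = corpora.Dictionary(docs)
bow_corpus = [dictionary.doc2bow(doc) for doc in docs]
num_topics = 3
lda = LdaModel(bow_corpus, num_topics=num_topics, id2word=dictionary, passes=20, random_state=0)

def lda_vector(bow):
    """Dense topic distribution for one document."""
    dense = np.zeros(num_topics)
    for topic_id, prob in lda.get_document_topics(bow, minimum_probability=0.0):
        dense[topic_id] = prob
    return dense

lda_vecs = np.array([lda_vector(bow) for bow in bow_corpus])

# --- BERT contextual embeddings (multilingual model chosen as an assumption) ---
bert = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
bert_vecs = bert.encode([" ".join(doc) for doc in docs])

# --- Concatenate both representations and cluster ---
gamma = 15  # assumed weight balancing the LDA features against the BERT features
hybrid_vecs = np.hstack([lda_vecs * gamma, bert_vecs])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(hybrid_vecs)
print("cluster assignments:", labels)

# --- Topic coherence (c_v), the metric reported in the abstract ---
coherence = CoherenceModel(model=lda, texts=docs, dictionary=dictionary,
                           coherence="c_v").get_coherence()
print("LDA c_v coherence:", coherence)

In this sketch the coherence score is computed only for the plain LDA topics; evaluating the clustered BERT-LDA output would additionally require extracting representative terms per cluster, which is omitted here for brevity.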