{"title":"一种基于潜在狄利克雷分配的本体术语提取方法","authors":"Y. Jing, Wang Junli, Zhao Xiaodong","doi":"10.1109/MINES.2012.71","DOIUrl":null,"url":null,"abstract":"Ontology plays an important part on Semantic Web, Information Retrieval, and Intelligent Information Integration etc. Ontology learning gets widely studied due to many problems in totally manual ontology construction. Term extraction influences many respects of ontology learning as it's the basis of ontology learning hierarchical structure. This paper mines topics of the corpus based on Latent Dirichlet Allocation (LDA) which uses Variational Inference and Expectation-Maximization (EM) Algorithm to estimate model parameters. With the help of irrelevant vocabulary, the paper provides better experimental results which show that the distribution of topics on terms reveals latent semantic features of the corpus and relevance among words.","PeriodicalId":208089,"journal":{"name":"2012 Fourth International Conference on Multimedia Information Networking and Security","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"An Ontology Term Extracting Method Based on Latent Dirichlet Allocation\",\"authors\":\"Y. Jing, Wang Junli, Zhao Xiaodong\",\"doi\":\"10.1109/MINES.2012.71\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Ontology plays an important part on Semantic Web, Information Retrieval, and Intelligent Information Integration etc. Ontology learning gets widely studied due to many problems in totally manual ontology construction. Term extraction influences many respects of ontology learning as it's the basis of ontology learning hierarchical structure. This paper mines topics of the corpus based on Latent Dirichlet Allocation (LDA) which uses Variational Inference and Expectation-Maximization (EM) Algorithm to estimate model parameters. With the help of irrelevant vocabulary, the paper provides better experimental results which show that the distribution of topics on terms reveals latent semantic features of the corpus and relevance among words.\",\"PeriodicalId\":208089,\"journal\":{\"name\":\"2012 Fourth International Conference on Multimedia Information Networking and Security\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-11-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2012 Fourth International Conference on Multimedia Information Networking and Security\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/MINES.2012.71\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 Fourth International Conference on Multimedia Information Networking and Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MINES.2012.71","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
An Ontology Term Extracting Method Based on Latent Dirichlet Allocation
Ontology plays an important role in the Semantic Web, Information Retrieval, Intelligent Information Integration, and related areas. Ontology learning has been widely studied because fully manual ontology construction suffers from many problems. Term extraction affects many aspects of ontology learning, as it forms the basis of the ontology's hierarchical structure. This paper mines topics from a corpus with Latent Dirichlet Allocation (LDA), using Variational Inference and the Expectation-Maximization (EM) algorithm to estimate the model parameters. By filtering out irrelevant vocabulary, the paper obtains improved experimental results, which show that the topic-term distributions reveal latent semantic features of the corpus and the relevance among words.
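As a rough illustration of the pipeline the abstract describes (not the authors' original implementation), the sketch below fits an LDA model with scikit-learn, whose batch learning mode estimates parameters with variational EM, and reads off high-probability words per topic as candidate ontology terms. The toy corpus, topic count, and use of an English stop-word list to stand in for the "irrelevant vocabulary" filter are all assumptions for illustration.

```python
# Minimal sketch: LDA-based candidate term extraction with scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder corpus; the paper works on a real domain corpus.
corpus = [
    "ontology learning extracts terms and relations from domain text",
    "latent dirichlet allocation models documents as mixtures of topics",
    "term extraction is the basis of the ontology concept hierarchy",
]

# Filter irrelevant vocabulary (here approximated by English stop words).
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(corpus)

# learning_method="batch" runs variational EM over the whole corpus.
lda = LatentDirichletAllocation(n_components=2, learning_method="batch",
                                max_iter=50, random_state=0)
lda.fit(doc_term)

# components_[k] is proportional to the topic-term distribution of topic k;
# its highest-weight words are candidate ontology terms for that topic.
vocab = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_terms = [vocab[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {top_terms}")
```

In this sketch the topic-term weights play the role of the paper's "distribution of topics on terms": words that rank highly within a topic, after irrelevant vocabulary is removed, are treated as semantically related term candidates.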