{"title":"一种基于单词注意力网络的主题检测方法","authors":"Zhengwen Xie","doi":"10.2478/jdis-2021-0032","DOIUrl":null,"url":null,"abstract":"Abstract Purpose We proposed a method to represent scientific papers by a complex network, which combines the approaches of neural and complex networks. Design/methodology/approach Its novelty is representing a paper by a word branch, which carries the sequential structure of words in sentences. The branches are generated by the attention mechanism in deep learning models. We connected those branches at the positions of their common words to generate networks, called word-attention networks, and then detect their communities, defined as topics. Findings Those detected topics can carry the sequential structure of words in sentences, represent the intra- and inter-sentential dependencies among words, and reveal the roles of words playing in them by network indexes. Research limitations The parameter setting of our method may depend on practical data. Thus it needs human experience to find proper settings. Practical implications Our method is applied to the papers of the PNAS, where the discipline designations provided by authors are used as the golden labels of papers’ topics. 
Originality/value This empirical study shows that the proposed method outperforms the Latent Dirichlet Allocation and is more stable.","PeriodicalId":92237,"journal":{"name":"Journal of data and information science (Warsaw, Poland)","volume":"6 1","pages":"139 - 163"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"A Topic Detection Method Based on Word-attention Networks\",\"authors\":\"Zhengwen Xie\",\"doi\":\"10.2478/jdis-2021-0032\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract Purpose We proposed a method to represent scientific papers by a complex network, which combines the approaches of neural and complex networks. Design/methodology/approach Its novelty is representing a paper by a word branch, which carries the sequential structure of words in sentences. The branches are generated by the attention mechanism in deep learning models. We connected those branches at the positions of their common words to generate networks, called word-attention networks, and then detect their communities, defined as topics. Findings Those detected topics can carry the sequential structure of words in sentences, represent the intra- and inter-sentential dependencies among words, and reveal the roles of words playing in them by network indexes. Research limitations The parameter setting of our method may depend on practical data. Thus it needs human experience to find proper settings. Practical implications Our method is applied to the papers of the PNAS, where the discipline designations provided by authors are used as the golden labels of papers’ topics. 
Originality/value This empirical study shows that the proposed method outperforms the Latent Dirichlet Allocation and is more stable.\",\"PeriodicalId\":92237,\"journal\":{\"name\":\"Journal of data and information science (Warsaw, Poland)\",\"volume\":\"6 1\",\"pages\":\"139 - 163\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-08-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of data and information science (Warsaw, Poland)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2478/jdis-2021-0032\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of data and information science (Warsaw, Poland)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2478/jdis-2021-0032","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Topic Detection Method Based on Word-attention Networks
Abstract

Purpose: We propose a method that represents scientific papers as complex networks, combining neural-network and complex-network approaches.

Design/methodology/approach: The novelty lies in representing a paper by word branches, which carry the sequential structure of words in sentences. The branches are generated by the attention mechanism of deep learning models. We connect these branches at the positions of their shared words to form networks, called word-attention networks, and then detect their communities, which we define as topics.

Findings: The detected topics preserve the sequential structure of words in sentences, represent the intra- and inter-sentential dependencies among words, and reveal the roles words play in them through network indexes.

Research limitations: The parameter settings of our method may depend on the data at hand, so human experience is needed to find proper settings.

Practical implications: The method is applied to papers from PNAS, where the discipline designations provided by authors serve as gold labels for the papers' topics.

Originality/value: This empirical study shows that the proposed method outperforms Latent Dirichlet Allocation and is more stable.
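The pipeline the abstract describes — generate word branches, merge them at shared words into one network, then detect communities as topics — can be sketched roughly as follows. This is not the author's implementation: the hand-made word sequences below stand in for attention-generated branches, and plain connected components stand in for a proper community-detection algorithm (the paper's actual choices of attention model and community method are not specified here).

```python
from collections import defaultdict

def build_word_network(branches):
    """Merge word branches into one graph: consecutive words in a branch
    become edges, and branches connect wherever they share a word."""
    graph = defaultdict(set)
    for branch in branches:
        for a, b in zip(branch, branch[1:]):
            graph[a].add(b)
            graph[b].add(a)
    return graph

def connected_components(graph):
    """Crude stand-in for community detection: return the connected
    components of the word network as candidate topics."""
    seen, components = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(graph[n] - comp)
        seen |= comp
        components.append(comp)
    return components

# Toy "branches" (word sequences an attention head might extract);
# the first two share the word "detection", so they merge.
branches = [
    ["topic", "detection", "method"],
    ["detection", "network", "community"],
    ["gene", "expression"],
]
net = build_word_network(branches)
topics = connected_components(net)
# Two candidate topics: one merged from the first two branches,
# one from the isolated third branch.
```

Because edges follow word order within each branch, the merged graph retains the sequential structure the abstract emphasizes; network indexes (e.g. degree) on `net` would then indicate each word's role within its topic.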