Company Industry Classification with Neural and Attention-Based Learning Models
S. Slavov, Andrey Tagarev, Nikola Tulechki, S. Boytcheva
2019 Big Data, Knowledge and Control Systems Engineering (BdKCSE), November 2019
DOI: 10.1109/BdKCSE48644.2019.9010667
This paper compares different solutions to the task of classifying companies under an industry classification scheme. Recent advances in deep learning have shown strong performance on text classification tasks. The dataset consists of short textual descriptions of companies and their economic activities. The target classification scheme is built by mapping related open data in a semi-controlled manner, with target classes constructed bottom-up from DBpedia. The experiments use modified versions of BERT, XLNet, GloVe and ULMFiT with models pre-trained for English, and two simple perceptron-architecture models serve as baselines. The results show that the BERT and XLNet models achieve the best performance for multi-label classification of DBpedia company abstracts, even for unbalanced classes.
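To make the described setup concrete, the following is a minimal sketch (not the authors' code) of how a pre-trained English BERT model can be fine-tuned for multi-label industry classification of short company descriptions, using the Hugging Face transformers library. The industry label set, the example text, and the 0.5 decision threshold are hypothetical placeholders, not taken from the paper.

```python
# Minimal multi-label fine-tuning sketch with a pre-trained BERT model.
# Assumes: pip install torch transformers. Labels and text are hypothetical.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

INDUSTRIES = ["software", "banking", "retail", "pharmaceuticals"]  # hypothetical label set

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(INDUSTRIES),
    problem_type="multi_label_classification",  # BCE-with-logits loss, one sigmoid per class
)

# A short company description and its (possibly multiple) industry labels.
text = "Acme Corp develops mobile banking software for retail banks."
labels = torch.tensor([[1.0, 1.0, 0.0, 0.0]])  # multi-hot vector: software + banking

# One forward/backward pass; a real training loop would iterate over the dataset
# and call an optimizer step after each backward pass.
enc = tokenizer(text, truncation=True, padding=True, return_tensors="pt")
out = model(**enc, labels=labels)
out.loss.backward()

# At inference time each class is thresholded independently (multi-label).
probs = torch.sigmoid(out.logits)
predicted = [INDUSTRIES[i] for i, p in enumerate(probs[0]) if p > 0.5]
print(predicted)
```

The key difference from single-label classification is the `problem_type="multi_label_classification"` setting, which switches the loss to binary cross-entropy over independent per-class sigmoids so that a company can be assigned several industries at once; this also makes the approach tolerant of unbalanced classes when combined with per-class thresholds or class weighting.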