{"title":"语言表征学习模式的比较研究","authors":"Sanae Achsas, E. Nfaoui","doi":"10.1145/3419604.3419773","DOIUrl":null,"url":null,"abstract":"Recently, Natural Language Processing has shown significant development, especially in text mining and analysis. An important task in this area is learning vector-space representations of text. Since various machine learning algorithms require representing their inputs in a vector format. In this paper, we highlight the most important language representation learning models used in the literature, ranging from the free contextual approaches like word2vec and Glove until the appearance of recent modern contextualized approaches such as ELMo, BERT, and XLNet. We show and discuss their main architectures and their main strengths and limits.","PeriodicalId":250715,"journal":{"name":"Proceedings of the 13th International Conference on Intelligent Systems: Theories and Applications","volume":"103 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Language representation learning models: A comparative study\",\"authors\":\"Sanae Achsas, E. Nfaoui\",\"doi\":\"10.1145/3419604.3419773\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recently, Natural Language Processing has shown significant development, especially in text mining and analysis. An important task in this area is learning vector-space representations of text. Since various machine learning algorithms require representing their inputs in a vector format. In this paper, we highlight the most important language representation learning models used in the literature, ranging from the free contextual approaches like word2vec and Glove until the appearance of recent modern contextualized approaches such as ELMo, BERT, and XLNet. We show and discuss their main architectures and their main strengths and limits.\",\"PeriodicalId\":250715,\"journal\":{\"name\":\"Proceedings of the 13th International Conference on Intelligent Systems: Theories and Applications\",\"volume\":\"103 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-09-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 13th International Conference on Intelligent Systems: Theories and Applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3419604.3419773\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 13th International Conference on Intelligent Systems: Theories and Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3419604.3419773","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Language representation learning models: A comparative study
Recently, Natural Language Processing has seen significant development, especially in text mining and analysis. An important task in this area is learning vector-space representations of text, since most machine learning algorithms require their inputs to be represented as vectors. In this paper, we highlight the most important language representation learning models used in the literature, ranging from context-free approaches such as word2vec and GloVe to recent contextualized approaches such as ELMo, BERT, and XLNet. We present and discuss their main architectures, strengths, and limitations.
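To make the context-free vs. contextualized distinction concrete, here is a minimal Python sketch (not from the paper) assuming the `gensim` and `transformers` libraries are installed; the pretrained model names are illustrative choices, not ones prescribed by the authors. A context-free model assigns one fixed vector per word type, while a contextualized model produces a different vector for each token occurrence.

```python
# Minimal sketch: context-free (word2vec) vs. contextualized (BERT) embeddings.
# Assumes gensim and transformers are installed; model names are illustrative.
import gensim.downloader as api
import torch
from transformers import AutoModel, AutoTokenizer

# Context-free: word2vec gives "bank" the same 300-d vector in every sentence.
w2v = api.load("word2vec-google-news-300")
print(w2v["bank"].shape)  # (300,)

# Contextualized: BERT conditions each token's vector on its sentence.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

for sentence in ["I sat on the river bank.", "I deposited cash at the bank."]:
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape (1, seq_len, 768)
    # Locate the "bank" token and print part of its context-dependent vector.
    idx = inputs.input_ids[0].tolist().index(tokenizer.convert_tokens_to_ids("bank"))
    print(sentence, hidden[0, idx, :3])  # first 3 dims differ across contexts
```

Running the sketch shows identical word2vec vectors for "bank" in both sentences but distinct BERT vectors, which is the core advantage of contextualized representations for disambiguating polysemous words.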