Evaluation of Vector Transformations for Russian Word2Vec and FastText Embeddings
Olga Korogodina, Olesya Karpik, E. Klyshinsky
Proceedings of the 30th International Conference on Computer Graphics and Machine Vision (GraphiCon 2020), Part 2
Published: 2020-12-17
DOI: 10.51130/graphicon-2020-2-3-18
Citations: 5
Abstract
The authors of Word2Vec claimed that their technology could solve the word analogy problem using vector transformations in the induced vector space. However, practice demonstrates that this is not always true. In this paper, we investigate several Word2Vec and FastText models trained for the Russian language and find the reasons for this inconsistency. We found that different types of words demonstrate different behavior in the semantic space. FastText vectors tend to find phonological analogies, while Word2Vec vectors are better at finding relations among geographical proper names. However, we found that just four out of fifteen selected domains demonstrate an accuracy of more than 0.8. We also conclude that, in the general case, the task of word analogies cannot be solved using a random word pair taken from the two investigated categories. Our experiments demonstrated that in some cases the lengths of the vectors can differ by more than a factor of two. Calculating an average vector leads to a better solution here, since it is closer to more of the individual vectors.
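The two approaches the abstract contrasts, solving an analogy with the offset of a single word pair versus with an offset averaged over several pairs, can be sketched as follows. This is a minimal illustration with hypothetical toy vectors, not the embeddings or code from the paper; all words and coordinates are invented for the example.

```python
import numpy as np

# Hypothetical 3-d "embeddings" for a capital -> country relation.
emb = {
    "moscow":  np.array([0.9, 0.1, 0.0]),
    "russia":  np.array([0.85, 0.8, 0.05]),
    "paris":   np.array([0.1, 0.2, 0.0]),
    "france":  np.array([0.0, 1.0, 0.1]),
    "berlin":  np.array([0.5, 0.1, 0.3]),
    "germany": np.array([0.4, 0.9, 0.4]),
}

def nearest(vec, exclude):
    """Word whose embedding has the highest cosine similarity to vec."""
    best, best_sim = None, -2.0
    for word, v in emb.items():
        if word in exclude:
            continue
        sim = v @ vec / (np.linalg.norm(v) * np.linalg.norm(vec) + 1e-12)
        if sim > best_sim:
            best, best_sim = word, sim
    return best

# (a) Offset taken from one random pair, as in the classic analogy scheme:
#     moscow + (france - paris) ~ russia.
single_offset = emb["france"] - emb["paris"]

# (b) Offset averaged over several pairs of the same relation; the average
# is closer to the "typical" relation vector than any single pair's offset.
pairs = [("moscow", "russia"), ("paris", "france"), ("berlin", "germany")]
avg_offset = np.mean([emb[c] - emb[k] for k, c in pairs], axis=0)

print(nearest(emb["moscow"] + single_offset, exclude={"moscow"}))
print(nearest(emb["moscow"] + avg_offset, exclude={"moscow"}))
```

In this toy space both offsets recover the expected answer, but with real embeddings the single-pair offsets vary widely in direction and length (the abstract reports length differences of more than a factor of two), which is why the averaged offset gives a more reliable transformation.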