Out-of-vocabulary word embedding learning based on reading comprehension mechanism

Zhongyu Zhuang, Ziran Liang, Yanghui Rao, Haoran Xie, Fu Lee Wang

Natural Language Processing Journal, Volume 5, Article 100038. Published 2023-10-31. DOI: 10.1016/j.nlp.2023.100038. Open-access PDF: https://www.sciencedirect.com/science/article/pii/S2949719123000353/pdfft?md5=3f3a30c80249e275dc7207c19446fc4a&pid=1-s2.0-S2949719123000353-main.pdf
Currently, most natural language processing tasks use word embeddings to represent words. However, when out-of-vocabulary (OOV) words are encountered, the performance of downstream models that take word embeddings as input is often quite limited. To address this problem, recent methods infer the meaning of an OOV word mainly from two information sources: its morphological structure and the contexts in which it appears. However, because OOV words are by definition rare, general word embedding models struggle to learn them during pre-training, and this same rarity also causes a scarcity of usable contexts. We therefore introduce the concept of “similar contexts”, grounded in the classical distributional hypothesis from linguistics and inspired by human reading comprehension mechanisms, to compensate for the insufficient contexts available in previous work on OOV word embedding learning. Experimental results show that our model achieves the highest relative scores on both intrinsic and extrinsic evaluation tasks, demonstrating the positive effect of the “similar contexts” introduced in our model on OOV word embedding learning.
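To make the three information sources concrete, the sketch below illustrates one plausible reading of the approach: combine (1) a morphological signal from character n-grams of the OOV surface form, (2) an average over the word's observed contexts, and (3) "similar contexts" retrieved from a corpus by cosine similarity. This is a minimal illustration, not the authors' actual model; all function names (`char_ngrams`, `infer_oov_embedding`), the FastText-style n-gram vectors, the bag-of-words context averaging, and the equal-weight combination are assumptions introduced here for clarity.

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=5):
    """Character n-grams of the OOV surface form (its morphological structure)."""
    padded = f"<{word}>"
    return [padded[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(padded) - n + 1)]

def context_vector(context, word_vecs, dim):
    """Average the embeddings of the known words in one context window."""
    vecs = [word_vecs[w] for w in context if w in word_vecs]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def infer_oov_embedding(oov_word, contexts, corpus_contexts,
                        word_vecs, ngram_vecs, dim=100, k=5):
    """Infer an OOV embedding from morphology, observed contexts, and
    retrieved 'similar contexts' (hypothetical simplification)."""
    # 1. Morphology: average the vectors of the word's character n-grams.
    grams = [g for g in char_ngrams(oov_word) if g in ngram_vecs]
    morph = (np.mean([ngram_vecs[g] for g in grams], axis=0)
             if grams else np.zeros(dim))

    # 2. Observed contexts: average the context-word vectors.
    ctx_vecs = [context_vector(c, word_vecs, dim) for c in contexts]
    query = np.mean(ctx_vecs, axis=0) if ctx_vecs else np.zeros(dim)
    query = query / (np.linalg.norm(query) + 1e-8)

    # 3. "Similar contexts": retrieve the k corpus contexts closest (by
    #    cosine similarity) to the observed ones, easing context scarcity.
    pool = np.stack([context_vector(c, word_vecs, dim) for c in corpus_contexts])
    pool = pool / (np.linalg.norm(pool, axis=1, keepdims=True) + 1e-8)
    top_k = np.argsort(pool @ query)[-k:]
    similar = pool[top_k].mean(axis=0)

    # Equal weighting of the three signals is an arbitrary choice here.
    return (morph + query + similar) / 3.0
```

Given pre-trained `word_vecs` and `ngram_vecs` (e.g., exported from a FastText model), the function returns a vector in the same embedding space, so it can be fed directly to downstream models in place of a missing vocabulary entry.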