A. Khobragade, Rushikesh Mahajan, Hrithik Langi, Rohit Mundhe, S. Ghumbre
{"title":"知识图嵌入的有效负三元组采样","authors":"A. Khobragade, Rushikesh Mahajan, Hrithik Langi, Rohit Mundhe, S. Ghumbre","doi":"10.1080/02522667.2022.2133215","DOIUrl":null,"url":null,"abstract":"Abstract Knowledge graphs contain only positive triplet facts, whereas the negative triplets need to be generated precisely to train the embedding models. Early Uniform and Bernoulli sampling are applied but suffer’s from the zero loss problems during training, affecting the performance of embedding models. Recently, generative adversarial technic attended the dynamic negative sampling and obtained better performance by vanishing zero loss but on the adverse side of increasing the model complexity and training parameter. However, NSCaching balances the performance and complexity, generating a single negative triplet sample for each positive triplet that focuses on vanishing gradients. This paper addressed the zero loss training problem due to the low-scored negative triplet by proposing the extended version of NSCaching, to generate the high-scored negative triplet utilized to increase the training performance. The proposed method experimented with semantic matching knowledge graph embedding models on the benchmark datasets, where the results show the success on all evaluation metrics.","PeriodicalId":46518,"journal":{"name":"JOURNAL OF INFORMATION & OPTIMIZATION SCIENCES","volume":"43 1","pages":"2075 - 2087"},"PeriodicalIF":1.1000,"publicationDate":"2022-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Effective negative triplet sampling for knowledge graph embedding\",\"authors\":\"A. Khobragade, Rushikesh Mahajan, Hrithik Langi, Rohit Mundhe, S. 
Ghumbre\",\"doi\":\"10.1080/02522667.2022.2133215\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract Knowledge graphs contain only positive triplet facts, whereas the negative triplets need to be generated precisely to train the embedding models. Early Uniform and Bernoulli sampling are applied but suffer’s from the zero loss problems during training, affecting the performance of embedding models. Recently, generative adversarial technic attended the dynamic negative sampling and obtained better performance by vanishing zero loss but on the adverse side of increasing the model complexity and training parameter. However, NSCaching balances the performance and complexity, generating a single negative triplet sample for each positive triplet that focuses on vanishing gradients. This paper addressed the zero loss training problem due to the low-scored negative triplet by proposing the extended version of NSCaching, to generate the high-scored negative triplet utilized to increase the training performance. 
The proposed method experimented with semantic matching knowledge graph embedding models on the benchmark datasets, where the results show the success on all evaluation metrics.\",\"PeriodicalId\":46518,\"journal\":{\"name\":\"JOURNAL OF INFORMATION & OPTIMIZATION SCIENCES\",\"volume\":\"43 1\",\"pages\":\"2075 - 2087\"},\"PeriodicalIF\":1.1000,\"publicationDate\":\"2022-11-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"JOURNAL OF INFORMATION & OPTIMIZATION SCIENCES\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/02522667.2022.2133215\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"INFORMATION SCIENCE & LIBRARY SCIENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"JOURNAL OF INFORMATION & OPTIMIZATION SCIENCES","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/02522667.2022.2133215","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"INFORMATION SCIENCE & LIBRARY SCIENCE","Score":null,"Total":0}
Effective negative triplet sampling for knowledge graph embedding
Abstract: Knowledge graphs contain only positive triplet facts, so negative triplets must be generated carefully to train embedding models. Early approaches applied Uniform and Bernoulli sampling, but these suffer from the zero-loss problem during training, which degrades the performance of embedding models. More recently, generative adversarial techniques introduced dynamic negative sampling and achieved better performance by avoiding zero loss, at the cost of greater model complexity and more training parameters. NSCaching balances performance and complexity, generating a single negative triplet for each positive triplet while mitigating vanishing gradients. This paper addresses the zero-loss training problem caused by low-scored negative triplets by proposing an extended version of NSCaching that generates high-scored negative triplets to improve training performance. The proposed method was evaluated with semantic matching knowledge graph embedding models on benchmark datasets, where the results show improvement on all evaluation metrics.
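To make the baseline the abstract criticizes concrete, the following is a minimal sketch of classic uniform negative sampling: a positive triplet is corrupted by replacing its head or tail with a uniformly random entity, rejecting candidates that happen to be known positive facts. The entities, relation names, and function name here are hypothetical illustrations, not the paper's implementation; NSCaching and its extension replace this uniform choice with a cache of high-scored candidates.

```python
import random

def uniform_negative_sample(triplet, entities, known_triplets):
    """Corrupt the head or tail of a positive triplet uniformly at random.

    Illustrative sketch of the uniform sampling baseline; not the
    paper's NSCaching-based method.
    """
    h, r, t = triplet
    while True:
        if random.random() < 0.5:
            candidate = (random.choice(entities), r, t)  # corrupt head
        else:
            candidate = (h, r, random.choice(entities))  # corrupt tail
        # Reject candidates that are actually positive facts,
        # otherwise training would push true triplets apart.
        if candidate not in known_triplets:
            return candidate

# Toy knowledge graph with hypothetical entities and relations.
entities = ["Paris", "France", "Berlin", "Germany"]
positives = {("Paris", "capital_of", "France"),
             ("Berlin", "capital_of", "Germany")}

neg = uniform_negative_sample(("Paris", "capital_of", "France"),
                              entities, positives)
```

Because the replacement entity is drawn uniformly, most sampled negatives are trivially false and receive near-zero scores from a partially trained model, which is exactly the zero-loss (and vanishing-gradient) issue motivating score-aware schemes such as NSCaching.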