Transformer-based Temporal Knowledge Graph Completion
Simin Hu, Boyue Wang, Jiapu Wang, Yujian Ma, Lan Zhao
2023 IEEE 3rd International Conference on Computer Communication and Artificial Intelligence (CCAI), May 26, 2023
DOI: 10.1109/CCAI57533.2023.10201286
Abstract: A temporal knowledge graph is a structured semantic knowledge base that stores facts as quadruples whose validity changes over time. Inferring missing facts, i.e., temporal knowledge graph completion (TKGC), is one of the main challenges posed by temporal knowledge graphs. The Transformer has strong modeling ability across a variety of domains because its self-attention mechanism can capture global dependencies within input sequences, yet few studies have explored Transformer encoders for TKGC tasks. To address this gap, we propose a novel end-to-end TKGC model named Transbe-TuckERTT, which adopts an encoder-decoder architecture. Specifically, the proposed model employs a Transformer-based encoder to facilitate interaction among the entity, relation, and temporal information within a quadruple, generating highly expressive embeddings. The TuckERTT decoder then uses the encoded embeddings to predict missing facts in the knowledge graph. Experimental results demonstrate that the proposed model outperforms several state-of-the-art TKGC methods on three public benchmark datasets, verifying the effectiveness of the self-attention mechanism in the Transformer-based encoder for capturing dependencies in temporal knowledge graphs.
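To make the encoder-decoder idea in the abstract concrete, below is a minimal PyTorch sketch: a Transformer encoder lets the subject, relation, and timestamp of a query (s, r, ?, t) attend to one another, and a Tucker-style decoder scores all candidate object entities. This is an illustrative reconstruction, not the authors' code: the class name, dimensions, and the decoder's exact contraction are assumptions, since the paper's TuckERTT scoring function is not given here.

```python
# Minimal sketch of a Transformer encoder + Tucker-style temporal decoder for TKGC.
# All names, sizes, and the decoder form are illustrative assumptions.
import torch
import torch.nn as nn

class TransformerTKGC(nn.Module):
    def __init__(self, n_entities, n_relations, n_timestamps,
                 dim=200, n_heads=4, n_layers=2):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        self.time = nn.Embedding(n_timestamps, dim)
        # Self-attention over the three query components produces
        # context-aware (mutually conditioned) embeddings.
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Core tensor of a generic Tucker-style decoder; the paper's TuckERTT
        # decoder may factorize the temporal mode differently.
        self.core = nn.Parameter(torch.randn(dim, dim, dim) * 0.01)

    def forward(self, s, r, t):
        # Stack subject, relation, and timestamp embeddings as a length-3 sequence.
        seq = torch.stack([self.ent(s), self.rel(r), self.time(t)], dim=1)
        h = self.encoder(seq)                       # (batch, 3, dim)
        hs, hr, ht = h[:, 0], h[:, 1], h[:, 2]
        # Contract the core tensor with encoded subject and relation, modulate
        # by encoded time, then score every candidate object entity.
        w = torch.einsum('abc,ia,ib->ic', self.core, hs, hr) * ht  # (batch, dim)
        return w @ self.ent.weight.t()              # (batch, n_entities) logits

# Toy usage with placeholder vocabulary sizes.
model = TransformerTKGC(n_entities=7128, n_relations=230, n_timestamps=365)
scores = model(torch.tensor([0]), torch.tensor([1]), torch.tensor([2]))
print(scores.shape)  # torch.Size([1, 7128])
```

The design point the abstract emphasizes is visible in the sketch: unlike decoders that consume independently learned embeddings, the encoder lets entity, relation, and time representations interact before scoring, so the decoder operates on quadruple-aware inputs.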