Weakly Supervised Entity Alignment with Positional Inspiration
Wei Tang, Fenglong Su, Haifeng Sun, Q. Qi, Jingyu Wang, Shimin Tao, Hao Yang
Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining (WSDM '23)
DOI: 10.1145/3539597.3570394
Published: 2023-02-27
Citations: 2
Abstract
The current success of entity alignment (EA) still relies mainly on large-scale labeled anchor links. However, carefully annotating anchor links remains labor- and resource-intensive. As a result, a growing number of works based on active learning, few-shot learning, or other deep learning techniques have been developed to address the performance bottleneck caused by a lack of labeled data. These works focus either on strategies for choosing more informative data to label or on strategies for model training, while it remains unclear why existing popular EA models (e.g., GNN-based models) fail at the EA task when labeled data is limited. To address this issue, this paper analyzes the problem of weakly supervised EA from the perspective of model design and proposes a novel weakly supervised learning framework, Position Enhanced Entity Alignment (PEEA). Besides absorbing structural and relational information, PEEA aims to strengthen the connections between far-away entities and labeled ones by incorporating positional information into representation learning with a Position Attention Layer (PAL). To fully utilize the limited anchor links, we further introduce a novel position encoding method that considers both anchor links and relational information from a global view. The proposed position encoding is fed into PEEA as additional entity features. Extensive experiments on public datasets demonstrate the effectiveness of PEEA.
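The abstract does not spell out how the position encoding is computed, so the following is only a minimal sketch of the general idea of anchor-aware positional features: describe each entity by its graph proximity to the few labeled anchor entities and append that vector as extra entity features. The toy graph, the BFS hop-distance measure, and the 1/(1+d) decay below are illustrative assumptions, not the encoding PEEA actually uses.

```python
# Sketch only: anchor-based position encoding for a KG, assuming an undirected
# entity graph and hop-distance proximity to labeled anchors (not PEEA's method).
from collections import deque
import numpy as np

def bfs_distances(adj, source, num_nodes):
    """Unweighted shortest-path distances from `source`; unreachable -> inf."""
    dist = [float("inf")] * num_nodes
    dist[source] = 0
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if dist[v] == float("inf"):
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def position_encoding(edges, num_nodes, anchors):
    """One column per anchor: 1 / (1 + hop distance), 0 if unreachable."""
    adj = [[] for _ in range(num_nodes)]
    for u, v in edges:                      # treat the KG as an undirected graph
        adj[u].append(v)
        adj[v].append(u)
    cols = []
    for a in anchors:
        d = bfs_distances(adj, a, num_nodes)
        cols.append([0.0 if x == float("inf") else 1.0 / (1.0 + x) for x in d])
    return np.array(cols).T                 # shape: (num_nodes, num_anchors)

# Toy usage: 6 entities in a chain, 2 labeled anchors (entities 0 and 5).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
pe = position_encoding(edges, num_nodes=6, anchors=[0, 5])
print(pe)   # each row is an entity's positional feature vector
```

In a PEEA-style pipeline, such positional features would be concatenated with structural and relational embeddings and consumed by the attention-based alignment model, so that entities far from any anchor still carry an explicit signal about where they sit relative to the labeled links.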