CrossBERT: A Triplet Neural Architecture for Ranking Entity Properties
Jarana Manotumruksa, Jeffrey Dalton, E. Meij, Emine Yilmaz
{"title":"CrossBERT:一种用于实体属性排序的三重神经结构","authors":"Jarana Manotumruksa, Jeffrey Dalton, E. Meij, Emine Yilmaz","doi":"10.1145/3397271.3401265","DOIUrl":null,"url":null,"abstract":"Task-based Virtual Personal Assistants (VPAs) such as the Google Assistant, Alexa, and Siri are increasingly being adopted for a wide variety of tasks. These tasks are grounded in real-world entities and actions (e.g., book a hotel, organise a conference, or requesting funds). In this work we tackle the task of automatically constructing actionable knowledge graphs in response to a user query in order to support a wider variety of increasingly complex assistant tasks. We frame this as an entity property ranking task given a user query with annotated properties. We propose a new method for property ranking, CrossBERT. CrossBERT builds on the Bidirectional Encoder Representations from Transformers (BERT) and creates a new triplet network structure on cross query-property pairs that is used to rank properties. We also study the impact of using external evidence for query entities from textual entity descriptions. We perform experiments on two standard benchmark collections, the NTCIR-13 Actionable Knowledge Graph Generation (AKGG) task and Entity Property Identification (EPI) task. The results demonstrate that CrossBERT significantly outperforms the best performing runs from AKGG and EPI, as well as previous state-of-the-art BERT-based models. In particular, CrossBERT significantly improves Recall and NDCG by approximately 2-12% over the BERT models across the two used datasets.","PeriodicalId":252050,"journal":{"name":"Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2020-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"CrossBERT: A Triplet Neural Architecture for Ranking Entity Properties\",\"authors\":\"Jarana Manotumruksa, Jeffrey Dalton, E. Meij, Emine Yilmaz\",\"doi\":\"10.1145/3397271.3401265\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Task-based Virtual Personal Assistants (VPAs) such as the Google Assistant, Alexa, and Siri are increasingly being adopted for a wide variety of tasks. These tasks are grounded in real-world entities and actions (e.g., book a hotel, organise a conference, or requesting funds). In this work we tackle the task of automatically constructing actionable knowledge graphs in response to a user query in order to support a wider variety of increasingly complex assistant tasks. We frame this as an entity property ranking task given a user query with annotated properties. We propose a new method for property ranking, CrossBERT. CrossBERT builds on the Bidirectional Encoder Representations from Transformers (BERT) and creates a new triplet network structure on cross query-property pairs that is used to rank properties. We also study the impact of using external evidence for query entities from textual entity descriptions. We perform experiments on two standard benchmark collections, the NTCIR-13 Actionable Knowledge Graph Generation (AKGG) task and Entity Property Identification (EPI) task. The results demonstrate that CrossBERT significantly outperforms the best performing runs from AKGG and EPI, as well as previous state-of-the-art BERT-based models. 
In particular, CrossBERT significantly improves Recall and NDCG by approximately 2-12% over the BERT models across the two used datasets.\",\"PeriodicalId\":252050,\"journal\":{\"name\":\"Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-07-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3397271.3401265\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3397271.3401265","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Task-based Virtual Personal Assistants (VPAs) such as the Google Assistant, Alexa, and Siri are increasingly being adopted for a wide variety of tasks. These tasks are grounded in real-world entities and actions (e.g., book a hotel, organise a conference, or request funds). In this work we tackle the task of automatically constructing actionable knowledge graphs in response to a user query, in order to support a wider variety of increasingly complex assistant tasks. We frame this as an entity property ranking task, given a user query with annotated properties. We propose a new method for property ranking, CrossBERT. CrossBERT builds on Bidirectional Encoder Representations from Transformers (BERT) and creates a new triplet network structure over cross query-property pairs that is used to rank properties. We also study the impact of using external evidence for query entities drawn from textual entity descriptions. We perform experiments on two standard benchmark collections, the NTCIR-13 Actionable Knowledge Graph Generation (AKGG) task and the Entity Property Identification (EPI) task. The results demonstrate that CrossBERT significantly outperforms the best-performing runs from AKGG and EPI, as well as previous state-of-the-art BERT-based models. In particular, CrossBERT significantly improves Recall and NDCG by approximately 2-12% over the BERT models across the two datasets.
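The abstract does not include code, but a minimal sketch may help clarify the architecture it describes: a shared BERT cross-encoder that scores concatenated query-property pairs, trained in a triplet fashion so a relevant property outscores an irrelevant one. This sketch assumes PyTorch and the HuggingFace transformers library; the class name, the example query and property labels, and the margin value are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of a CrossBERT-style triplet cross-encoder.
# Assumes: pip install torch transformers. Names are hypothetical.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class TripletCrossEncoder(nn.Module):
    """Scores (query, property) pairs with a shared BERT cross-encoder.

    In training, a query is paired with a relevant and a non-relevant
    property; a margin loss pushes the relevant pair's score higher.
    """

    def __init__(self, model_name: str = "bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        # Linear head mapping the [CLS] representation to a scalar score.
        self.scorer = nn.Linear(self.encoder.config.hidden_size, 1)

    def score(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] token representation
        return self.scorer(cls).squeeze(-1)

    def forward(self, pos_inputs, neg_inputs):
        # Shared weights: both triplet branches use the same encoder.
        return self.score(**pos_inputs), self.score(**neg_inputs)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TripletCrossEncoder()

query = "book a hotel in London"
pos_prop, neg_prop = "check-in time", "CEO"  # hypothetical property labels

# Cross encoding: query and property are packed into one input sequence,
# so BERT attends across both ("cross query-property pairs").
pos = tokenizer(query, pos_prop, return_tensors="pt", truncation=True)
neg = tokenizer(query, neg_prop, return_tensors="pt", truncation=True)

pos_score, neg_score = model(
    {"input_ids": pos["input_ids"], "attention_mask": pos["attention_mask"]},
    {"input_ids": neg["input_ids"], "attention_mask": neg["attention_mask"]},
)

# Margin ranking loss: the relevant property should outscore the other.
loss = nn.MarginRankingLoss(margin=1.0)(
    pos_score, neg_score, torch.ones_like(pos_score)
)
```

At inference time, ranking reduces to scoring every candidate (query, property) pair with the shared encoder and sorting by score; the triplet structure matters only during training. The external evidence from textual entity descriptions mentioned in the abstract could, under the same assumptions, be incorporated by concatenating the description to the query text before tokenization.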