{"title":"将场景知识整合到事件表示的统一微调体系结构中","authors":"Jianming Zheng, Fei Cai, Honghui Chen","doi":"10.1145/3397271.3401173","DOIUrl":null,"url":null,"abstract":"Given an occurred event, human can easily predict the next event or reason the preceding event, yet which is difficult for machine to perform such event reasoning. Event representation bridges the connection and targets to model the process of event reasoning as a machine-readable format, which then can support a wide range of applications in information retrieval, e.g., question answering and information extraction. Existing work mainly resorts to a joint training to integrate all levels of training loss in event chains by a simple loss summation, which is easily trapped into a local optimum. In addition, the scenario knowledge in event chains is not well investigated for event representation. In this paper, we propose a unified fine-tuning architecture, incorporated with scenario knowledge for event representation, i.e., UniFA-S, which mainly consists of a unified fine-tuning architecture (UniFA) and a scenario-level variational auto-encoder (S-VAE). In detail, UniFA employs a multi-step fine-tuning to integrate all levels of training and S-VAE applies a stochastic variable to implicitly represent the scenario-level knowledge. We evaluate our proposal from two aspects, i.e., the representation and inference abilities. For the representation ability, our ensemble model UniFA-S can beat state-of-the-art baselines for two similarity tasks. For the inference ability, UniFA-S can outperform the best baseline, achieving 4.1%-8.2% improvements in terms of accuracy for various inference tasks.","PeriodicalId":252050,"journal":{"name":"Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2020-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"18","resultStr":"{\"title\":\"Incorporating Scenario Knowledge into A Unified Fine-tuning Architecture for Event Representation\",\"authors\":\"Jianming Zheng, Fei Cai, Honghui Chen\",\"doi\":\"10.1145/3397271.3401173\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Given an occurred event, human can easily predict the next event or reason the preceding event, yet which is difficult for machine to perform such event reasoning. Event representation bridges the connection and targets to model the process of event reasoning as a machine-readable format, which then can support a wide range of applications in information retrieval, e.g., question answering and information extraction. Existing work mainly resorts to a joint training to integrate all levels of training loss in event chains by a simple loss summation, which is easily trapped into a local optimum. In addition, the scenario knowledge in event chains is not well investigated for event representation. In this paper, we propose a unified fine-tuning architecture, incorporated with scenario knowledge for event representation, i.e., UniFA-S, which mainly consists of a unified fine-tuning architecture (UniFA) and a scenario-level variational auto-encoder (S-VAE). In detail, UniFA employs a multi-step fine-tuning to integrate all levels of training and S-VAE applies a stochastic variable to implicitly represent the scenario-level knowledge. We evaluate our proposal from two aspects, i.e., the representation and inference abilities. 
For the representation ability, our ensemble model UniFA-S can beat state-of-the-art baselines for two similarity tasks. For the inference ability, UniFA-S can outperform the best baseline, achieving 4.1%-8.2% improvements in terms of accuracy for various inference tasks.\",\"PeriodicalId\":252050,\"journal\":{\"name\":\"Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-07-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"18\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3397271.3401173\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3397271.3401173","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Incorporating Scenario Knowledge into A Unified Fine-tuning Architecture for Event Representation
Given an event that has occurred, humans can easily predict the next event or reason about the preceding one, yet such event reasoning is difficult for machines. Event representation bridges this gap: it models the process of event reasoning in a machine-readable format, which can then support a wide range of applications in information retrieval, e.g., question answering and information extraction. Existing work mainly resorts to joint training that integrates all levels of training loss in event chains through a simple loss summation, which is easily trapped in a local optimum. In addition, the scenario knowledge in event chains has not been well investigated for event representation. In this paper, we propose a unified fine-tuning architecture incorporating scenario knowledge for event representation, i.e., UniFA-S, which mainly consists of a unified fine-tuning architecture (UniFA) and a scenario-level variational auto-encoder (S-VAE). Specifically, UniFA employs multi-step fine-tuning to integrate all levels of training, and S-VAE applies a stochastic variable to implicitly represent scenario-level knowledge. We evaluate our proposal from two aspects, i.e., representation ability and inference ability. For representation, our ensemble model UniFA-S beats state-of-the-art baselines on two similarity tasks. For inference, UniFA-S outperforms the best baseline, achieving 4.1%-8.2% accuracy improvements on various inference tasks.
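The abstract names two mechanisms: multi-step fine-tuning over the levels of training loss instead of a single joint summation, and a scenario-level VAE that encodes scenario knowledge as a stochastic variable. The sketch below is a minimal, hypothetical illustration of these two ideas only; the module names (ScenarioVAE, multi_step_finetune), dimensions, pooling choice, and losses are illustrative assumptions and not the authors' implementation.

```python
# Minimal sketch (not the paper's code): a VAE-style scenario encoder with a
# stochastic latent variable, plus a staged (multi-step) fine-tuning loop that
# optimizes one level of loss at a time rather than summing all losses jointly.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScenarioVAE(nn.Module):
    """Encodes a chain of event embeddings into a stochastic scenario variable z."""

    def __init__(self, event_dim: int = 128, latent_dim: int = 32):
        super().__init__()
        self.to_mu = nn.Linear(event_dim, latent_dim)      # mean of q(z | chain)
        self.to_logvar = nn.Linear(event_dim, latent_dim)  # log-variance of q(z | chain)
        self.decoder = nn.Linear(latent_dim, event_dim)    # reconstructs the chain summary

    def forward(self, event_chain: torch.Tensor):
        # event_chain: (batch, num_events, event_dim); pool the chain into one vector.
        chain_repr = event_chain.mean(dim=1)
        mu, logvar = self.to_mu(chain_repr), self.to_logvar(chain_repr)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        recon = self.decoder(z)
        recon_loss = F.mse_loss(recon, chain_repr)
        kl_loss = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return z, recon_loss + kl_loss


def multi_step_finetune(model, stages, epochs_per_stage=1):
    """Fine-tune one level of loss at a time (e.g., event -> chain -> scenario),
    instead of optimizing a single joint summation of all losses."""
    for loss_fn in stages:  # each stage supplies its own loss function
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
        for _ in range(epochs_per_stage):
            optimizer.zero_grad()
            loss = loss_fn(model)
            loss.backward()
            optimizer.step()


if __name__ == "__main__":
    vae = ScenarioVAE()
    dummy_chain = torch.randn(4, 5, 128)  # 4 chains of 5 events each (toy data)
    stages = [
        lambda m: m(dummy_chain)[1],  # scenario-level VAE loss
        # event-level and chain-level losses would be added analogously
    ]
    multi_step_finetune(vae, stages)
```

The staged loop is what distinguishes the multi-step scheme from the joint-summation baseline criticized in the abstract: each level of supervision gets its own optimization pass rather than competing inside one summed objective.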