Towards Better Evidence Extraction Methods for Fact-Checking Systems
Pedro Azevedo, Gil Rocha, Diego Esteves, Henrique Lopes Cardoso
Proceedings of the IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, 14 December 2021
DOI: 10.1145/3486622.3493930
Abstract
Given the current spread of misinformation, fact-checking frameworks have never been so critical. Unfortunately, the performance of automated fact-checking systems is still poor due to the complexity of the task. In this paper, we present an ablation study of a framework submitted to the FEVER 1.0 challenge. Based on our findings, we explore how triple-based information retrieval, coreference resolution, and recent language model representations can impact the performance of each subtask. We show the importance of recall and precision in the retrieval of documents and sentences that can be provided to justify the veracity of a given claim. We reach state-of-the-art results in the Document Retrieval task and show promising results when using coreference resolution to improve the Sentence Retrieval task.
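To make the pipeline described above concrete, the following is a minimal sketch of a FEVER-style Sentence Retrieval step in which candidate sentences from a retrieved document are first made self-contained and then ranked against the claim. This is not the authors' implementation: the resolve_coreferences function is a hypothetical placeholder (a real system would use a trained coreference resolver), and the ranking here uses plain TF-IDF cosine similarity rather than the language-model representations studied in the paper.

```python
# Illustrative sketch of a FEVER-style sentence-retrieval step.
# NOTE: not the paper's implementation; the coreference step is a
# hypothetical placeholder and ranking uses plain TF-IDF similarity.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def resolve_coreferences(sentences, context_title):
    """Hypothetical stand-in for a coreference resolver: replace bare
    pronouns with the article title so each sentence is self-contained."""
    pronouns = {"he", "she", "it", "they"}
    resolved = []
    for sent in sentences:
        tokens = [context_title if t.lower() in pronouns else t
                  for t in sent.split()]
        resolved.append(" ".join(tokens))
    return resolved


def retrieve_evidence(claim, sentences, context_title, top_k=3):
    """Rank candidate sentences by TF-IDF cosine similarity to the claim."""
    candidates = resolve_coreferences(sentences, context_title)
    vectorizer = TfidfVectorizer().fit([claim] + candidates)
    claim_vec = vectorizer.transform([claim])
    sent_vecs = vectorizer.transform(candidates)
    scores = cosine_similarity(claim_vec, sent_vecs)[0]
    ranked = sorted(zip(sentences, scores), key=lambda p: p[1], reverse=True)
    return ranked[:top_k]


if __name__ == "__main__":
    claim = "Alan Turing worked at Bletchley Park."
    doc_sentences = [
        "Alan Turing was an English mathematician.",
        "He worked at Bletchley Park during the Second World War.",
        "The park is located in Milton Keynes.",
    ]
    for sentence, score in retrieve_evidence(claim, doc_sentences, "Alan Turing"):
        print(f"{score:.3f}  {sentence}")
```

In this toy example, resolving "He" to "Alan Turing" is what allows the second sentence to be ranked as relevant evidence for the claim, which is the intuition behind using coreference resolution to improve Sentence Retrieval.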