{"title":"Veritas Annotator: Discovering the Origin of a Rumour","authors":"Lucas Azevedo, Mohamed Moustafa","doi":"10.18653/v1/D19-6614","DOIUrl":"https://doi.org/10.18653/v1/D19-6614","url":null,"abstract":"Defined as the intentional or unintentionalspread of false information (K et al., 2019)through context and/or content manipulation,fake news has become one of the most seriousproblems associated with online information(Waldrop, 2017). Consequently, it comes asno surprise that Fake News Detection hasbecome one of the major foci of variousfields of machine learning and while machinelearning models have allowed individualsand companies to automate decision-basedprocesses that were once thought to be onlydoable by humans, it is no secret that thereal-life applications of such models are notviable without the existence of an adequatetraining dataset. In this paper we describethe Veritas Annotator, a web application formanually identifying the origin of a rumour.These rumours, often referred as claims,were previously checked for validity byFact-Checking Agencies.","PeriodicalId":153447,"journal":{"name":"Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133517095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Relation Extraction among Multiple Entities Using a Dual Pointer Network with a Multi-Head Attention Mechanism","authors":"Seongsik Park, H. Kim","doi":"10.18653/v1/D19-6608","DOIUrl":"https://doi.org/10.18653/v1/D19-6608","url":null,"abstract":"Many previous studies on relation extrac-tion have been focused on finding only one relation between two entities in a single sentence. However, we can easily find the fact that multiple entities exist in a single sentence and the entities form multiple relations. To resolve this prob-lem, we propose a relation extraction model based on a dual pointer network with a multi-head attention mechanism. The proposed model finds n-to-1 subject-object relations by using a forward de-coder called an object decoder. Then, it finds 1-to-n subject-object relations by using a backward decoder called a sub-ject decoder. In the experiments with the ACE-05 dataset and the NYT dataset, the proposed model achieved the state-of-the-art performances (F1-score of 80.5% in the ACE-05 dataset, F1-score of 78.3% in the NYT dataset)","PeriodicalId":153447,"journal":{"name":"Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121010652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Team GPLSI. Approach for automated fact checking","authors":"Aimée Alonso-Reina, Robiert Sepúlveda-Torres, E. Saquete, M. Palomar","doi":"10.18653/v1/D19-6617","DOIUrl":"https://doi.org/10.18653/v1/D19-6617","url":null,"abstract":"Fever Shared 2.0 Task is a challenge meant for developing automated fact checking systems. Our approach for the Fever 2.0 is based on a previous proposal developed by Team Athene UKP TU Darmstadt. Our proposal modifies the sentence retrieval phase, using statement extraction and representation in the form of triplets (subject, object, action). Triplets are extracted from the claim and compare to triplets extracted from Wikipedia articles using semantic similarity. Our results are satisfactory but there is room for improvement.","PeriodicalId":153447,"journal":{"name":"Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126938033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive Evidence Detection: train state-of-the-art model out-of-domain or simple model interactively?","authors":"C. Stahlhut","doi":"10.18653/v1/D19-6613","DOIUrl":"https://doi.org/10.18653/v1/D19-6613","url":null,"abstract":"Finding evidence is of vital importance in research as well as fact checking and an evidence detection method would be useful in speeding up this process. However, when addressing a new topic there is no training data and there are two approaches to get started. One could use large amounts of out-of-domain data to train a state-of-the-art method, or to use the small data that a person creates while working on the topic. In this paper, we address this problem in two steps. First, by simulating users who read source documents and label sentences they can use as evidence, thereby creating small amounts of training data for an interactively trained evidence detection model; and second, by comparing such an interactively trained model against a pre-trained model that has been trained on large out-of-domain data. We found that an interactively trained model not only often out-performs a state-of-the-art model but also requires significantly lower amounts of computational resources. Therefore, especially when computational resources are scarce, e.g. no GPU available, training a smaller model on the fly is preferable to training a well generalising but resource hungry out-of-domain model.","PeriodicalId":153447,"journal":{"name":"Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133564047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neural Multi-Task Learning for Stance Prediction","authors":"Wei Fang, Moin Nadeem, Mitra Mohtarami, James R. Glass","doi":"10.18653/v1/D19-6603","DOIUrl":"https://doi.org/10.18653/v1/D19-6603","url":null,"abstract":"We present a multi-task learning model that leverages large amount of textual information from existing datasets to improve stance prediction. In particular, we utilize multiple NLP tasks under both unsupervised and supervised settings for the target stance prediction task. Our model obtains state-of-the-art performance on a public benchmark dataset, Fake News Challenge, outperforming current approaches by a wide margin.","PeriodicalId":153447,"journal":{"name":"Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115971030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hybrid Models for Aspects Extraction without Labelled Dataset","authors":"W. Khong, Lay-Ki Soon, Hui-Ngo Goh","doi":"10.18653/v1/D19-6611","DOIUrl":"https://doi.org/10.18653/v1/D19-6611","url":null,"abstract":"One of the important tasks in opinion mining is to extract aspects of the opinion target. Aspects are features or characteristics of the opinion target that are being reviewed, which can be categorised into explicit and implicit aspects. Extracting aspects from opinions is essential in order to ensure accurate information about certain attributes of an opinion target is retrieved. For instance, a professional camera receives a positive feedback in terms of its functionalities in a review, but its overly high price receives negative feedback. Most of the existing solutions focus on explicit aspects. However, sentences in reviews normally do not state the aspects explicitly. In this research, two hybrid models are proposed to identify and extract both explicit and implicit aspects, namely TDM-DC and TDM-TED. The proposed models combine topic modelling and dictionary-based approach. The models are unsupervised as they do not require any labelled dataset. The experimental results show that TDM-DC achieves F1-measure of 58.70%, where it outperforms both the baseline topic model and dictionary-based approach. In comparison to other existing unsupervised techniques, the proposed models are able to achieve higher F1-measure by approximately 3%. Although the supervised techniques perform slightly better, the proposed models are domain-independent, and hence more versatile.","PeriodicalId":153447,"journal":{"name":"Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER)","volume":"30 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120885272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The FEVER2.0 Shared Task","authors":"James Thorne, Andreas Vlachos, O. Cocarascu, Christos Christodoulopoulos, Arpit Mittal","doi":"10.18653/v1/D19-6601","DOIUrl":"https://doi.org/10.18653/v1/D19-6601","url":null,"abstract":"We present the results of the second Fact Extraction and VERification (FEVER2.0) Shared Task. The task challenged participants to both build systems to verify factoid claims using evidence retrieved from Wikipedia and to generate adversarial attacks against other participant’s systems. The shared task had three phases: building, breaking and fixing. There were 8 systems in the builder’s round, three of which were new qualifying submissions for this shared task, and 5 adversaries generated instances designed to induce classification errors and one builder submitted a fixed system which had higher FEVER score and resilience than their first submission. All but one newly submitted systems attained FEVER scores higher than the best performing system from the first shared task and under adversarial evaluation, all systems exhibited losses in FEVER score. There was a great variety in adversarial attack types as well as the techniques used to generate the attacks, In this paper, we present the results of the shared task and a summary of the systems, highlighting commonalities and innovations among participating systems.","PeriodicalId":153447,"journal":{"name":"Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129605811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scalable Knowledge Graph Construction from Text Collections","authors":"R. Clancy, I. Ilyas, Jimmy J. Lin","doi":"10.18653/v1/D19-6607","DOIUrl":"https://doi.org/10.18653/v1/D19-6607","url":null,"abstract":"We present a scalable, open-source platform that “distills” a potentially large text collection into a knowledge graph. Our platform takes documents stored in Apache Solr and scales out the Stanford CoreNLP toolkit via Apache Spark integration to extract mentions and relations that are then ingested into the Neo4j graph database. The raw knowledge graph is then enriched with facts extracted from an external knowledge graph. The complete product can be manipulated by various applications using Neo4j’s native Cypher query language: We present a subgraph-matching approach to align extracted relations with external facts and show that fact verification, locating textual support for asserted facts, detecting inconsistent and missing facts, and extracting distantly-supervised training data can all be performed within the same framework.","PeriodicalId":153447,"journal":{"name":"Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER)","volume":"13 1-4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120963923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FEVER Breaker’s Run of Team NbAuzDrLqg","authors":"Youngwoo Kim, J. Allan","doi":"10.18653/v1/D19-6615","DOIUrl":"https://doi.org/10.18653/v1/D19-6615","url":null,"abstract":"We describe our submission for the Breaker phase of the second Fact Extraction and VERification (FEVER) Shared Task. Our adversarial data can be explained by two perspectives. First, we aimed at testing model’s ability to retrieve evidence, when appropriate query terms could not be easily generated from the claim. Second, we test model’s ability to precisely understand the implications of the texts, which we expect to be rare in FEVER 1.0 dataset. Overall, we suggested six types of adversarial attacks. The evaluation on the submitted systems showed that the systems were only able get both the evidence and label correct in 20% of the data. We also demonstrate our adversarial run analysis in the data development process.","PeriodicalId":153447,"journal":{"name":"Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129297957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fact Checking or Psycholinguistics: How to Distinguish Fake and True Claims?","authors":"A. Wawer, Grzegorz Wojdyga, Justyna Sarzyńska-Wawer","doi":"10.18653/v1/D19-6602","DOIUrl":"https://doi.org/10.18653/v1/D19-6602","url":null,"abstract":"The goal of our paper is to compare psycholinguistic text features with fact checking approaches to distinguish lies from true statements. We examine both methods using data from a large ongoing study on deception and deception detection covering a mixture of factual and opinionated topics that polarize public opinion. We conclude that fact checking approaches based on Wikipedia are too limited for this task, as only a few percent of sentences from our study has enough evidence to become supported or refuted. Psycholinguistic features turn out to outperform both fact checking and human baselines, but the accuracy is not high. Overall, it appears that deception detection applicable to less-than-obvious topics is a difficult task and a problem to be solved.","PeriodicalId":153447,"journal":{"name":"Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128352650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}