Caio Libânio Melo Jerônimo, L. Marinho, Claudio E.C. Campelo, Adriano Veloso, A. S. C. Melo
{"title":"基于主体性词汇的假新闻特征分析","authors":"Caio Libânio Melo Jerônimo, L. Marinho, Cclaudio E.C. Carmpelo, Adriano Veloso, A. S. C. Melo","doi":"10.26421/JDI1.4-2","DOIUrl":null,"url":null,"abstract":"While many works investigate spread patterns of fake news in social networks, we focus on the textual content. Instead of relying on syntactic representations of documents (aka Bag of Words) as many works do, we seek more robust representations that may better differentiate fake from legitimate news. We propose to consider the subjectivity of news under the assumption that the subjectivity levels of legitimate and fake news are significantly different. For computing the subjectivity level of news, we rely on a set subjectivity lexicons for both Brazilian Portuguese and English languages. We then build subjectivity feature vectors for each news article by calculating the Word Mover's Distance (WMD) between the news and these lexicons considering the embedding the news words lie in, in order to analyze and classify the documents. The results demonstrate that our method is robust, especially in scenarios where training and test domains are different.","PeriodicalId":232625,"journal":{"name":"J. Data Intell.","volume":"156 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Characterization of Fake News Based on Subjectivity Lexicons\",\"authors\":\"Caio Libânio Melo Jerônimo, L. Marinho, Cclaudio E.C. Carmpelo, Adriano Veloso, A. S. C. Melo\",\"doi\":\"10.26421/JDI1.4-2\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"While many works investigate spread patterns of fake news in social networks, we focus on the textual content. Instead of relying on syntactic representations of documents (aka Bag of Words) as many works do, we seek more robust representations that may better differentiate fake from legitimate news. We propose to consider the subjectivity of news under the assumption that the subjectivity levels of legitimate and fake news are significantly different. For computing the subjectivity level of news, we rely on a set subjectivity lexicons for both Brazilian Portuguese and English languages. We then build subjectivity feature vectors for each news article by calculating the Word Mover's Distance (WMD) between the news and these lexicons considering the embedding the news words lie in, in order to analyze and classify the documents. The results demonstrate that our method is robust, especially in scenarios where training and test domains are different.\",\"PeriodicalId\":232625,\"journal\":{\"name\":\"J. Data Intell.\",\"volume\":\"156 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"J. Data Intell.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.26421/JDI1.4-2\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"J. 
Data Intell.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.26421/JDI1.4-2","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Characterization of Fake News Based on Subjectivity Lexicons
While many works investigate the spread patterns of fake news in social networks, we focus on the textual content. Instead of relying on syntactic representations of documents (i.e., Bag of Words) as many works do, we seek more robust representations that may better differentiate fake from legitimate news. We propose to consider the subjectivity of news under the assumption that the subjectivity levels of legitimate and fake news differ significantly. To compute the subjectivity level of a news article, we rely on a set of subjectivity lexicons for both Brazilian Portuguese and English. We then build a subjectivity feature vector for each article by calculating the Word Mover's Distance (WMD) between the article and each of these lexicons in the embedding space in which the news words lie, and use these vectors to analyze and classify the documents. The results demonstrate that our method is robust, especially in scenarios where training and test domains differ.
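As a rough illustration of the feature-construction step described above, the sketch below computes one WMD value per lexicon for a tokenized article and stacks them into a feature vector. It is a minimal approximation, not the authors' exact pipeline: the embedding file and the lexicon filenames (`strong_subjective.txt`, `weak_subjective.txt`) are placeholders, and any word2vec-format vectors and subjectivity lexicons could be substituted.

```python
# Minimal sketch of WMD-based subjectivity features (assumed setup, not the paper's code).
from gensim.models import KeyedVectors


def load_lexicon(path):
    """Read a subjectivity lexicon with one word per line."""
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]


def subjectivity_features(news_tokens, lexicons, vectors):
    """One WMD value per lexicon -> subjectivity feature vector for an article."""
    return [vectors.wmdistance(news_tokens, lexicon) for lexicon in lexicons]


if __name__ == "__main__":
    # Hypothetical paths: pre-trained embeddings and two subjectivity lexicons.
    vectors = KeyedVectors.load_word2vec_format("embeddings.vec")
    lexicons = [
        load_lexicon("strong_subjective.txt"),
        load_lexicon("weak_subjective.txt"),
    ]

    article = "senator denies explosive report about secret deal".split()
    features = subjectivity_features(article, lexicons, vectors)
    print(features)  # smaller distance = article is closer to that lexicon
```

These per-lexicon distances can then be fed to any standard classifier to separate fake from legitimate news, which is consistent with the classification setting the abstract describes.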