Fake news detection using machine learning: an adversarial collaboration approach

IF 5.9 · JCR Q1 (BUSINESS) · CAS Zone 3 (Management)
Karen M. DSouza, Aaron M. French
Internet Research · DOI: 10.1108/intr-03-2022-0176 · Published: 2023-10-11 · Citations: 0

Abstract

Purpose

Purveyors of fake news perpetuate information that can harm society, including businesses. Social media's reach quickly amplifies the distortions of fake news. Research has not yet fully explored the mechanisms of such adversarial behavior or the adversarial machine learning techniques that might be deployed to detect fake news. Debiasing techniques are also explored to combat the generation of fake news from adversarial data. The purpose of this paper is to present the challenges and opportunities in fake news detection.

Design/methodology/approach

First, this paper provides an overview of adversarial behaviors and current machine learning techniques. Next, it describes the use of long short-term memory (LSTM) to identify fake news in a corpus of articles. Finally, it presents a novel adversarial behavior approach to protect targeted business datasets from attacks.
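The paper describes its LSTM detector only at this level of detail. As a rough illustration of the recurrence such a classifier builds on, here is a single LSTM cell step in NumPy; the dimensions, weights, and gate ordering are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W: (4H, D), U: (4H, H), b: (4H,).
    Gate order (an assumption for this sketch): input, forget, candidate, output."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b      # pre-activations for all four gates at once
    i = sigmoid(z[0:H])             # input gate: how much new information to admit
    f = sigmoid(z[H:2*H])           # forget gate: how much old cell state to keep
    g = np.tanh(z[2*H:3*H])         # candidate cell state
    o = sigmoid(z[3*H:4*H])         # output gate
    c = f * c_prev + i * g          # new cell state
    h = o * np.tanh(c)              # new hidden state
    return h, c

# Run a toy sequence of word vectors through the cell; in a fake-news
# classifier the final hidden state would feed a dense sigmoid layer
# producing the real/fake decision.
rng = np.random.default_rng(0)
D, H, T = 8, 4, 5                   # embedding dim, hidden dim, sequence length
W = rng.normal(size=(4 * H, D)) * 0.1
U = rng.normal(size=(4 * H, H)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(T):
    h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
print(h.shape)  # (4,)
```

The gating structure is what lets the model carry signal across long spans of an article, which is why LSTMs are a common baseline for document-level classification.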

Findings

This research highlights the need for a corpus of fake news that can be used to evaluate classification methods. Adversarial debiasing using IBM's Artificial Intelligence Fairness 360 (AIF360) toolkit can mitigate the disparate impact of unfavorable characteristics in a dataset. Debiasing also demonstrates significant potential to reduce fake news generation based on the inherent bias in the data. These findings provide avenues for further research on adversarial collaboration and robust information systems.
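Disparate impact, the fairness metric the findings refer to, is the ratio of favorable-outcome rates between the unprivileged and privileged groups. AIF360 exposes this via its metric classes; the hand-rolled sketch below (with made-up data and an illustrative group encoding) shows what the number means:

```python
import numpy as np

def disparate_impact(y_pred, protected):
    """P(y=1 | unprivileged) / P(y=1 | privileged).
    1.0 is parity; the common '80% rule' flags values below 0.8.
    Encoding assumed for this sketch: 1 = privileged, 0 = unprivileged."""
    y_pred = np.asarray(y_pred)
    protected = np.asarray(protected)
    rate_unpriv = y_pred[protected == 0].mean()
    rate_priv = y_pred[protected == 1].mean()
    return rate_unpriv / rate_priv

# Toy example: 3/5 favorable outcomes in the unprivileged group
# versus 4/5 in the privileged group.
y = [1, 1, 1, 0, 0, 1, 1, 1, 1, 0]
g = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(disparate_impact(y, g))  # 0.75 -> below the 0.8 threshold
```

Adversarial debiasing trains a classifier while an adversary tries to predict the protected attribute from its outputs, pushing metrics like this one toward parity.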

Originality/value

Adversarial debiasing of datasets demonstrates that by reducing bias related to protected attributes, such as sex, race and age, businesses can reduce the potential for exploitation to generate fake news through adversarial data.

Source journal: Internet Research (Engineering & Technology – Telecommunications)
CiteScore: 11.20
Self-citation rate: 10.20%
Articles per year: 85
Review time: >12 weeks
Journal description: This wide-ranging interdisciplinary journal looks at the social, ethical, economic and political implications of the internet. Recent issues have focused on online and mobile gaming, the sharing economy, and the dark side of social media.