Do Machines Replicate Humans? Toward a Unified Understanding of Radicalizing Content on the Open Social Web.

Impact Factor: 4.1 · CAS Region 1 (Literature) · JCR Q1, Communication
Policy and Internet · Published: 2020-03-01 (Epub: 2019-09-26) · DOI: 10.1002/poi3.223
Margeret Hall, Michael Logan, Gina S Ligon, Douglas C Derrick
{"title":"Do Machines Replicate Humans? Toward a Unified Understanding of Radicalizing Content on the Open Social Web.","authors":"Margeret Hall, Michael Logan, Gina S Ligon, Douglas C Derrick","doi":"10.1002/poi3.223","DOIUrl":null,"url":null,"abstract":"<p><p>The advent of the Internet inadvertently augmented the functioning and success of violent extremist organizations. Terrorist organizations like the Islamic State in Iraq and Syria (ISIS) use the Internet to project their message to a global audience. The majority of research and practice on web-based terrorist propaganda uses human coders to classify content, raising serious concerns such as burnout, mental stress, and reliability of the coded data. More recently, technology platforms and researchers have started to examine the online content using automated classification procedures. However, there are questions about the robustness of automated procedures, given insufficient research comparing and contextualizing the difference between human and machine coding. This article compares output of three text analytics packages with that of human coders on a sample of one hundred nonindexed web pages associated with ISIS. We find that prevalent topics (e.g., holy war) are accurately detected by the three packages whereas nuanced concepts (Lone Wolf attacks) are generally missed. Our findings suggest that naïve approaches of standard applications do not approximate human understanding, and therefore consumption, of radicalizing content. Before radicalizing content can be automatically detected, we need a closer approximation to human understanding.</p>","PeriodicalId":46894,"journal":{"name":"Policy and Internet","volume":null,"pages":null},"PeriodicalIF":4.1000,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/poi3.223","citationCount":"10","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Policy and Internet","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1002/poi3.223","RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2019/9/26 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"COMMUNICATION","Score":null,"Total":0}
Citations: 10

Abstract

The advent of the Internet inadvertently augmented the functioning and success of violent extremist organizations. Terrorist organizations like the Islamic State in Iraq and Syria (ISIS) use the Internet to project their message to a global audience. The majority of research and practice on web-based terrorist propaganda uses human coders to classify content, raising serious concerns such as burnout, mental stress, and reliability of the coded data. More recently, technology platforms and researchers have started to examine the online content using automated classification procedures. However, there are questions about the robustness of automated procedures, given insufficient research comparing and contextualizing the difference between human and machine coding. This article compares output of three text analytics packages with that of human coders on a sample of one hundred nonindexed web pages associated with ISIS. We find that prevalent topics (e.g., holy war) are accurately detected by the three packages whereas nuanced concepts (Lone Wolf attacks) are generally missed. Our findings suggest that naïve approaches of standard applications do not approximate human understanding, and therefore consumption, of radicalizing content. Before radicalizing content can be automatically detected, we need a closer approximation to human understanding.
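To make the comparison described above concrete, the following is a minimal, illustrative sketch of one way to quantify agreement between an automated classifier's topic labels and human coders' labels on a per-page basis, using Cohen's kappa as a common agreement measure. The topics, binary codes, and data below are invented for demonstration; the article does not specify that this metric or these labels were used.

```python
# Illustrative sketch (not the authors' code): compare machine-assigned topic
# labels against human-coded labels for a set of web pages and report
# per-topic agreement. All data here is made up for demonstration.
from sklearn.metrics import cohen_kappa_score

# Hypothetical binary codes: did the page contain the topic? (1 = yes, 0 = no)
# One entry per web page; the study examined 100 nonindexed ISIS-linked pages.
human_codes = {
    "holy_war":  [1, 1, 0, 1, 0, 1, 1, 0],
    "lone_wolf": [0, 1, 0, 0, 1, 0, 0, 0],
}
machine_codes = {
    "holy_war":  [1, 1, 0, 1, 0, 1, 0, 0],  # prevalent topic: mostly matches
    "lone_wolf": [0, 0, 0, 0, 0, 0, 0, 0],  # nuanced topic: largely missed
}

for topic in human_codes:
    kappa = cohen_kappa_score(human_codes[topic], machine_codes[topic])
    print(f"{topic}: Cohen's kappa = {kappa:.2f}")
```

In this toy setup the prevalent topic yields high agreement while the nuanced topic yields none, mirroring the pattern the abstract reports for the three text analytics packages.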

Source Journal: Policy and Internet
CiteScore: 8.40
Self-citation rate: 10.20%
Articles published: 51
Journal Description: Understanding public policy in the age of the Internet requires understanding how individuals, organizations, governments and networks behave, and what motivates them in this new environment. Technological innovation and internet-mediated interaction raise both challenges and opportunities for public policy: whether in areas that have received much work already (e.g. digital divides, digital government, and privacy) or newer areas, like regulation of data-intensive technologies and platforms, the rise of precarious labour, and regulatory responses to misinformation and hate speech. We welcome innovative research in areas where the Internet already impacts public policy, where it raises new challenges or dilemmas, or provides opportunities for policy that is smart and equitable. While we welcome perspectives from any academic discipline, we look particularly for insight that can feed into social science disciplines like political science, public administration, economics, sociology, and communication. We welcome articles that introduce methodological innovation, theoretical development, or rigorous data analysis concerning a particular question or problem of public policy.