Hate speech’s double damage: A semi-automated approach toward direct and indirect targets

Mario Haim, E.v. Hoven
{"title":"Hate speech’s double damage: A semi-automated approach toward direct and indirect targets","authors":"Mario Haim, E.v. Hoven","doi":"10.51685/jqd.2022.009","DOIUrl":null,"url":null,"abstract":"Democracies around the world have been facing increasing challenges with hate speech online as it contributes to a tense and thus less discursive public sphere. In that, hate speech online targets free speech both directly and indirectly, through harassments and explicit harm as well as by informing a vicious environment of irrationality, misrepresentation, or disrespect. Consequently, platforms have implemented varying means of comment-moderation techniques, depending both on policy regulations and on the quantity and quality of hate speech online. This study seeks to provide descriptive measures between direct and indirect targets in light of different incentives and practices of moderation on both social media and news outlets. Based on three distinct samples from German Twitter, YouTube, and a set of four news outlets, it applies semi-automated content analyses using a set of five cross-sample classifiers. Thereby, the largest amounts of visible hate speech online depict rather implicit devaluations of ideas or behavior. More explicit forms of hate speech online, such as insult, slander, or vulgarity, are only rarely observable and accumulate around certain events (Twitter) or single videos (YouTube). Moreover, while hate speech on Twitter and YouTube tends to target particular groups or individuals, hate speech below news articles shows a stronger focus on debates. Potential reasons and implications are discussed in light of political and legal efforts in Germany.","PeriodicalId":93587,"journal":{"name":"Journal of quantitative description: digital media","volume":"17 3 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2022-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of quantitative description: digital media","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.51685/jqd.2022.009","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Democracies around the world have been facing increasing challenges with hate speech online, as it contributes to a tense and thus less discursive public sphere. In doing so, hate speech online targets free speech both directly and indirectly: through harassment and explicit harm, as well as by fostering a hostile environment of irrationality, misrepresentation, or disrespect. Consequently, platforms have implemented varying comment-moderation techniques, depending both on policy regulation and on the quantity and quality of hate speech online. This study provides descriptive measures of direct and indirect targets in light of the differing incentives and moderation practices of social media and news outlets. Based on three distinct samples from German Twitter, YouTube, and a set of four news outlets, it applies semi-automated content analyses using a set of five cross-sample classifiers. The largest share of visible hate speech online consists of rather implicit devaluations of ideas or behavior. More explicit forms of hate speech online, such as insult, slander, or vulgarity, are only rarely observable and accumulate around certain events (Twitter) or single videos (YouTube). Moreover, while hate speech on Twitter and YouTube tends to target particular groups or individuals, hate speech below news articles focuses more strongly on debates. Potential reasons and implications are discussed in light of political and legal efforts in Germany.
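
The abstract describes a semi-automated content analysis in which a set of cross-sample classifiers is applied to comments from Twitter, YouTube, and news outlets. The paper does not disclose its implementation, so the following is only a minimal sketch of what such a cross-sample classification pipeline could look like in Python, assuming scikit-learn and pandas; the category names, example texts, and training data are hypothetical and serve purely as illustration.

```python
# Minimal sketch of a cross-sample hate-speech classification pipeline.
# Assumes scikit-learn and pandas; labels and example comments are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical manually coded training data: one row per comment, with one
# binary label per hate-speech category (e.g., explicit insult vs. implicit
# devaluation of ideas or behavior).
coded = pd.DataFrame({
    "text": [
        "You are a complete idiot.",
        "I disagree with this policy; it will not work.",
        "Typical nonsense from these people, as always.",
        "Thanks for the detailed explanation!",
    ],
    "insult": [1, 0, 0, 0],
    "devaluation": [0, 0, 1, 0],
})

def build_classifier():
    # One text classifier per category, trained once and then applied
    # unchanged to all samples (Twitter, YouTube, news comments) so that
    # results remain comparable across platforms.
    return Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])

classifiers = {}
for category in ["insult", "devaluation"]:
    clf = build_classifier()
    clf.fit(coded["text"], coded[category])
    classifiers[category] = clf

# Apply the fitted classifiers to unlabeled comments from any of the samples.
new_comments = pd.Series([
    "What a stupid take.",
    "Interesting point, I had not considered that.",
])
for category, clf in classifiers.items():
    print(category, clf.predict(new_comments))
```

In practice, a semi-automated workflow of this kind would combine such classifiers with manual coding, e.g., by having human coders label a training sample per platform and validate the automated predictions before they are used for description.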