Conceptualizing Visual Analytic Interventions for Content Moderation

Sahaj Vaidya, Jie Cai, Soumyadeep Basu, Azadeh Naderi, D. Y. Wohn, Aritra Dasgupta
{"title":"Conceptualizing Visual Analytic Interventions for Content Moderation","authors":"Sahaj Vaidya, Jie Cai, Soumyadeep Basu, Azadeh Naderi, D. Y. Wohn, Aritra Dasgupta","doi":"10.1109/VIS49827.2021.9623288","DOIUrl":null,"url":null,"abstract":"Modern social media platforms like Twitch, YouTube, etc., embody an open space for content creation and consumption. However, an unintended consequence of such content democratization is the proliferation of toxicity and abuse that content creators get subjected to. Commercial and volunteer content moderators play an indispensable role in identifying bad actors and minimizing the scale and degree of harmful content. Moderation tasks are often laborious, complex, and even if semi-automated, they involve high-consequence human decisions that affect the safety and popular perception of the platforms. In this paper, through an interdisciplinary collaboration among researchers from social science, human-computer interaction, and visualization, we present a systematic understanding of how visual analytics can help in human-in-the-loop content moderation. We contribute a characterization of the data-driven problems and needs for proactive moderation and present a mapping between the needs and visual analytic tasks through a task abstraction framework. We discuss how the task abstraction framework can be used for transparent moderation, design interventions for moderators’ well-being, and ultimately, for creating futuristic human-machine interfaces for data-driven content moderation.","PeriodicalId":387572,"journal":{"name":"2021 IEEE Visualization Conference (VIS)","volume":"337 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Visualization Conference (VIS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/VIS49827.2021.9623288","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

Modern social media platforms such as Twitch and YouTube embody an open space for content creation and consumption. However, an unintended consequence of such content democratization is the proliferation of toxicity and abuse to which content creators are subjected. Commercial and volunteer content moderators play an indispensable role in identifying bad actors and minimizing the scale and degree of harmful content. Moderation tasks are often laborious and complex, and even when semi-automated, they involve high-consequence human decisions that affect the safety and public perception of the platforms. In this paper, through an interdisciplinary collaboration among researchers from social science, human-computer interaction, and visualization, we present a systematic understanding of how visual analytics can help in human-in-the-loop content moderation. We contribute a characterization of the data-driven problems and needs for proactive moderation, and we present a mapping between these needs and visual analytic tasks through a task abstraction framework. We discuss how the task abstraction framework can be used for transparent moderation, for designing interventions that support moderators' well-being, and ultimately, for creating futuristic human-machine interfaces for data-driven content moderation.
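The abstract describes a task abstraction framework that maps proactive-moderation needs to visual analytic tasks, but the framework itself is only detailed in the full paper. As a purely illustrative aid, the sketch below shows one way such a need-to-task mapping could be represented in code; every need label, task verb, and data-source string is a hypothetical placeholder, not a term taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical illustration only: a minimal data model for mapping
# content-moderation needs to abstract visual analytic tasks.
# None of these labels are drawn from the paper's actual framework.

@dataclass(frozen=True)
class ModerationNeed:
    label: str          # e.g., "spot emerging harassment targeting a creator"
    data_source: str    # e.g., "chat stream", "moderation log"

@dataclass(frozen=True)
class VisualAnalyticTask:
    action: str         # abstract task verb, e.g., "identify", "compare"
    target: str         # what the task operates on, e.g., "outliers", "trends"

# A simple many-to-many mapping from needs to visual analytic tasks.
NEED_TO_TASKS: dict[ModerationNeed, list[VisualAnalyticTask]] = {
    ModerationNeed("spot emerging harassment targeting a creator", "chat stream"): [
        VisualAnalyticTask("identify", "outliers"),
        VisualAnalyticTask("locate", "trends over time"),
    ],
    ModerationNeed("audit automated removals for consistency", "moderation log"): [
        VisualAnalyticTask("compare", "distributions"),
        VisualAnalyticTask("summarize", "features"),
    ],
}

def tasks_for(need: ModerationNeed) -> list[VisualAnalyticTask]:
    """Look up the visual analytic tasks associated with a moderation need."""
    return NEED_TO_TASKS.get(need, [])

if __name__ == "__main__":
    need = ModerationNeed("audit automated removals for consistency", "moderation log")
    for task in tasks_for(need):
        print(f"{task.action} -> {task.target}")
```

Encoding the mapping as data rather than prose is one way such a framework could inform tool design, for example by generating a checklist of analytic tasks to support for a given moderation workflow; this is a sketch under those assumptions, not the paper's implementation.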