Can you moderate an unreadable message? 'Blind' content moderation via human computation

Seth Frey, M. Bos, R. Sumner
Human Computation (Fairfax, Va.), 4(1), 78–106. Journal article, published 2017-07-01. DOI: 10.15346/HC.V4I1.5 (https://doi.org/10.15346/HC.V4I1.5)
Citations: 2

Abstract

User-generated content (UGC) is fundamental to online social engagement, but eliciting and managing it come with many challenges. The special features of UGC moderation highlight many of the general challenges of human computation. They also emphasize how moderation and privacy interact: people have rights to both privacy and safety online, but it is difficult to provide one without violating the other; scanning a user's inbox for potentially malicious messages seems to imply access to all safe ones as well. Are privacy and safety opposed, or is it possible in some circumstances to guarantee the safety of anonymous content without access to that content? We demonstrate that such "blind content moderation" is possible in certain domains. Additionally, the methods we introduce offer safety guarantees, an expressive content space, and no human moderation load: they are safe, expressive, and scalable. Though it may seem preposterous to try moderating UGC without human- or machine-level access to it, human computation makes blind moderation possible. We establish this existence claim by defining two very different human computation methods, behavioral thresholding and reverse correlation. Each leverages the statistical and behavioral properties of so-called "inappropriate content" in different decision settings to moderate UGC without access to a message's meaning or intention. The first, behavioral thresholding, is shown to generalize the well-known ESP game.
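The abstract does not spell out the mechanics of behavioral thresholding, but since it is said to generalize the ESP game, the core idea can be sketched as an agreement threshold over independent raters. The function name, the majority-label rule, and the parameter values below are illustrative assumptions, not the authors' exact procedure: an opaque item is approved only when independent raters, who cannot see one another's answers, converge on the same response at a rate that inappropriate content statistically fails to reach.

```python
from collections import Counter

def behavioral_threshold(labels_per_rater, min_raters=3, agreement=0.8):
    """Approve an opaque item only if independent raters agree on it.

    labels_per_rater: labels submitted independently for the same item.
    The moderator never inspects the item itself; approval depends only
    on whether raters converge (ESP-game-style agreement), so the check
    is 'blind' to the item's meaning.
    """
    if len(labels_per_rater) < min_raters:
        return False  # not enough independent evidence yet
    # Fraction of raters who gave the single most common label.
    top_count = Counter(labels_per_rater).most_common(1)[0][1]
    return top_count / len(labels_per_rater) >= agreement

# Convergent responses pass the threshold; scattered ones do not.
print(behavioral_threshold(["ok", "ok", "ok", "ok"]))      # True
print(behavioral_threshold(["ok", "bad", "ok", "weird"]))  # False
```

The design choice mirrors the ESP game's safety property: because raters answer independently, coordinated convergence on an unsafe label is statistically unlikely, so agreement itself serves as the moderation signal.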