Understanding content moderation systems: new methods to understand internet governance at scale, over time, and across platforms

Nicolas Suzor
{"title":"Understanding content moderation systems: new methods to understand internet governance at scale, over time, and across platforms","authors":"Nicolas Suzor","doi":"10.4337/9781788977456.00013","DOIUrl":null,"url":null,"abstract":"There is increasing global concern about how the decisions of internet and telecommunications companies impact on human rights. As a key priority, if we care about how intermediaries govern their networks, we need to be able to measure their impact on human rights and work out how we can use this information to help protect them from external pressures that would limit our freedom and how we can hold them accountable for decisions they make on their own initiatives. Understanding the effects that technology companies have on our lives and identifying potential biases and other problems requires careful attention to the inputs and outputs of these systems and how they actually work in different social contexts. Analysis of this type will require large-scale access to data on individual decisions as well as deep qualitative analyses of the automated and human processes that platforms deploy internally. This chapter presents the ‘Platform Governance Observatory’: new research infrastructure that was designed to enable the systematic study of content moderation practices on major social media platforms. The core research question that made this infrastructure necessary was to understand, at a large scale, what social media content is moderated and by whom, how this compares between platforms, and how this changes over time. So far, this infrastructure enables the analysis of content removals of public posts on YouTube, Twitter, and Instagram. The infrastructure I created to support this exploration proceeds on a general design principle of building a random sample of public social media content, and then tests the availability of that sample at a later date.","PeriodicalId":145445,"journal":{"name":"Computational Legal Studies","volume":"34 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computational Legal Studies","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4337/9781788977456.00013","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

There is increasing global concern about how the decisions of internet and telecommunications companies affect human rights. If we care about how intermediaries govern their networks, a key priority is to measure their impact on human rights, and to work out how this information can both help protect intermediaries from external pressures that would limit our freedom and hold them accountable for decisions they make on their own initiative. Understanding the effects that technology companies have on our lives, and identifying potential biases and other problems, requires careful attention to the inputs and outputs of these systems and to how they actually work in different social contexts. Analysis of this type requires large-scale access to data on individual decisions as well as deep qualitative analyses of the automated and human processes that platforms deploy internally. This chapter presents the ‘Platform Governance Observatory’: new research infrastructure designed to enable the systematic study of content moderation practices on major social media platforms. The core research question that made this infrastructure necessary was to understand, at large scale, what social media content is moderated and by whom, how this compares between platforms, and how it changes over time. So far, this infrastructure enables the analysis of removals of public posts on YouTube, Twitter, and Instagram. The infrastructure I created to support this exploration follows a general design principle: build a random sample of public social media content, then test the availability of that sample at a later date.
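The sample-then-recheck design principle in the final sentence can be illustrated with a short sketch. Assume a previously collected random sample of public post URLs stored in a CSV file with a `url` column; the script below re-checks each URL and records a timestamped availability status. The file names, the helper functions, and the rule that an HTTP 404 or 410 response signals removal are illustrative assumptions for this sketch, not the chapter's actual implementation.

```python
# Minimal sketch of the sample-then-recheck design described above.
# Assumptions (not from the chapter): the sample lives in sample.csv with a
# 'url' column, and an HTTP 404/410 response is treated as a removal signal.
import csv
import datetime

import requests


def check_availability(url: str, timeout: float = 10.0) -> str:
    """Classify a post URL as 'available', 'removed', or 'unknown'."""
    try:
        resp = requests.get(url, timeout=timeout, allow_redirects=True)
    except requests.RequestException:
        return "unknown"  # network errors are inconclusive; retry later
    if resp.status_code == 200:
        return "available"
    if resp.status_code in (404, 410):
        return "removed"  # the post is gone; the cause is still ambiguous
    return "unknown"  # rate limiting, login walls, server errors, etc.


def recheck_sample(sample_path: str, out_path: str) -> None:
    """Re-test every sampled URL and write a timestamped status record."""
    checked_at = datetime.datetime.now(datetime.timezone.utc).isoformat()
    with open(sample_path, newline="") as f_in, \
            open(out_path, "w", newline="") as f_out:
        reader = csv.DictReader(f_in)  # expects a 'url' column
        writer = csv.writer(f_out)
        writer.writerow(["url", "status", "checked_at"])
        for row in reader:
            writer.writerow([row["url"], check_availability(row["url"]),
                             checked_at])


if __name__ == "__main__":
    recheck_sample("sample.csv", "availability_check.csv")
```

Running the re-check on successive dates yields a longitudinal record of when sampled posts disappear, the kind of over-time, cross-platform comparison the chapter's research question calls for. Note that raw HTTP status alone cannot distinguish a platform removal from a user's own deletion or an account suspension; attributing removals to moderation would require platform responses that report why an item is unavailable.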