{"title":"算法巡逻内容:危害在哪里?","authors":"Monica Horten","doi":"10.1080/13600869.2023.2221823","DOIUrl":null,"url":null,"abstract":"At the heart of this paper is an examination of the colloquial concept of a ‘shadow ban’. It reveals ways in which algorithms on the Facebook platform have the effect of suppressing content distribution without specifically targeting it for removal, and examines the consequential stifling of users’ speech. It reveals how the Facebook shadow ban is implemented by blocking dissemination of content in News Feed. The decision-making criteria are based on ‘behaviour’, a term that relates to activity of the page that is identifiable through patterns in the data. It’s a technique that is rooted in computer security, and raises questions about the balance between security and freedom of expression. The paper is situated in the field of responsibility of online platforms for content moderation. It studies the experience of the shadow ban on 20 UK-based Facebook Pages over the period from November 2019 to January 2021. The potential harm was evaluated using human rights standards and a comparative metric produced from Facebook Insights data. The empirical research is connected to recent legislative developments: the EU’s Digital Services Act and the UK’s Online Safety Bill. Its most salient contribution may be around ‘behaviour’ monitoring and its interpretation by legislators.","PeriodicalId":53660,"journal":{"name":"International Review of Law, Computers and Technology","volume":"112 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Algorithms patrolling content: where’s the harm?\",\"authors\":\"Monica Horten\",\"doi\":\"10.1080/13600869.2023.2221823\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"At the heart of this paper is an examination of the colloquial concept of a ‘shadow ban’. It reveals ways in which algorithms on the Facebook platform have the effect of suppressing content distribution without specifically targeting it for removal, and examines the consequential stifling of users’ speech. It reveals how the Facebook shadow ban is implemented by blocking dissemination of content in News Feed. The decision-making criteria are based on ‘behaviour’, a term that relates to activity of the page that is identifiable through patterns in the data. It’s a technique that is rooted in computer security, and raises questions about the balance between security and freedom of expression. The paper is situated in the field of responsibility of online platforms for content moderation. It studies the experience of the shadow ban on 20 UK-based Facebook Pages over the period from November 2019 to January 2021. The potential harm was evaluated using human rights standards and a comparative metric produced from Facebook Insights data. The empirical research is connected to recent legislative developments: the EU’s Digital Services Act and the UK’s Online Safety Bill. 