Mapping interpretations of the law in online content moderation in Germany

Ben Wagner, Matthias C. Kettemann, Anna Sophia Tiedeke, Felicitas Rachinger, Marie-Therese Sekwenz

Computer Law & Security Review, Volume 55, Article 106054 (published 9 October 2024). DOI: 10.1016/j.clsr.2024.106054
Citations: 0
Abstract
Content moderation is a vital function that online platforms must perform, according to the law, to create suitable online environments for their users. By the law, we mean national or European laws that require online platforms to remove content, such as EU Regulation 2021/784, which addresses the dissemination of terrorist content online. Content moderation required by these national or European laws, summarised here as ‘the law’, is different from moderation that is not directly required by law but is instead conducted voluntarily by the platforms. New regulatory requirements add a further layer of complexity to the legal grounds for content moderation and are relevant to platforms’ daily decisions. Individual decisions are either grounded in reasons stemming from different sources of law, such as international or national provisions, or based on contractual grounds, such as the platform's Terms of Service and Community Standards. However, how to empirically measure these essential aspects of content moderation remains unclear. We therefore ask the following research question: How do online platforms interpret the law when they moderate online content?
To understand this complex interplay and empirically test the quality of a platform's content moderation claims, this article develops a methodology that generates empirical evidence on the individual decisions taken for each piece of content while highlighting the subjective element of content classification by human moderators. We then apply this methodology to a single empirical case, an anonymous medium-sized German platform that provided us with access to its content moderation decisions. With more knowledge of how platforms interpret the law, we can better understand the complex nature of content moderation, its regulation and compliance practices, and to what degree legally mandated moderation might differ from moderation on contractual grounds in dimensions such as the need for context, information, and time.
Our results show considerable divergence between the platform's interpretation of the law and ours. We believe that a significant number of the platform's legal interpretations are incorrect due to these divergent readings of the law, and that platforms are removing legal content they falsely believe to be illegal (‘overblocking’) while simultaneously failing to moderate illegal content (‘underblocking’). In conclusion, we provide recommendations for content moderation system design that takes (legal) human content moderation into account and creates new methodological ways to test its quality and effect on speech on online platforms.
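To make the over- and underblocking notions concrete, the sketch below (ours, not the authors' method) shows one way such rates could be computed once every moderation decision has been labelled both by the platform and by independent legal reviewers. All field names and the toy data are illustrative assumptions and do not come from the study's dataset.

```python
# Minimal sketch: estimate over- and underblocking rates from per-item labels.
# "removed_by_platform" and "illegal_per_reviewers" are hypothetical field names.
from dataclasses import dataclass

@dataclass
class Decision:
    removed_by_platform: bool    # did the platform take the content down?
    illegal_per_reviewers: bool  # did independent legal reviewers classify it as illegal?

def blocking_rates(decisions: list[Decision]) -> tuple[float, float]:
    """Return (overblocking_rate, underblocking_rate).

    Overblocking: share of legal items that were nevertheless removed.
    Underblocking: share of illegal items that were left up.
    """
    legal = [d for d in decisions if not d.illegal_per_reviewers]
    illegal = [d for d in decisions if d.illegal_per_reviewers]
    over = sum(d.removed_by_platform for d in legal) / len(legal) if legal else 0.0
    under = sum(not d.removed_by_platform for d in illegal) / len(illegal) if illegal else 0.0
    return over, under

# Toy data, not the paper's results:
sample = [
    Decision(removed_by_platform=True,  illegal_per_reviewers=False),  # overblocked
    Decision(removed_by_platform=False, illegal_per_reviewers=True),   # underblocked
    Decision(removed_by_platform=True,  illegal_per_reviewers=True),   # correct removal
    Decision(removed_by_platform=False, illegal_per_reviewers=False),  # correct keep
]
print(blocking_rates(sample))  # (0.5, 0.5)
```

In practice the reviewer label would itself be the product of the kind of legal classification the article describes, so disagreement between reviewers is a further source of uncertainty that such a simple rate does not capture.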
Journal description:
CLSR publishes refereed academic and practitioner papers on topics such as Web 2.0, IT security, identity management, ID cards, RFID, interference with privacy, Internet law, telecoms regulation, online broadcasting, intellectual property, software law, e-commerce, outsourcing, data protection, EU policy, freedom of information, computer security and many other topics. In addition, it provides regular updates on European Union developments and national news from more than 20 jurisdictions in both Europe and the Pacific Rim. It is looking for papers within the subject area that display good-quality legal analysis and new lines of legal thought or policy development that go beyond mere description of the subject area, however accurate that may be.