{"title":"Against Human Content Moderation: Algorithms without Trauma","authors":"Juan Espíndola","doi":"10.1111/japp.70024","DOIUrl":null,"url":null,"abstract":"<p>This paper explores the morality of human content moderation. It focuses on content moderation of Child Sexual Abuse Material (CSAM) as it takes place in commercial digital platforms, broadly understood. I select CSAM for examination because there is a widespread and uncontroversial consensus around the need to remove it, which furnishes the strongest possible argument for human content moderation. The paper makes the case that, even if we grant that social media platforms or chatbots are a valuable—or inevitable— force in current societies, and even if moderation plays an important role in protecting users and society more generally from the detrimental effects of these digital tools, it is far from clear that tasking humans to conduct such moderation is permissible without constraints, given the psychic toll of the practice on moderators. While a blanket prohibition of human moderation would be objectionable, given the benefits of the practice, it behooves us to identify the fundamental interests affected by the harms of human moderation, the obligations that platforms acquire to protect such interests, and the conditions under which their realization is in fact possible. I argue that the failure to comply with certain standards renders human moderation impermissible.</p>","PeriodicalId":47057,"journal":{"name":"Journal of Applied Philosophy","volume":"42 4","pages":"1285-1300"},"PeriodicalIF":0.9000,"publicationDate":"2025-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/japp.70024","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Applied Philosophy","FirstCategoryId":"98","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/japp.70024","RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"ETHICS","Score":null,"Total":0}
Abstract
This paper explores the morality of human content moderation. It focuses on content moderation of Child Sexual Abuse Material (CSAM) as it takes place in commercial digital platforms, broadly understood. I select CSAM for examination because there is a widespread and uncontroversial consensus around the need to remove it, which furnishes the strongest possible argument for human content moderation. The paper makes the case that, even if we grant that social media platforms or chatbots are a valuable, or inevitable, force in current societies, and even if moderation plays an important role in protecting users and society more generally from the detrimental effects of these digital tools, it is far from clear that tasking humans to conduct such moderation is permissible without constraints, given the psychic toll of the practice on moderators. While a blanket prohibition of human moderation would be objectionable, given the benefits of the practice, it behooves us to identify the fundamental interests affected by the harms of human moderation, the obligations that platforms acquire to protect such interests, and the conditions under which their realization is in fact possible. I argue that the failure to comply with certain standards renders human moderation impermissible.