{"title":"Algorithmic Regulation in Media and Cultural Policy: A Framework to Evaluate Barriers to Accountability","authors":"Robert Hunt,Fenwick McKelvey","doi":"10.5325/jinfopoli.9.1.0307","DOIUrl":null,"url":null,"abstract":"Abstract The word “algorithm” is best understood as a generic term for automated decision-making. Algorithms can be coded by humans or they can become self-taught through machine learning. Cultural goods and news increasingly pass through information intermediaries known as platforms that rely on algorithms to filter, rank, sort, classify, and promote information. Algorithmic content recommendation acts as an important and increasingly contentious gatekeeper. Numerous controversies around the nature of content being recommended—from disturbing children's videos to conspiracies and political misinformation—have undermined confidence in the neutrality of these systems. Amid a generational challenge for media policy, algorithmic accountability has emerged as one area of regulatory innovation. Algorithmic accountability seeks to explain automated decision-making, ultimately locating responsibility and improving the overall system. This article focuses on the technical, systemic issues related to algorithmic accountability, highlighting that deployment matters as much as development when explaining algorithmic outcomes. After outlining the challenges faced by those seeking to enact algorithmic accountability, we conclude by comparing some emerging approaches to addressing cultural discoverability by different international policymakers.","PeriodicalId":55617,"journal":{"name":"Journal of Information Policy","volume":"97 4","pages":"307-335"},"PeriodicalIF":1.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Information Policy","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5325/jinfopoli.9.1.0307","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMMUNICATION","Score":null,"Total":0}
Abstract
The word "algorithm" is best understood as a generic term for automated decision-making. Algorithms can be hand-coded by humans or they can learn their own rules through machine learning. Cultural goods and news increasingly pass through information intermediaries known as platforms, which rely on algorithms to filter, rank, sort, classify, and promote information. Algorithmic content recommendation thus acts as an important and increasingly contentious gatekeeper. Numerous controversies around the nature of the content being recommended, from disturbing children's videos to conspiracies and political misinformation, have undermined confidence in the neutrality of these systems. Amid a generational challenge for media policy, algorithmic accountability has emerged as one area of regulatory innovation. Algorithmic accountability seeks to explain automated decision-making, ultimately locating responsibility and improving the overall system. This article focuses on the technical, systemic issues related to algorithmic accountability, highlighting that deployment matters as much as development when explaining algorithmic outcomes. After outlining the challenges faced by those seeking to enact algorithmic accountability, we conclude by comparing emerging approaches to cultural discoverability taken by different international policymakers.