Toward the Democratic Regulation of AI Systems: A Prolegomenon
Mariano-Florentino Cuéllar and Aziz Z. Huq
Information Use eJournal, vol. 24, no. 1 (July 14, 2020). DOI: 10.2139/ssrn.3671011
Citations: 5
Abstract
This essay explores the challenge of regulating “artificial intelligence” (AI) in democracies. We begin with a careful definition of what is being regulated. In contrast to the relatively narrow focus on technical details of computational tools in many discussions about governing AI, we suggest that it is more useful to identify “AI systems” — embedding not only particular algorithms but design choices structuring human-computer interaction and the allocation of responsibility over decisions — as the appropriate object of regulation. We make the case that even in constitutional democracies, regulation of these systems should often depend primarily on how these systems embed forward-looking “policies” and on the social consequences of such policies, rather than on expecting clear answers to deontologically flavored questions about whether these systems violate “rights,” such as those to privacy or non-discrimination. We then canvass some of the challenges associated with carefully designed, prudent regulation of AI in democracies. We distinguish here between two types of obstacles, each calling for subtly different evaluations. On the one hand, institutional impediments can frustrate an effective democratic response. These sound in the register of political economy and of the path-dependent aspects of regulatory capacity embodied in national and sub-national institutions. On the other hand, ontological impediments can also complicate regulatory response. By this we mean to capture the sense that AI systems can be constitutive of human subjectivity — shaping attitudes, behaviors, and desires — in ways that make the very project of identifying democratic preferences particularly fraught and subject to subversion.
As democratic societies contend with the mix of risks and benefits associated with AI systems, candid acknowledgement of the challenges will bring valuable attention to the endogeneity of democratic preferences and to the characteristics of institutions that have an outsized role in shaping how societies evolve.