European artificial intelligence "trusted throughout the world": Risk-based regulation and the fashioning of a competitive common AI market

IF 3.2 | Zone 2 (Sociology) | Q1 (LAW)
Regine Paul
{"title":"欧洲人工智能 \"值得全世界信赖\":基于风险的监管与形成具有竞争力的共同人工智能市场","authors":"Regine Paul","doi":"10.1111/rego.12563","DOIUrl":null,"url":null,"abstract":"The European Commission has pioneered the coercive regulation of artificial intelligence (AI), including a proposal of banning some applications altogether on moral grounds. Core to its regulatory strategy is a nominally “risk-based” approach with interventions that are proportionate to risk levels. Yet, neither standard accounts of risk-based regulation as rational problem-solving endeavor nor theories of organizational legitimacy-seeking, both prominently discussed in <i>Regulation &amp; Governance</i>, fully explain the Commission's attraction to the risk heuristic. This article responds to this impasse with three contributions. First, it enrichens risk-based regulation scholarship—beyond AI—with a firm foundation in constructivist and critical political economy accounts of emerging tech regulation to capture the performative politics of defining and enacting risk vis-à-vis global economic competitiveness. Second, it conceptualizes the role of risk analysis within a <i>Cultural Political Economy</i> framework: as a powerful epistemic tool for the discursive and regulatory differentiation of an uncertain regulatory terrain (semiosis and structuration) which the Commission wields in its pursuit of a future common European AI market. Thirdly, the paper offers an in-depth empirical reconstruction of the Commission's risk-based semiosis and structuration in AI regulation through qualitative analysis of a substantive sample of documents and expert interviews. This finds that the Commission's use of risk analysis, outlawing some AI uses as matters of deep value conflicts and tightly controlling (at least discursively) so-called high-risk AI systems, enables Brussels to fashion its desired trademark of European “cutting-edge AI … trusted throughout the world” in the first place.","PeriodicalId":21026,"journal":{"name":"Regulation & Governance","volume":null,"pages":null},"PeriodicalIF":3.2000,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"European artificial intelligence “trusted throughout the world”: Risk-based regulation and the fashioning of a competitive common AI market\",\"authors\":\"Regine Paul\",\"doi\":\"10.1111/rego.12563\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The European Commission has pioneered the coercive regulation of artificial intelligence (AI), including a proposal of banning some applications altogether on moral grounds. Core to its regulatory strategy is a nominally “risk-based” approach with interventions that are proportionate to risk levels. Yet, neither standard accounts of risk-based regulation as rational problem-solving endeavor nor theories of organizational legitimacy-seeking, both prominently discussed in <i>Regulation &amp; Governance</i>, fully explain the Commission's attraction to the risk heuristic. This article responds to this impasse with three contributions. First, it enrichens risk-based regulation scholarship—beyond AI—with a firm foundation in constructivist and critical political economy accounts of emerging tech regulation to capture the performative politics of defining and enacting risk vis-à-vis global economic competitiveness. 
Second, it conceptualizes the role of risk analysis within a <i>Cultural Political Economy</i> framework: as a powerful epistemic tool for the discursive and regulatory differentiation of an uncertain regulatory terrain (semiosis and structuration) which the Commission wields in its pursuit of a future common European AI market. Thirdly, the paper offers an in-depth empirical reconstruction of the Commission's risk-based semiosis and structuration in AI regulation through qualitative analysis of a substantive sample of documents and expert interviews. This finds that the Commission's use of risk analysis, outlawing some AI uses as matters of deep value conflicts and tightly controlling (at least discursively) so-called high-risk AI systems, enables Brussels to fashion its desired trademark of European “cutting-edge AI … trusted throughout the world” in the first place.\",\"PeriodicalId\":21026,\"journal\":{\"name\":\"Regulation & Governance\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2023-12-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Regulation & Governance\",\"FirstCategoryId\":\"91\",\"ListUrlMain\":\"https://doi.org/10.1111/rego.12563\",\"RegionNum\":2,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"LAW\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Regulation & Governance","FirstCategoryId":"91","ListUrlMain":"https://doi.org/10.1111/rego.12563","RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"LAW","Score":null,"Total":0}
Citations: 0

Abstract

The European Commission has pioneered the coercive regulation of artificial intelligence (AI), including a proposal to ban some applications altogether on moral grounds. Core to its regulatory strategy is a nominally "risk-based" approach with interventions that are proportionate to risk levels. Yet, neither standard accounts of risk-based regulation as a rational problem-solving endeavor nor theories of organizational legitimacy-seeking, both prominently discussed in Regulation & Governance, fully explain the Commission's attraction to the risk heuristic. This article responds to this impasse with three contributions. First, it enriches risk-based regulation scholarship—beyond AI—with a firm foundation in constructivist and critical political economy accounts of emerging tech regulation to capture the performative politics of defining and enacting risk vis-à-vis global economic competitiveness. Second, it conceptualizes the role of risk analysis within a Cultural Political Economy framework: as a powerful epistemic tool for the discursive and regulatory differentiation of an uncertain regulatory terrain (semiosis and structuration) which the Commission wields in its pursuit of a future common European AI market. Third, the paper offers an in-depth empirical reconstruction of the Commission's risk-based semiosis and structuration in AI regulation through qualitative analysis of a substantive sample of documents and expert interviews. This finds that the Commission's use of risk analysis, outlawing some AI uses as matters of deep value conflicts and tightly controlling (at least discursively) so-called high-risk AI systems, enables Brussels to fashion its desired trademark of European "cutting-edge AI … trusted throughout the world" in the first place.
Source journal: Regulation & Governance
CiteScore: 7.80
Self-citation rate: 10.00%
Articles published: 57
Journal description: Regulation & Governance serves as the leading platform for the study of regulation and governance by political scientists, lawyers, sociologists, historians, criminologists, psychologists, anthropologists, economists and others. Research on regulation and governance, once fragmented across various disciplines and subject areas, has emerged at the cutting edge of paradigmatic change in the social sciences. Through the peer-reviewed journal Regulation & Governance, we seek to advance discussions between various disciplines about regulation and governance, promote the development of new theoretical and empirical understanding, and serve the growing needs of practitioners for a useful academic reference.