{"title":"欧洲人工智能 \"值得全世界信赖\":基于风险的监管与形成具有竞争力的共同人工智能市场","authors":"Regine Paul","doi":"10.1111/rego.12563","DOIUrl":null,"url":null,"abstract":"The European Commission has pioneered the coercive regulation of artificial intelligence (AI), including a proposal of banning some applications altogether on moral grounds. Core to its regulatory strategy is a nominally “risk-based” approach with interventions that are proportionate to risk levels. Yet, neither standard accounts of risk-based regulation as rational problem-solving endeavor nor theories of organizational legitimacy-seeking, both prominently discussed in <i>Regulation & Governance</i>, fully explain the Commission's attraction to the risk heuristic. This article responds to this impasse with three contributions. First, it enrichens risk-based regulation scholarship—beyond AI—with a firm foundation in constructivist and critical political economy accounts of emerging tech regulation to capture the performative politics of defining and enacting risk vis-à-vis global economic competitiveness. Second, it conceptualizes the role of risk analysis within a <i>Cultural Political Economy</i> framework: as a powerful epistemic tool for the discursive and regulatory differentiation of an uncertain regulatory terrain (semiosis and structuration) which the Commission wields in its pursuit of a future common European AI market. Thirdly, the paper offers an in-depth empirical reconstruction of the Commission's risk-based semiosis and structuration in AI regulation through qualitative analysis of a substantive sample of documents and expert interviews. This finds that the Commission's use of risk analysis, outlawing some AI uses as matters of deep value conflicts and tightly controlling (at least discursively) so-called high-risk AI systems, enables Brussels to fashion its desired trademark of European “cutting-edge AI … trusted throughout the world” in the first place.","PeriodicalId":21026,"journal":{"name":"Regulation & Governance","volume":null,"pages":null},"PeriodicalIF":3.2000,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"European artificial intelligence “trusted throughout the world”: Risk-based regulation and the fashioning of a competitive common AI market\",\"authors\":\"Regine Paul\",\"doi\":\"10.1111/rego.12563\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The European Commission has pioneered the coercive regulation of artificial intelligence (AI), including a proposal of banning some applications altogether on moral grounds. Core to its regulatory strategy is a nominally “risk-based” approach with interventions that are proportionate to risk levels. Yet, neither standard accounts of risk-based regulation as rational problem-solving endeavor nor theories of organizational legitimacy-seeking, both prominently discussed in <i>Regulation & Governance</i>, fully explain the Commission's attraction to the risk heuristic. This article responds to this impasse with three contributions. First, it enrichens risk-based regulation scholarship—beyond AI—with a firm foundation in constructivist and critical political economy accounts of emerging tech regulation to capture the performative politics of defining and enacting risk vis-à-vis global economic competitiveness. 
Second, it conceptualizes the role of risk analysis within a <i>Cultural Political Economy</i> framework: as a powerful epistemic tool for the discursive and regulatory differentiation of an uncertain regulatory terrain (semiosis and structuration) which the Commission wields in its pursuit of a future common European AI market. Thirdly, the paper offers an in-depth empirical reconstruction of the Commission's risk-based semiosis and structuration in AI regulation through qualitative analysis of a substantive sample of documents and expert interviews. This finds that the Commission's use of risk analysis, outlawing some AI uses as matters of deep value conflicts and tightly controlling (at least discursively) so-called high-risk AI systems, enables Brussels to fashion its desired trademark of European “cutting-edge AI … trusted throughout the world” in the first place.\",\"PeriodicalId\":21026,\"journal\":{\"name\":\"Regulation & Governance\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2023-12-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Regulation & Governance\",\"FirstCategoryId\":\"91\",\"ListUrlMain\":\"https://doi.org/10.1111/rego.12563\",\"RegionNum\":2,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"LAW\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Regulation & Governance","FirstCategoryId":"91","ListUrlMain":"https://doi.org/10.1111/rego.12563","RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"LAW","Score":null,"Total":0}
European artificial intelligence “trusted throughout the world”: Risk-based regulation and the fashioning of a competitive common AI market
The European Commission has pioneered the coercive regulation of artificial intelligence (AI), including a proposal to ban some applications altogether on moral grounds. Core to its regulatory strategy is a nominally “risk-based” approach, with interventions proportionate to risk levels. Yet neither standard accounts of risk-based regulation as a rational problem-solving endeavor nor theories of organizational legitimacy-seeking, both prominently discussed in Regulation & Governance, fully explain the Commission's attraction to the risk heuristic. This article responds to this impasse with three contributions. First, it enriches risk-based regulation scholarship, beyond AI, with a firm foundation in constructivist and critical political economy accounts of emerging tech regulation to capture the performative politics of defining and enacting risk vis-à-vis global economic competitiveness. Second, it conceptualizes the role of risk analysis within a Cultural Political Economy framework: as a powerful epistemic tool for the discursive and regulatory differentiation of an uncertain regulatory terrain (semiosis and structuration), which the Commission wields in its pursuit of a future common European AI market. Third, the paper offers an in-depth empirical reconstruction of the Commission's risk-based semiosis and structuration in AI regulation through qualitative analysis of a substantial sample of documents and expert interviews. The analysis finds that the Commission's use of risk analysis, outlawing some AI uses as matters of deep value conflict and tightly controlling (at least discursively) so-called high-risk AI systems, enables Brussels to fashion its desired trademark of European “cutting-edge AI … trusted throughout the world” in the first place.
About the journal:
Regulation & Governance serves as the leading platform for the study of regulation and governance by political scientists, lawyers, sociologists, historians, criminologists, psychologists, anthropologists, economists and others. Research on regulation and governance, once fragmented across various disciplines and subject areas, has emerged at the cutting edge of paradigmatic change in the social sciences. Through the peer-reviewed journal Regulation & Governance, we seek to advance discussion of regulation and governance across disciplines, promote the development of new theoretical and empirical understanding, and serve practitioners' growing need for a useful academic reference.