Observe, inspect, modify: Three conditions for generative AI governance

Fabian Ferrari, José van Dijck, Antal van den Bosch
New Media & Society, journal article, published 2023-11-29. DOI: 10.1177/14614448231214811 (https://doi.org/10.1177/14614448231214811). Citation count: 0.

Abstract

In a world increasingly shaped by generative AI systems like ChatGPT, the absence of benchmarks to examine the efficacy of oversight mechanisms is a problem for research and policy. What are the structural conditions for governing generative AI systems? To answer this question, it is crucial to situate generative AI systems as regulatory objects: material items that can be governed. On this conceptual basis, we introduce three high-level conditions to structure research and policy agendas on generative AI governance: industrial observability, public inspectability, and technical modifiability. Empirically, we explicate those conditions with a focus on the EU’s AI Act, grounding the analysis of oversight mechanisms for generative AI systems in their granular material properties as observable, inspectable, and modifiable objects. Those three conditions represent an action plan to help us perceive generative AI systems as negotiable objects, rather than seeing them as mysterious forces that pose existential risks for humanity.