AI Impact Assessment: A Policy Prototyping Experiment

Norberto Nuno Gomes de Andrade, V. Kontschieder
{"title":"AI Impact Assessment: A Policy Prototyping Experiment","authors":"Norberto Nuno Gomes de Andrade, V. Kontschieder","doi":"10.2139/ssrn.3772500","DOIUrl":null,"url":null,"abstract":"This report presents the outcomes of the Open Loop policy prototyping program on Automated Decision Impact Assessment (ADIA) in Europe. Open Loop (www.openloop.org) is a global program that connects policymakers and technology companies to help develop effective and evidence-based policies around AI and other emerging technologies. \n \nIn this particular case, Open Loop partnered with 10 European AI companies to co-create an ADIA framework (policy prototype) that those companies could test by applying it to their own \nAI applications. The policy prototype was structured into two parts: the prototype law, which was drafted as legal text, and the prototype guidance, which was drafted as a playbook. The latter provided participants with additional guidance on procedural and substantive aspects of performing the ADIA through: \n \n- A step-by-step risk assessment methodology; \n \n- An overview of values often associated with AI applications; \n \n- A taxonomy of harms; \n \n- Examples of mitigating measures. \n \nThe prototype was tested against the following three \ncriteria: 1) policy understanding; 2) policy effectiveness; 3) policy costs.The goal was to derive evidence-based recommendations relevant to ongoing policy debates around the future of AI regulation. \n \nBased on the results of the prototyping exercise and the feedback on the prototype law and playbook, the report advises lawmakers formulating requirements for AI risk assessments to take the following recommendations into account: \n \n- Focus on procedure instead of prescription as a way to determine high-risk AI applications; \n \n- Leverage a procedural risk assessment approach to determine what is the right set of regulatory requirements that apply to organisations deploying AI applications; \n \n- Provide specific and detailed guidance on how to implement an ADIA process, and release it alongside the law; \n \n- Be as specific as possible in the definition of risks within regulatory scope; \n \n- Improve documentation of risk assessment and decision-making processes by including justifications for mitigation choices; \n \n- Develop a sound taxonomy of the different AI actors involved in risk assessment; \n \n- Specify, as much as possible, the set of values that may be impacted by AI/ADM and provide guidance on how they may be in tension with one another; \n \n- Don’t reinvent the wheel; combine new processes with established ones, improving the overall approach.","PeriodicalId":139603,"journal":{"name":"Libraries & Information Technology eJournal","volume":"634 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Libraries & Information Technology eJournal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/ssrn.3772500","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 10

Abstract

This report presents the outcomes of the Open Loop policy prototyping program on Automated Decision Impact Assessment (ADIA) in Europe. Open Loop (www.openloop.org) is a global program that connects policymakers and technology companies to help develop effective and evidence-based policies around AI and other emerging technologies.

In this particular case, Open Loop partnered with 10 European AI companies to co-create an ADIA framework (policy prototype) that those companies could test by applying it to their own AI applications. The policy prototype was structured into two parts: the prototype law, which was drafted as legal text, and the prototype guidance, which was drafted as a playbook. The latter provided participants with additional guidance on procedural and substantive aspects of performing the ADIA through:

- A step-by-step risk assessment methodology;
- An overview of values often associated with AI applications;
- A taxonomy of harms;
- Examples of mitigating measures.

The prototype was tested against the following three criteria: 1) policy understanding; 2) policy effectiveness; 3) policy costs. The goal was to derive evidence-based recommendations relevant to ongoing policy debates around the future of AI regulation.

Based on the results of the prototyping exercise and the feedback on the prototype law and playbook, the report advises lawmakers formulating requirements for AI risk assessments to take the following recommendations into account:

- Focus on procedure instead of prescription as a way to determine high-risk AI applications;
- Leverage a procedural risk assessment approach to determine the right set of regulatory requirements for organisations deploying AI applications;
- Provide specific and detailed guidance on how to implement an ADIA process, and release it alongside the law;
- Be as specific as possible in the definition of risks within regulatory scope;
- Improve documentation of risk assessment and decision-making processes by including justifications for mitigation choices;
- Develop a sound taxonomy of the different AI actors involved in risk assessment;
- Specify, as much as possible, the set of values that may be impacted by AI/ADM and provide guidance on how they may be in tension with one another;
- Don't reinvent the wheel; combine new processes with established ones, improving the overall approach.
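The playbook's procedural approach lends itself to structured documentation. As a rough illustration only, the Python sketch below models what an ADIA record with justified mitigation choices might look like. The harm categories, the 1-5 likelihood and severity scales, the escalation threshold, and all class and field names are invented for this example; none of them come from the report, which defines its own taxonomy and methodology.

```python
from dataclasses import dataclass, field
from enum import Enum

class HarmCategory(Enum):
    # Hypothetical harm taxonomy for illustration; the report's
    # actual taxonomy of harms is more detailed.
    PHYSICAL = "physical"
    ECONOMIC = "economic"
    PSYCHOLOGICAL = "psychological"
    SOCIETAL = "societal"

@dataclass
class RiskEntry:
    """One assessed risk, documented with the justification for its mitigation,
    as the report recommends."""
    description: str
    harm_category: HarmCategory
    impacted_values: list[str]   # e.g. ["privacy", "fairness"]
    likelihood: int              # 1 (rare) .. 5 (almost certain); illustrative scale
    severity: int                # 1 (negligible) .. 5 (critical); illustrative scale
    mitigation: str
    justification: str           # why this mitigation was chosen over alternatives

    @property
    def risk_score(self) -> int:
        # A common likelihood x severity heuristic, not prescribed by the report.
        return self.likelihood * self.severity

@dataclass
class ADIARecord:
    """Documentation of one AI application's impact assessment."""
    application: str
    deployer: str                # which actor in the AI value chain performed the ADIA
    risks: list[RiskEntry] = field(default_factory=list)

    def high_risk(self, threshold: int = 15) -> list[RiskEntry]:
        """Risks whose score crosses an (assumed) escalation threshold."""
        return [r for r in self.risks if r.risk_score >= threshold]

# Usage sketch with fabricated example data:
record = ADIARecord(application="loan approval model", deployer="ExampleBank")
record.risks.append(RiskEntry(
    description="Model may score applicants from some postcodes lower",
    harm_category=HarmCategory.ECONOMIC,
    impacted_values=["fairness", "non-discrimination"],
    likelihood=3, severity=5,
    mitigation="Fairness audit plus human review of declined applications",
    justification="Retraining alone did not remove the disparity in testing",
))
print([r.description for r in record.high_risk()])
```

Keeping the justification field alongside each mitigation mirrors the report's recommendation that documentation should record not just what was done, but why that choice was made over alternatives.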