Title: AI Impact Assessment: A Policy Prototyping Experiment
Authors: Norberto Nuno Gomes de Andrade, V. Kontschieder
DOI: 10.2139/ssrn.3772500 (https://doi.org/10.2139/ssrn.3772500)
Journal: Libraries & Information Technology eJournal
Publication date: 2021-01-01
Citations: 10
Abstract
This report presents the outcomes of the Open Loop policy prototyping program on Automated Decision Impact Assessment (ADIA) in Europe. Open Loop (www.openloop.org) is a global program that connects policymakers and technology companies to help develop effective and evidence-based policies around AI and other emerging technologies.
In this particular case, Open Loop partnered with 10 European AI companies to co-create an ADIA framework (policy prototype) that those companies could test by applying it to their own AI applications. The policy prototype was structured into two parts: the prototype law, which was drafted as legal text, and the prototype guidance, which was drafted as a playbook. The latter provided participants with additional guidance on procedural and substantive aspects of performing the ADIA through:
- A step-by-step risk assessment methodology;
- An overview of values often associated with AI applications;
- A taxonomy of harms;
- Examples of mitigating measures.
The prototype was tested against three criteria: 1) policy understanding; 2) policy effectiveness; and 3) policy costs. The goal was to derive evidence-based recommendations relevant to ongoing policy debates around the future of AI regulation.
Based on the results of the prototyping exercise and the feedback on the prototype law and playbook, the report advises lawmakers formulating requirements for AI risk assessments to take the following recommendations into account:
- Focus on procedure instead of prescription as a way to determine high-risk AI applications;
- Leverage a procedural risk assessment approach to determine the right set of regulatory requirements for organisations deploying AI applications;
- Provide specific and detailed guidance on how to implement an ADIA process, and release it alongside the law;
- Be as specific as possible in the definition of risks within regulatory scope;
- Improve documentation of risk assessment and decision-making processes by including justifications for mitigation choices;
- Develop a sound taxonomy of the different AI actors involved in risk assessment;
- Specify, as much as possible, the set of values that may be impacted by AI/ADM and provide guidance on how they may be in tension with one another;
- Don’t reinvent the wheel; combine new processes with established ones, improving the overall approach.