A systematic review of artificial intelligence impact assessments

IF 10.7 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Bernd Carsten Stahl, Josephina Antoniou, Nitika Bhalla, Laurence Brooks, Philip Jansen, Blerta Lindqvist, Alexey Kirichenko, Samuel Marchal, Rowena Rodrigues, Nicole Santiago, Zuzanna Warso, David Wright
{"title":"人工智能影响评估的系统综述。","authors":"Bernd Carsten Stahl,&nbsp;Josephina Antoniou,&nbsp;Nitika Bhalla,&nbsp;Laurence Brooks,&nbsp;Philip Jansen,&nbsp;Blerta Lindqvist,&nbsp;Alexey Kirichenko,&nbsp;Samuel Marchal,&nbsp;Rowena Rodrigues,&nbsp;Nicole Santiago,&nbsp;Zuzanna Warso,&nbsp;David Wright","doi":"10.1007/s10462-023-10420-8","DOIUrl":null,"url":null,"abstract":"<div><p>Artificial intelligence (AI) is producing highly beneficial impacts in many domains, from transport to healthcare, from energy distribution to marketing, but it also raises concerns about undesirable ethical and social consequences. AI impact assessments (AI-IAs) are a way of identifying positive and negative impacts early on to safeguard AI’s benefits and avoid its downsides. This article describes the first systematic review of these AI-IAs. Working with a population of 181 documents, the authors identified 38 actual AI-IAs and subjected them to a rigorous qualitative analysis with regard to their purpose, scope, organisational context, expected issues, timeframe, process and methods, transparency and challenges. The review demonstrates some convergence between AI-IAs. It also shows that the field is not yet at the point of full agreement on content, structure and implementation. The article suggests that AI-IAs are best understood as means to stimulate reflection and discussion concerning the social and ethical consequences of AI ecosystems. Based on the analysis of existing AI-IAs, the authors describe a baseline process of implementing AI-IAs that can be implemented by AI developers and vendors and that can be used as a critical yardstick by regulators and external observers to evaluate organisations’ approaches to AI.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"56 11","pages":"12799 - 12831"},"PeriodicalIF":10.7000,"publicationDate":"2023-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-023-10420-8.pdf","citationCount":"10","resultStr":"{\"title\":\"A systematic review of artificial intelligence impact assessments\",\"authors\":\"Bernd Carsten Stahl,&nbsp;Josephina Antoniou,&nbsp;Nitika Bhalla,&nbsp;Laurence Brooks,&nbsp;Philip Jansen,&nbsp;Blerta Lindqvist,&nbsp;Alexey Kirichenko,&nbsp;Samuel Marchal,&nbsp;Rowena Rodrigues,&nbsp;Nicole Santiago,&nbsp;Zuzanna Warso,&nbsp;David Wright\",\"doi\":\"10.1007/s10462-023-10420-8\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Artificial intelligence (AI) is producing highly beneficial impacts in many domains, from transport to healthcare, from energy distribution to marketing, but it also raises concerns about undesirable ethical and social consequences. AI impact assessments (AI-IAs) are a way of identifying positive and negative impacts early on to safeguard AI’s benefits and avoid its downsides. This article describes the first systematic review of these AI-IAs. Working with a population of 181 documents, the authors identified 38 actual AI-IAs and subjected them to a rigorous qualitative analysis with regard to their purpose, scope, organisational context, expected issues, timeframe, process and methods, transparency and challenges. The review demonstrates some convergence between AI-IAs. It also shows that the field is not yet at the point of full agreement on content, structure and implementation. 
The article suggests that AI-IAs are best understood as means to stimulate reflection and discussion concerning the social and ethical consequences of AI ecosystems. Based on the analysis of existing AI-IAs, the authors describe a baseline process of implementing AI-IAs that can be implemented by AI developers and vendors and that can be used as a critical yardstick by regulators and external observers to evaluate organisations’ approaches to AI.</p></div>\",\"PeriodicalId\":8449,\"journal\":{\"name\":\"Artificial Intelligence Review\",\"volume\":\"56 11\",\"pages\":\"12799 - 12831\"},\"PeriodicalIF\":10.7000,\"publicationDate\":\"2023-03-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://link.springer.com/content/pdf/10.1007/s10462-023-10420-8.pdf\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Artificial Intelligence Review\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s10462-023-10420-8\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence Review","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10462-023-10420-8","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 10

Abstract



Artificial intelligence (AI) is producing highly beneficial impacts in many domains, from transport to healthcare, from energy distribution to marketing, but it also raises concerns about undesirable ethical and social consequences. AI impact assessments (AI-IAs) are a way of identifying positive and negative impacts early on to safeguard AI’s benefits and avoid its downsides. This article describes the first systematic review of these AI-IAs. Working with a population of 181 documents, the authors identified 38 actual AI-IAs and subjected them to a rigorous qualitative analysis with regard to their purpose, scope, organisational context, expected issues, timeframe, process and methods, transparency and challenges. The review demonstrates some convergence between AI-IAs. It also shows that the field is not yet at the point of full agreement on content, structure and implementation. The article suggests that AI-IAs are best understood as means to stimulate reflection and discussion concerning the social and ethical consequences of AI ecosystems. Based on the analysis of existing AI-IAs, the authors describe a baseline process of implementing AI-IAs that can be implemented by AI developers and vendors and that can be used as a critical yardstick by regulators and external observers to evaluate organisations’ approaches to AI.

Source journal
Artificial Intelligence Review (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 22.00
Self-citation rate: 3.30%
Articles published: 194
Average review time: 5.3 months
Journal description: Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.