Learning from the Failure of Autonomous and Intelligent Systems: Accidents, Safety, and Sociotechnical Sources of Risk

Carl Macrae
Journal: Risk Analysis: An Official Publication of the Society for Risk Analysis
DOI: 10.1111/risa.13850
Pages: 1999-2025
Published: 2022-09-01 (Epub 2021-11-23)
Citations: 20

Abstract


Efforts to develop autonomous and intelligent systems (AIS) have exploded across a range of settings in recent years, from self-driving cars to medical diagnostic chatbots. These have the potential to bring enormous benefits to society but also have the potential to introduce new-or amplify existing-risks. As these emerging technologies become more widespread, one of the most critical risk management challenges is to ensure that failures of AIS can be rigorously analyzed and understood so that the safety of these systems can be effectively governed and improved. AIS are necessarily developed and deployed within complex human, social, and organizational systems, but to date there has been little systematic examination of the sociotechnical sources of risk and failure in AIS. Accordingly, this article develops a conceptual framework that characterizes key sociotechnical sources of risk in AIS by reanalyzing one of the most publicly reported failures to date: the 2018 fatal crash of Uber's self-driving car. Publicly available investigative reports were systematically analyzed using constant comparative analysis to identify key sources and patterns of sociotechnical risk. Five fundamental domains of sociotechnical risk were conceptualized-structural, organizational, technological, epistemic, and cultural-each indicated by particular patterns of sociotechnical failure. The resulting SOTEC framework of sociotechnical risk in AIS extends existing theories of risk in complex systems and highlights important practical and theoretical implications for managing risk and developing infrastructures of learning in AIS.
