Risks to the Public

Peter G. Neumann
{"title":"对公众的风险","authors":"Peter G. Neumann","doi":"10.1145/3617946.3617947","DOIUrl":null,"url":null,"abstract":"Where to begin? Things seem to be getting wildly out of control. I make a comment in the arti cial intelligence subsection here, relating to risks of autonomous robotic airplane pilots (Pibot, Wingman). That comment is also applicable to self-driving vehicles, hospital AI, and other life-critical or mission-critical automated and semi-automated systems: We desperately need evidence-based assurance rather than over-hyped assertions that we should simply trust the developers and operating managers. Oth- erwise, the integrity and credibility of our rampantly increasing overdependence on untrustworthy technol- ogy may cause a collapse of trust in our technology. Undoubtedly, avoiding such a systemic failure may require radical changes in how computer science and system engineering should be taught and practiced, along with corresponding oversight, and serious penalties for failures. I once again invoke the Einstein Principle that Everything should be made as simple as possible, but no simpler. It is just one of many important principles that are needed to make our system development into a solid discipline. However, beware of simplistic legislation, charlatans, frauds, and other recurring risks.","PeriodicalId":432885,"journal":{"name":"ACM SIGSOFT Software Engineering Notes","volume":"47 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Risks to the Public\",\"authors\":\"Peter G. Neumann\",\"doi\":\"10.1145/3617946.3617947\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Where to begin? Things seem to be getting wildly out of control. I make a comment in the arti cial intelligence subsection here, relating to risks of autonomous robotic airplane pilots (Pibot, Wingman). That comment is also applicable to self-driving vehicles, hospital AI, and other life-critical or mission-critical automated and semi-automated systems: We desperately need evidence-based assurance rather than over-hyped assertions that we should simply trust the developers and operating managers. Oth- erwise, the integrity and credibility of our rampantly increasing overdependence on untrustworthy technol- ogy may cause a collapse of trust in our technology. Undoubtedly, avoiding such a systemic failure may require radical changes in how computer science and system engineering should be taught and practiced, along with corresponding oversight, and serious penalties for failures. I once again invoke the Einstein Principle that Everything should be made as simple as possible, but no simpler. It is just one of many important principles that are needed to make our system development into a solid discipline. 
However, beware of simplistic legislation, charlatans, frauds, and other recurring risks.\",\"PeriodicalId\":432885,\"journal\":{\"name\":\"ACM SIGSOFT Software Engineering Notes\",\"volume\":\"47 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-10-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM SIGSOFT Software Engineering Notes\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3617946.3617947\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM SIGSOFT Software Engineering Notes","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3617946.3617947","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Where to begin? Things seem to be getting wildly out of control. I make a comment in the artificial intelligence subsection here, relating to risks of autonomous robotic airplane pilots (Pibot, Wingman). That comment is also applicable to self-driving vehicles, hospital AI, and other life-critical or mission-critical automated and semi-automated systems: we desperately need evidence-based assurance rather than over-hyped assertions that we should simply trust the developers and operating managers. Otherwise, our rampantly increasing overdependence on untrustworthy technology may cause a collapse of trust in our technology and in its integrity and credibility. Undoubtedly, avoiding such a systemic failure may require radical changes in how computer science and system engineering are taught and practiced, along with corresponding oversight and serious penalties for failures. I once again invoke the Einstein Principle that "everything should be made as simple as possible, but no simpler." It is just one of many important principles that are needed to make our system development into a solid discipline. However, beware of simplistic legislation, charlatans, frauds, and other recurring risks.