Avoiding adverse autonomous agent actions

IF 4.5 | CAS Zone 2 (Engineering & Technology) | JCR Q1 (Computer Science, Cybernetics)
P. Hancock
DOI: 10.1080/07370024.2021.1970556
Journal: Human-Computer Interaction, 29(1), 211–236
Publication date: 2021-11-16 (Journal Article)
Citations: 16

Abstract

Few today would dispute that the age of autonomous machines is nearly upon us (cf., Kurzweil, 2005; Moravec, 1988), if it is not already. While it is doubtful that one can identify any fully autonomous machine system at this point in time, especially one that is openly and publicly acknowledged to be so, it is far less debatable that our present line of technological evolution is leading toward this eventuality (Endsley, 2017; Hancock, 2017a). It is this specter of the consequences of adverse events, even existentially threatening ones, emanating from these penetrative autonomous systems that is the focus of the present work. The impending and imperative question is what we intend to do about these prospective challenges. As with essentially all human discourse, we can imagine two sides to this question. One side is represented by an optimistic vision of a near-utopian future, underwritten by AI support and some inherent degree of intrinsic benevolence. The opposing vision promulgates a dystopian nightmare in which machines have gained almost total ascendancy and only a few “plucky” humans remain. The latter is most especially a featured trope of the human heroic narrative (Campbell, 1949). It will most probably be the case that neither of the extremes on this putative spectrum of possibilities will represent the eventual reality that we will actually experience. However, the ground rules are now in the process of being set which will predispose us toward one of these directions over the other (Feng et al., 2016; Hancock, 2017a). Traditionally, many have approached this general form of technological inquiry by asking questions about strengths, weaknesses, threats, and opportunities. Consequently, it is within this general framework that the present work is offered. What follows are some overall considerations of the balance of the value of such autonomous systems’ inauguration and penetration. These observations provide the bedrock from which to consider the specific strengths, weaknesses, threats (risks), and promises (opportunities) dimensions. The specific consideration of the application of the protective strategies of the well-known hierarchy of controls (Haddon, 1973) then acts as a final prefatory consideration to the concluding discussion, which examines the adverse actions of autonomous technological systems as a potential human existential threat. The term autonomy is one that has been, and still currently is, the subject of much attention, debate, and even abuse (and see Ezenkwu & Starkey, 2019). To an extent, the term seems to be flexible enough to encompass almost whatever the proximal user requires of it. For example, a simple, descriptive word cloud (Figure 1) illustrates the various terminologies that surround our present use of this focal term. It is not the present purpose here to engage in a long, polemical, and potentially unedifying dispute specifically about the term’s definition. This is because the present concern is with autonomous technological systems, and not with the greater meaning of autonomy per se, either as a property or as a process. The definition adopted here is that: “autonomous systems are generative and learn, evolve, and permanently change their functional capacities as a result of the input of operational and contextual information. Their actions necessarily become more
Source journal: Human-Computer Interaction (Engineering & Technology: Computer Science, Cybernetics)
CiteScore: 12.20
Self-citation rate: 3.80%
Articles per year: 15
Review time: >12 weeks
Journal description: Human-Computer Interaction (HCI) is a multidisciplinary journal defining and reporting on fundamental research in human-computer interaction. The goal of HCI is to be a journal of the highest quality that combines the best research and design work to extend our understanding of human-computer interaction. The target audience is the research community with an interest in both the scientific implications and practical relevance of how interactive computer systems should be designed and how they are actually used. HCI is concerned with the theoretical, empirical, and methodological issues of interaction science and system design as it affects the user.