{"title":"技术自主的社会生产","authors":"V. Kaptelinin","doi":"10.1080/07370024.2021.1976641","DOIUrl":null,"url":null,"abstract":"The discussion of potential dangers, brought about by intelligent machines, can be traced back at least to Wiener (1960). However, it has never been more needed than it is now. Current technological developments make these dangers increasingly concrete and real, and so the paper by Hancock (this volume) is particularly timely. By systematically presenting and analyzing some of the key issues, problems, and approaches in the current discourse on autonomous agents, the paper does a valuable job in further engaging the HCI research community in the discourse. A key strength of the paper, in my view, is that it is apparently designed to invite comments, disagreements, and alternative perspectives. In this commentary, I reflect on a central theme in Hancock’s analysis, namely, the emergence of agents’ own intentions as a (presumably inevitable) result of the ongoing progress in artificial intelligence (AI). This is one of the most fascinating issues in the entire field of AI. The theme has not only become an object of academic debates, but also made a massive impact on popular culture (as exemplified, for instance, by movies and TV series, such as Blade Runner or Westworld). The question at the heart of the issue is: How and why can an AI system be transformed from a piece of human-controlled technology with constrained autonomy (limited to deciding how to perform the task assigned to it) to a fully autonomous agent, acting on its own intentions? Current attempts to envision a future, in which fully autonomous AI systems become a reality, often gloss over the specific causes and mechanisms of such a transformation. In some cases, e.g., in “slave uprising” scenarios, is it implied that the transformation may happen because designers, when trying to create systems that are as similar to humans as possible, fall victims, often literally, to their own success. At the most basic level, the underlying assumption appears to be that increasingly more advanced cognitive capabilities of a technology – even if they are only used when acting on someone or something else’s intentions – eventually lead to the development of self-awareness, which, in turn, gives rise to full autonomy. Hancock outlines a particular perspective on how agents’ full autonomy can be expected to develop. According to this perspective, dubbed “isles of autonomy,” the path to full autonomy starts with the emergence of isolated technologies having constrained autonomy, such as autonomous vehicles or autopilots. Each of these isles, when young and unstable, is initially surrounded and supported by human attendants, who take care of them (similarly to taking care of “prematurely born neonates”). Over time, the isles grow and eventually merge into a fully autonomous system. This perspective, even if rather metaphorical, potentially provides useful guidance for thinking about autonomous agents. However, the perspective does not clarify why and how exactly a constrained autonomy transforms into a full autonomy over the course of the described development. Arguably, the entire development may, in principle, take place without ever progressing to full autonomy. 
First, when an isle expands and the technology in question becomes less dependent on human support and maintenance, the autonomy of that technology does not necessarily become less constrained, because its tasks may still be assigned to it by someone or something else. For instance,
Journal introduction:
Human-Computer Interaction (HCI) is a multidisciplinary journal defining and reporting
on fundamental research in human-computer interaction. The goal of HCI is to be a journal
of the highest quality that combines the best research and design work to extend our
understanding of human-computer interaction. The target audience is the research
community with an interest in both the scientific implications and practical relevance of
how interactive computer systems should be designed and how they are actually used. HCI is
concerned with the theoretical, empirical, and methodological issues of interaction science
and system design as it affects the user.