Autonomous Systems and Technology Resistance: New Tools for Monitoring Acceptance, Trust, and Tolerance

Massimiliano L. Cappuccio, Jai C. Galliott, Friederike Eyssel, Alessandro Lanteri

International Journal of Social Robotics, published 2023-11-06. DOI: 10.1007/s12369-023-01065-2
Abstract
We introduce the notion of Tolerance for autonomous artificial agents (and its antithetical concept, Intolerance), motivating its theoretical adoption in the fields of social robotics and human-agent interaction, where it can effectively complement two contiguous but essentially distinct constructs, Acceptance and Trust, that are broadly used by researchers. We offer a comprehensive conceptual model of Tolerance, construed as a user's insusceptibility or resilience to Autonomy Estrangement (i.e., the uncanny sense of isolation and displacement experienced by humans who believe, for right or wrong reasons, that robots can subvert and/or control their lives). We use Intolerance to indicate the opposite property, that is, the user's susceptibility or proneness to Autonomy Estrangement. Tolerance and Intolerance are thus inverse representations of the same phenomenological continuum: Intolerance increases as Tolerance decreases, and vice versa. While Acceptance and Trust measure how satisfying and efficacious the user's interaction with a particular robot is, the Tolerance/Intolerance dyad reflects how the user's attitude is shaped by deeply held normative beliefs about robots in general. So defined, low Tolerance (that is, high Intolerance) is expected to correlate with antagonistic responses toward the prospect of adoption: specifically, Intolerant attitudes predict the kind of anxious and hostile behaviours toward Agents that originate from concerns that autonomous systems could deeply disrupt the lives of humans (affecting their work cultures, ways of living, systems of values, etc.) or dominate them (making humans redundant, undermining their authority, threatening their uniqueness, etc.). Negative beliefs and worldviews about Agents are thus the cause of the Intolerant attitude toward Agents, which predicts Autonomy Estrangement, which in turn correlates with low Adoption Propensity and with avoidance and rejection behaviours.
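The hypothesized chain (negative beliefs about Agents → Intolerance → Autonomy Estrangement → low Adoption Propensity) can be read as a simple path or mediation model. The sketch below is purely illustrative and is not taken from the paper: the variable names, effect sizes, and synthetic data are assumptions used only to show how such a chain of correlations might be probed in code.

```python
# Illustrative sketch only: simulates the hypothesized chain
# negative beliefs -> Intolerance -> Autonomy Estrangement -> low Adoption Propensity.
# Effect sizes and variable names are assumptions, not values reported in the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

negative_beliefs = rng.normal(size=n)  # normative beliefs/worldviews about Agents
intolerance = 0.7 * negative_beliefs + rng.normal(scale=0.7, size=n)
autonomy_estrangement = 0.6 * intolerance + rng.normal(scale=0.8, size=n)
adoption_propensity = -0.5 * autonomy_estrangement + rng.normal(scale=0.9, size=n)

def r(x, y):
    """Pearson correlation between two score vectors."""
    return np.corrcoef(x, y)[0, 1]

print("beliefs -> intolerance:      r =", round(r(negative_beliefs, intolerance), 2))
print("intolerance -> estrangement: r =", round(r(intolerance, autonomy_estrangement), 2))
print("estrangement -> adoption:    r =", round(r(autonomy_estrangement, adoption_propensity), 2))
```

In an actual study these constructs would be measured with questionnaire scales rather than simulated, and the chain would typically be tested with regression or structural equation modelling; the sketch only makes the direction and sign of the hypothesized relations concrete.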
About the Journal
Social Robotics is the study of robots that are able to interact and communicate among themselves, with humans, and with the environment, within the social and cultural structures attached to their roles. The journal covers a broad spectrum of topics related to the latest technologies, new research results, and developments in the area of social robotics at all levels, from advances in core enabling technologies to system integration, aesthetic design, applications, and social implications. It provides a platform for like-minded researchers to present their findings and latest developments in social robotics, covering relevant advances in engineering, computing, the arts, and the social sciences.
The journal publishes original, peer-reviewed articles and contributions by leading researchers and developers on innovative ideas and concepts, new discoveries and improvements, and novel applications. These cover the latest fundamental advances in the core technologies that form the backbone of social robotics, distinguished developmental projects in the area, seminal works in aesthetic design, ethics, and philosophy, and studies on the social impact and influence of social robotics.