How Terminology Affects Users' Responses to System Failures
Cindy Candrian, Anne Scherer
Human Factors, 2024-08-01 (Epub 2023-09-21), pp. 2082-2103
DOI: 10.1177/00187208231202572
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11141081/pdf/
Citations: 0
Abstract
Objective: The objective of our research is to advance the understanding of behavioral responses to a system's error. By examining trust as a dynamic variable and drawing on attribution theory, we explain the underlying mechanism and suggest how terminology can be used to mitigate so-called algorithm aversion. In this way, we show that the use of different terms may shape consumers' perceptions and provide guidance on how these differences can be mitigated.
Background: Previous research has used various terms interchangeably to refer to a system, and results regarding trust in systems have been ambiguous.
Methods: Across three studies, we examine the effect of different system terminology on consumer behavior following a system failure.
Results: Our results show that terminology crucially affects user behavior. Describing a system as "AI" (i.e., self-learning and perceived as more complex) instead of as "algorithmic" (i.e., a less complex rule-based system) leads to more favorable behavioral responses by users when a system error occurs.
Conclusion: We suggest that when a system's characteristics do not allow it to be called "AI," users should be given an explanation of why the system's error occurred, and the complexity of the task should be pointed out. We highlight the importance of terminology, as it can unintentionally affect the robustness and replicability of research findings.
Application: This research offers insights for industries utilizing AI and algorithmic systems, highlighting how strategic terminology use can shape user trust and response to errors, thereby enhancing system acceptance.
About this journal:
Human Factors: The Journal of the Human Factors and Ergonomics Society publishes peer-reviewed scientific studies in human factors/ergonomics that present theoretical and practical advances concerning the relationship between people and technologies, tools, environments, and systems. Papers published in Human Factors leverage fundamental knowledge of human capabilities and limitations – and the basic understanding of cognitive, physical, behavioral, physiological, social, developmental, affective, and motivational aspects of human performance – to yield design principles; enhance training, selection, and communication; and ultimately improve human-system interfaces and sociotechnical systems that lead to safer and more effective outcomes.