Human Factors, Pub Date: 2024-08-18, DOI: 10.1177/00187208241273379
Jade Driggs, Lisa Vangsness
"Judgments of Difficulty (JODs) While Observing an Automated System Support the Media Equation and Unique Agent Hypotheses."

Objective: We investigated how people used cues to make Judgments of Difficulty (JODs) while observing automation perform a task and while performing the task themselves.
Background: Task difficulty is a factor affecting trust in automation; however, no research has explored how individuals make JODs when watching automation or whether these judgments resemble those made while watching humans. It is also unclear how cue use when observing automation differs as a function of experience.
Method: The study involved a visual search task. Some participants performed the task first and then watched automation complete it; others watched and then performed; a third group alternated between performing and watching. After each trial, participants made a JOD by indicating whether the task was easier or harder than before. Task difficulty changed randomly every five trials.
Results: A Bayesian regression suggested that cue use while observing automation is both similar to and different from cue use while observing humans. For central cues, support for the Unique Agent Hypothesis (UAH) was bounded by experience: those who performed the task first underweighted central cues when making JODs, relative to their counterparts in a previous study involving humans. For peripheral cues, support for the Media Equation Hypothesis (MEH) was unequivocal, and participants weighted cues similarly across observation sources.
Conclusion: When watching automation perform a task, people weighted cues both similarly to and differently from when they watched humans, supporting the Media Equation and Unique Agent Hypotheses.
Application: This study adds to a growing understanding of judgments in human-human and human-automation interactions.

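The abstract reports a Bayesian regression relating trial cues to easier/harder judgments. As a rough illustration of that analysis structure only (not the authors' model), the sketch below fits a plain logistic regression to simulated trial data; the cue names, effect sizes, and coding are hypothetical.

```python
# Simplified stand-in for the cue-weight analysis described in the abstract.
# The study used a Bayesian regression; this sketch fits an ordinary logistic
# regression instead, and every variable name and value is a placeholder.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials = 200

# Hypothetical predictors: a central cue (e.g., search-set size), a peripheral
# cue (e.g., trial duration), and whether the trial was observed automation.
central_cue = rng.normal(size=n_trials)
peripheral_cue = rng.normal(size=n_trials)
observed_automation = rng.integers(0, 2, size=n_trials)  # 1 = watched automation

X = np.column_stack([central_cue, peripheral_cue, observed_automation])
# Simulated binary JOD: 1 = "harder than before", 0 = "easier than before"
logit = 0.8 * central_cue + 0.5 * peripheral_cue - 0.2 * observed_automation
jod_harder = (rng.random(n_trials) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, jod_harder)
print(dict(zip(["central", "peripheral", "source"], model.coef_[0])))
```
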
Human Factors, Pub Date: 2024-08-01 (Epub 2023-09-26), DOI: 10.1177/00187208231201054, pp. 2025-2040
Amy S McDonnell, Kaedyn W Crabtree, Joel M Cooper, David L Strayer
"This Is Your Brain on Autopilot 2.0: The Influence of Practice on Driver Workload and Engagement During On-Road, Partially Automated Driving."
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11141086/pdf/

Objective: This on-road study employed behavioral and neurophysiological measurement techniques to assess the influence of six weeks of practice driving a Level 2 partially automated vehicle on driver workload and engagement.
Background: Level 2 partial automation requires a driver to maintain supervisory control of the vehicle to detect "edge cases" that the automation is not equipped to handle. There is mixed evidence regarding whether drivers can do so effectively. There is also an open question regarding how practice and familiarity with automation influence driver cognitive states over time.
Method: Behavioral and neurophysiological measures of driver workload and visual engagement were recorded from 30 participants at two testing sessions, with a six-week familiarization period in between. At both sessions, participants drove a vehicle with partial automation engaged (Level 2) and not engaged (Level 0) on two interstate highways while reaction times to the detection response task (DRT) and neurophysiological (EEG) metrics of frontal theta and parietal alpha were recorded.
Results: DRT results demonstrated that partially automated driving placed more cognitive load on drivers than manual driving and that six weeks of practice decreased driver workload, though only when the driving environment was relatively simple. EEG metrics of frontal theta and parietal alpha showed null effects of partial automation.
Conclusion: Driver workload was influenced by level of automation, specific highway characteristics, and practice over time, but only on a behavioral level, not on a neural level.
Application: These findings expand our understanding of the influence of practice on driver cognitive states under Level 2 partial automation.

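Frontal theta and parietal alpha are spectral band-power metrics. The sketch below shows one standard way such metrics are computed from EEG (a Welch power spectral density integrated over a frequency band); the channel sites, sampling rate, band edges, and signals are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch of the kind of EEG band-power metric the abstract refers to
# (frontal theta, parietal alpha). All values below are placeholders.
import numpy as np
from scipy.signal import welch

fs = 256  # Hz, assumed sampling rate
eeg_frontal = np.random.randn(60 * fs)   # placeholder 60 s signal, frontal site
eeg_parietal = np.random.randn(60 * fs)  # placeholder 60 s signal, parietal site

def band_power(signal, fs, lo, hi):
    """Integrate the Welch PSD between lo and hi Hz."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return np.sum(psd[mask]) * (freqs[1] - freqs[0])

frontal_theta = band_power(eeg_frontal, fs, 4, 8)     # theta: ~4-8 Hz
parietal_alpha = band_power(eeg_parietal, fs, 8, 13)  # alpha: ~8-13 Hz
print(frontal_theta, parietal_alpha)
```
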
Human Factors, Pub Date: 2024-08-01 (Epub 2023-11-18), DOI: 10.1177/00187208231212259, pp. 1981-1994
Joseph W Hendricks, S Camille Peres
"An Experimental Investigation of Hazard Statement Compliance in Procedures Using Eye Tracking Technology: Should Task Be Included in the C-HIP Model?"

Objective: Using eye tracking technology, this study sought to determine whether differences in hazard statement (HS) compliance based on design elements are attributable to attention maintenance (AM).
Background: Recent empirical work has demonstrated counterintuitive findings for HS designs embedded in procedures. Specifically, prevalent HS designs in procedures were associated with lower compliance.
Method: The current study used eye tracking technology to determine whether participants attend to HSs differently based on the inclusion or absence of visually distinct HS design elements typically used for consumer products. We used the two designs that previously yielded the largest gap in HS compliance. In a fully crossed design, 33 participants completed four rounds of tasks using four procedures with embedded HSs. To assess AM, eye tracking was used to measure gaze and fixation duration.
Results: The results indicated differences in AM between the two designs. The HSs that included elements traditionally considered effective in the consumer products literature elicited shorter fixation durations and were associated with lower compliance. However, AM did not mediate the design effect on compliance.
Conclusions: The results suggest that HS design affects individuals as early as the AM stage of the C-HIP model. The absence of a design-AM-compliance mediation suggests that other C-HIP elements more directly explain the design-compliance effects.
Application: These results provide further evidence that the communication of Health, Environment, and Safety information in procedures may need to differ from that on consumer products, suggesting that design efficacy may be task dependent.

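The mediation question posed here (does attention maintenance carry the design effect on compliance?) is often tested as an indirect effect. The sketch below illustrates that logic with simulated placeholder data and ordinary least squares; it is not the authors' analysis, and all effect sizes are invented.

```python
# Illustrative indirect-effect (mediation) sketch: HS design -> attention
# maintenance (fixation duration) -> compliance. Data are simulated placeholders.
import numpy as np

rng = np.random.default_rng(1)
n = 300
design = rng.integers(0, 2, n)                          # 0/1 = two HS designs
fixation = 2.0 - 0.6 * design + rng.normal(0, 1, n)     # mediator: fixation duration (s)
compliance = 0.3 * design + 0.05 * fixation + rng.normal(0, 1, n)

def ols_coefs(y, predictors):
    """OLS coefficients with an intercept (intercept first)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols_coefs(fixation, [design])[1]                    # design -> mediator
b = ols_coefs(compliance, [design, fixation])[2]        # mediator -> outcome, controlling for design
print("indirect effect (a*b) =", a * b)
```
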
Human Factors, Pub Date: 2024-08-01 (Epub 2023-08-26), DOI: 10.1177/00187208231197347, pp. 1995-2007
Tobias Rieger, Luisa Kugler, Dietrich Manzey, Eileen Roesler
"The (Im)perfect Automation Schema: Who Is Trusted More, Automated or Human Decision Support?"

Objective: This study's purpose was to better understand the dynamics of trust attitude and behavior in human-agent interaction.
Background: Whereas past research provided evidence for a perfect automation schema, more recent research has provided contradictory evidence.
Method: To disentangle these conflicting findings, we conducted an online experiment using a simulated medical X-ray task. We manipulated the framing of the support agent (artificial intelligence (AI) versus human expert versus human novice) between subjects and failure experience (perfect support, imperfect support, back-to-perfect support) within subjects. Trust attitude, trust behavior, and perceived reliability served as dependent variables.
Results: Trust attitude and perceived reliability were highest for the human expert, lower for the AI, and lowest for the human novice. Moreover, the results showed the typical pattern of trust formation, dissolution, and restoration for trust attitude, trust behavior, and perceived reliability. Forgiveness after failure experience did not differ between agents.
Conclusion: The results strongly imply the existence of an imperfect automation schema. This illustrates the need to consider agent expertise in human-agent interaction.
Application: When replacing human experts with AI as support agents, the challenge of lower trust attitude toward the novel agent might arise.

Human Factors, Pub Date: 2024-08-01 (Epub 2023-09-12), DOI: 10.1177/00187208231200721, pp. 2120-2137
Daniel Sousa Schulman, Nishant Jalgaonkar, Sneha Ojha, Ana Rivero Valles, Monica L H Jones, Shorya Awtar
"A Visual-Vestibular Model to Predict Motion Sickness for Linear and Angular Motion."

Objective: This study proposed a model to predict passenger motion sickness in the presence of a visual-vestibular conflict and assessed its performance against previously recorded experimental data.
Background: While several models have proven useful for predicting motion sickness under repetitive motion, improvements are still desired for predicting motion sickness in realistic driving conditions. There remains a need for a model that considers angular and linear visual-vestibular motion inputs in three dimensions to improve prediction of passenger motion sickness.
Method: The model combines subjective vertical conflict theory with human motion perception models, integrating visually and vestibularly sensed 6-DoF motion signals in a novel architecture.
Results: Model predictions were compared to motion sickness data obtained from studies conducted in motion simulators as well as from on-road vehicle testing, yielding trends congruent with the observed results in both cases.
Conclusion: The model demonstrated the ability to predict trends in motion sickness response for conditions in which a passenger performs a task on a handheld device versus facing forward and looking ahead under realistic driving conditions. However, further analysis across a larger population is necessary to better assess the model's performance.
Application: The proposed model can be used as a tool to predict motion sickness under different levels of visual-vestibular conflict. This can be leveraged to design interventions capable of mitigating passenger motion sickness. Further, the model can provide insights that aid the development of passenger experiences inside autonomous vehicles.

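Subjective vertical conflict models generally accumulate the mismatch between the sensed vertical and an internal estimate of it into a sickness index. The one-dimensional sketch below illustrates that idea only; the paper's model is a 6-DoF visual-vestibular architecture, and the input profile, first-order internal model, and time constants here are illustrative assumptions.

```python
# Highly simplified, one-dimensional sketch of the subjective-vertical-conflict
# idea: conflict between sensed and internally estimated vertical is slowly
# accumulated into a motion-sickness index. All parameters are placeholders.
import numpy as np

dt = 0.1                       # s
t = np.arange(0, 600, dt)      # 10 minutes of simulated motion
sensed_vertical = 0.1 * np.sin(2 * np.pi * 0.2 * t)  # placeholder tilt input (rad)

tau_internal = 5.0             # s, internal-estimate time constant (assumed)
tau_accum = 720.0              # s, slow leaky accumulation (assumed)
internal_estimate = 0.0
sickness = 0.0
sickness_trace = []

for v in sensed_vertical:
    # Internal model lags the sensed vertical (first-order low-pass filter)
    internal_estimate += dt / tau_internal * (v - internal_estimate)
    conflict = abs(v - internal_estimate)
    # Leaky accumulation of conflict into a sickness index
    sickness += dt * (conflict - sickness / tau_accum)
    sickness_trace.append(sickness)

print("final sickness index:", sickness_trace[-1])
```
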
Human Factors, Pub Date: 2024-08-01 (Epub 2023-10-04), DOI: 10.1177/00187208231204704, pp. 2104-2119
Satyajit Upasani, Divya Srinivasan, Qi Zhu, Jing Du, Alexander Leonessa
"Eye-Tracking in Physical Human-Robot Interaction: Mental Workload and Performance Prediction."

Background: In physical human-robot interaction (pHRI), the need to learn the robot's motor-control dynamics is associated with increased cognitive load. Eye-tracking metrics can help characterize the dynamics of fluctuating mental workload over the course of learning.
Objective: The aim of this study was to test the sensitivity and reliability of eye-tracking measures with respect to variations in task difficulty, as well as their performance-prediction capability, in physical human-robot collaboration tasks involving an industrial robot for object comanipulation.
Methods: Participants (9M, 9F) learned to co-perform a virtual pick-and-place task with a bimanual robot over multiple trials. The robot's joint stiffness was manipulated to increase motor-coordination demands. The psychometric properties of eye-tracking measures and their ability to predict performance were investigated.
Results: Stationary gaze entropy and pupil diameter were the most reliable and sensitive measures of workload associated with changes in task difficulty and learning. Increased task difficulty was more likely to result in a robot-monitoring strategy. Eye-tracking measures predicted the occurrence of success or failure in each trial with 70% sensitivity and 71% accuracy.
Conclusion: The sensitivity and reliability of eye-tracking measures were acceptable, although values were lower than those observed in cognitive domains. Measures of gaze behavior indicative of visual monitoring strategies were most sensitive to task-difficulty manipulations and should be explored further for the pHRI domain, where motor control and internal-model formation are likely strong contributors to workload.
Application: Future collaborative robots can adapt to human cognitive state and skill level measured using eye-tracking measures of workload and visual attention.

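Stationary gaze entropy is typically computed as the Shannon entropy of the distribution of fixations over spatial bins or areas of interest. A minimal sketch of that computation, assuming an arbitrary 8x6 grid and placeholder fixation data (the study's display layout and binning are not reproduced here):

```python
# Sketch of one common stationary-gaze-entropy computation: Shannon entropy of
# the fixation distribution over spatial bins. Fixation data are placeholders.
import numpy as np

rng = np.random.default_rng(2)
fix_x = rng.uniform(0, 1920, 500)   # hypothetical fixation x-coordinates (px)
fix_y = rng.uniform(0, 1080, 500)   # hypothetical fixation y-coordinates (px)

counts, _, _ = np.histogram2d(fix_x, fix_y, bins=[8, 6])  # 8x6 spatial grid
p = counts.ravel() / counts.sum()
p = p[p > 0]                                              # drop empty bins
stationary_gaze_entropy = -np.sum(p * np.log2(p))         # bits
print(stationary_gaze_entropy)
```
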
Human Factors, Pub Date: 2024-08-01 (Epub 2023-11-09), DOI: 10.1177/00187208231206324, pp. 2041-2056
Peter Le, Emily H L Mills, Charles A Weisenbach, Kermit G Davis
"Neck Muscle Coactivation Response to Varied Levels of Mental Workload During Simulated Flight Tasks."

Objective: To evaluate neck muscle coactivation across different levels of mental workload during simulated flight tasks.
Background: Neck pain (NP) is highly prevalent among military aviators. Given the complex nature of the flight environment, mental workload may be a risk factor for NP. It may induce higher levels of neck muscle coactivity, which over time may accelerate fatigue, increase neck discomfort, and affect flight task performance.
Method: Three counterbalanced mental workload conditions, represented by simulated flight tasks modulated by interstimulus frequency and complexity, were investigated using the Modifiable Multitasking Environment (ModME). The primary measure was a neck coactivation index describing the neuromuscular effort of the neck muscles as a system. Additional measures included perceived workload (NASA TLX), subjective discomfort, and task performance. Participants (n = 60; 30M, 30F) performed three test conditions over 1 hr each while seated in a simulated seating environment.
Results: Neck coactivation indices (CoA) and subjective neck discomfort increased with the level of mental workload. Average CoAs for low, medium, and high workloads were .0278 (SD = .0232), .0286 (SD = .0231), and .0295 (SD = .0228), respectively. NASA TLX mental, temporal, effort, and overall scores also increased with the level of mental workload assigned. For ModME task performance, the overall performance score, monitoring accuracy, and resource management accuracy decreased, while reaction times increased, with increasing mental workload. Communication accuracy was lowest in the low mental workload condition but had higher reaction times relative to increasing workload.
Conclusion: Mental workload affects neck muscle coactivation during combinations of simulated flight tasks in a simulated helicopter seating environment.
Application: The results of this study provide insight into the physical response to mental workload. With increasing multisensory modalities in the work environment, these insights may assist in considering the physical effects of cognitive factors.

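A coactivation index generally expresses antagonist activity as a share of total activity across a muscle pair. The sketch below shows one common formulation applied to placeholder normalized EMG envelopes; the study's exact index, normalization, and muscle set may differ.

```python
# Sketch of a generic coactivation index (lower of an antagonist pair as a
# share of the pair's total activity), averaged over time. EMG is simulated.
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical normalized EMG envelopes (fraction of MVC) for a flexor/extensor pair
flexor = np.abs(rng.normal(0.05, 0.02, 6000))     # e.g., a neck flexor
extensor = np.abs(rng.normal(0.08, 0.03, 6000))   # e.g., a neck extensor

antagonist = np.minimum(flexor, extensor)          # lower of the pair at each sample
total = flexor + extensor
coactivation_index = np.mean(2 * antagonist / total)
print("CoA:", coactivation_index)
```
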
Human Factors, Pub Date: 2024-08-01 (Epub 2023-11-09), DOI: 10.1177/00187208231204567, pp. 2057-2081
Amelia C Warden, Christopher D Wickens, Benjamin A Clegg, Daniel Rehberg, Francisco R Ortega
"Information Access Effort: The Role of Head Movements for Information Presented at Increasing Eccentricity on Flat Panel and Head-Mounted Displays."

Objective: This experiment examined performance costs when processing two sources of information positioned at increasing distances from each other, using a flat panel display and an augmented reality head-mounted display (AR-HMD).
Background: The AR-HMD enables positioning virtual information at various distances in space. However, the proximity compatibility principle suggests that when two sources of information must be mentally integrated, placing them closer together aids performance, whereas increased separation hurts integration performance more than it hurts performance when a single source requires focused attention. Previous studies have provided inconsistent findings regarding the costs associated with increased separation, and few have examined separation for both focused and integration tasks, compared vertical and lateral separation, or measured head movements.
Method: Three experiments collectively examined these issues using a flat panel display and a virtual display presented with an HMD, where the separation of information varied laterally or vertically during a focused attention (digit reading) task and an information integration (mental subtraction) task.
Results: There was no performance cost for either display when information was increasingly separated. However, head movements mitigated performance costs by preserving accuracy at larger separations without increasing response time.
Conclusion: Head movements appear to mitigate the performance costs associated with presenting information increasingly far apart on flat panel displays and HMDs. Both eye scanning and head movements appear to be less effortful than expected.
Application: These findings have important implications for design guidelines regarding the placement of information on flat panel displays and, more specifically, HMDs, which can present information 360° around the user.

Human Factors, Pub Date: 2024-08-01 (Epub 2023-08-27), DOI: 10.1177/00187208231196738, pp. 2008-2024
Isabella Gegoff, Monica Tatasciore, Vanessa Bowden, Jason McCarley, Shayne Loft
"Transparent Automated Advice to Mitigate the Impact of Variation in Automation Reliability."
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11141097/pdf/

Objective: To examine the extent to which increased automation transparency can mitigate the potential negative effects of low and high automation reliability on disuse and misuse of automated advice and on perceived trust in automation.
Background: Automated decision aids that vary in the reliability of their advice are increasingly used in workplaces. Low-reliability automation can increase disuse of automated advice, while high-reliability automation can increase misuse. These effects could be reduced if the rationale underlying automated advice is made more transparent.
Methods: Participants selected the optimal UV to complete missions. The Recommender (an automated decision aid) assisted participants by providing advice; however, it was not always reliable. Participants determined whether the Recommender provided accurate information and whether to accept or reject its advice. The level of automation transparency (medium, high) and reliability (low: 65%, high: 90%) were manipulated between subjects.
Results: With high-reliability compared to low-reliability automation, participants made more accurate decisions (correctly accepting advice and identifying whether information was accurate or inaccurate), made faster decisions, and reported increased trust in automation. Increased transparency led to more accurate and faster decisions, lower subjective workload, and higher usability ratings. It also eliminated the increased automation disuse associated with low-reliability automation. However, transparency did not mitigate the misuse associated with high-reliability automation.
Conclusion: Transparency protected against disuse of low-reliability automation, but not against the increased misuse potentially associated with reduced monitoring and verification of high-reliability automation.
Application: These outcomes can inform the design of transparent automation to improve human-automation teaming under conditions of varied automation reliability.

Human Factors, Pub Date: 2024-08-01 (Epub 2023-09-21), DOI: 10.1177/00187208231202572, pp. 2082-2103
Cindy Candrian, Anne Scherer
"How Terminology Affects Users' Responses to System Failures."
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11141081/pdf/

Objective: The objective of our research is to advance the understanding of behavioral responses to a system's error. By examining trust as a dynamic variable and drawing from attribution theory, we explain the underlying mechanism and suggest how terminology can be used to mitigate the so-called algorithm aversion. In this way, we show that the use of different terms may shape consumers' perceptions and provide guidance on how these differences can be mitigated.
Background: Previous research has interchangeably used various terms to refer to a system, and results regarding trust in systems have been ambiguous.
Methods: Across three studies, we examine the effect of different system terminology on consumer behavior following a system failure.
Results: Our results show that terminology crucially affects user behavior. Describing a system as "AI" (i.e., self-learning and perceived as more complex) instead of as "algorithmic" (i.e., a less complex rule-based system) leads to more favorable behavioral responses by users when a system error occurs.
Conclusion: We suggest that in cases when a system's characteristics do not allow for it to be called "AI," users should be provided with an explanation of why the system's error occurred, and task complexity should be pointed out. We highlight the importance of terminology, as this can unintentionally impact the robustness and replicability of research findings.
Application: This research offers insights for industries utilizing AI and algorithmic systems, highlighting how strategic terminology use can shape user trust and responses to errors, thereby enhancing system acceptance.