Human Factors · Pub Date: 2025-03-03 · DOI: 10.1177/00187208251323132
Nicola Vasta, Francesco Biondi
Title: Effect of Partially Automated Driving on Mental Workload, Visual Behavior, and Engagement in Nondriving-Related Tasks: A Meta-Analysis.

Objective: To investigate the effect of partial driving automation on mental workload, visual behavior, and engagement in nondriving-related tasks.

Background: The literature on the human factors of partially automated driving offers mixed findings. Some studies show that partial driving automation results in suboptimal mental workload, while others found it imposes workload similar to that observed during manual driving. Likewise, some studies report a marked increase in off-road glances when the automated system is engaged, whereas other work has failed to replicate this pattern.

Method: Forty-one studies involving 1482 participants were analyzed following the PRISMA approach.

Results: No significant differences in mental workload were found between manual and partially automated driving. Drivers were more likely to glance away from the forward roadway and to engage in nondriving-related tasks when the partially automated system was engaged.

Conclusion: Although partial driving automation comes with some intended safety benefits, its use is also associated with increased engagement in nondriving-related activities.

Application: These findings add to our understanding of the safety of partial automation and provide valuable information to human factors practitioners and regulators about the use and potential safety risks of these systems in the real world.
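The pooling step behind such a meta-analysis can be sketched as a standard random-effects model. The effect sizes and variances below are illustrative placeholders, not data from the paper, and the DerSimonian-Laird estimator is one common choice, not necessarily the one the authors used:

```python
import math

def random_effects_pool(effects, variances):
    """Pool standardized mean differences with a DerSimonian-Laird
    random-effects model: estimate between-study variance (tau^2),
    then compute an inverse-variance weighted mean effect."""
    # Fixed-effect weights and weighted mean
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed_mean = sum(wi * e for wi, e in zip(w, effects)) / sw
    # Cochran's Q and the DerSimonian-Laird tau^2 estimate
    q = sum(wi * (e - fixed_mean) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights incorporate the between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    mean_re = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se_re = math.sqrt(1.0 / sum(w_re))
    return mean_re, se_re, tau2

# Hypothetical per-study effects (e.g., Hedges' g for workload,
# automated vs. manual) and their sampling variances
g = [0.10, -0.05, 0.20, 0.02]
v = [0.04, 0.05, 0.03, 0.06]
mean, se, tau2 = random_effects_pool(g, v)
ci = (mean - 1.96 * se, mean + 1.96 * se)  # a null effect is retained if the 95% CI spans 0
```

A pooled CI that spans zero, as with these made-up numbers, is the kind of result behind the "no significant differences in mental workload" finding.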
Human Factors · Pub Date: 2025-03-01 · Epub Date: 2024-08-23 · DOI: 10.1177/00187208241272070
Yee Mun Lee, Vladislav Sidorov, Ruth Madigan, Jorge Garcia de Pedro, Gustav Markkula, Natasha Merat
Title: Hello, Is It Me You're Stopping For? The Effect of External Human-Machine Interface Familiarity on Pedestrians' Crossing Behaviour in an Ambiguous Situation.

Objective: We investigated how different deceleration intentions (i.e., an automated vehicle either decelerated for leading traffic or yielded for pedestrians) and a novel (Slow Pulsing Light Band, SPLB) or familiar (Flashing Headlights, FH) external human-machine interface (eHMI) informed pedestrians' crossing behaviour.

Background: The introduction of SAE Level 4 Automated Vehicles (AVs) has recently fuelled interest in new forms of explicit communication via eHMIs to improve the interaction between AVs and surrounding road users. Before implementing these eHMIs, it is necessary to understand how pedestrians use them to inform their crossing decisions.

Method: Thirty participants took part in the study using a head-mounted display. The independent variables were deceleration intention and eHMI design. The percentage of crossings, collision frequency, and crossing initiation time across trials were measured.

Results: Pedestrians were able to identify the intentions of a decelerating vehicle using implicit cues, with more crossings made when the approaching vehicles were yielding to them. They were also more likely to cross when a familiar eHMI was presented, compared to a novel one or no eHMI, regardless of the vehicle's intention. Finally, participants learned to take a more cautious approach as trials progressed, and not to base their decisions solely on the eHMI.

Conclusion: A familiar eHMI led to early crossings regardless of the vehicle's intention, but also led to a higher collision frequency than a novel eHMI.

Application: To achieve safe and acceptable interactions with AVs, it is important to provide eHMIs that are congruent with road users' expectations.

Pages: 264-279. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11734357/pdf/
Human Factors · Pub Date: 2025-03-01 · Epub Date: 2024-08-23 · DOI: 10.1177/00187208241274040
Joel M Cooper, David L Strayer
Title: Multitasking Induced Contextual Blindness.

Objective: To examine the impact of secondary task performance on contextual blindness arising from the suppression and masking of temporal and spatial sequence learning.

Background: Dual-task scenarios can lead to a diminished ability to use environmental cues to guide attention, a phenomenon related to multitasking-induced inattentional blindness. This research aims to extend the theoretical understanding of how secondary tasks can impair attention and memory processes in sequence learning and access.

Method: We conducted three experiments. In Experiment 1, we used a serial reaction time task to investigate the impact of a secondary tone-counting task on temporal sequence learning. In Experiment 2, we used a contextual cueing task to examine the effects of dual-task performance on spatial cueing. In Experiment 3, we integrated and extended these concepts in a simulated driving task.

Results: Across the experiments, performance of a secondary task consistently suppressed task learning (all experiments) and masked it (Experiments 1 and 3). In the serial response and spatial search tasks, dual-task conditions reduced the accrual of sequence knowledge and impaired knowledge expression. In the driving simulation, similar patterns of learning suppression from multitasking were observed.

Conclusion: The findings suggest that secondary tasks can significantly suppress and mask sequence learning in complex tasks, leading to a form of contextual blindness characterized by impairments in the ability to use environmental cues to guide attention and anticipate future events.

Application: These findings have implications for both skill acquisition and skilled performance in complex domains such as driving, aviation, manufacturing, and human-computer interaction.

Pages: 225-245.
Human Factors · Pub Date: 2025-03-01 · Epub Date: 2024-08-19 · DOI: 10.1177/00187208241272060
Jad A Atweh, Sara L Riggs
Title: Gaze Sharing, a Double-Edged Sword: Examining the Effect of Real-Time Gaze Sharing Visualizations on Team Performance and Situation Awareness.

Objective: To assess how different real-time gaze sharing visualization techniques affect eye tracking metrics, workload, team situation awareness (TSA), and team performance.

Background: Gaze sharing is a real-time visualization technique that lets team members see where their partners are looking on a shared display. Gaze sharing visualization techniques are a promising means to improve collaborative performance on simple tasks; however, gaze sharing needs validation with more complex and dynamic tasks.

Method: This study evaluated the effect of gaze sharing on eye tracking metrics, workload, team SA, and team performance in a simulated unmanned aerial vehicle (UAV) command-and-control task. Thirty-five teams of two performed UAV tasks under three conditions: one with no gaze sharing and two with gaze sharing. Gaze sharing was presented using either a fixation dot (i.e., a translucent colored dot) or a fixation trail (i.e., a trail of the most recent fixations).

Results: The fixation trail significantly reduced saccadic activity, lowered workload, supported team SA at all levels, and improved performance compared to no gaze sharing; the fixation dot had the opposite effect on performance and SA. In fact, no gaze sharing outperformed the fixation dot. Participants also preferred the fixation trail for its visibility and its ability to track and monitor the history of their partner's gaze.

Conclusion: Gaze sharing has the potential to support collaboration, but its effectiveness depends highly on the design and context of use.

Application: The findings suggest that gaze sharing visualization techniques like the fixation trail have the potential to improve teamwork in complex UAV tasks and could have broader applicability in a variety of collaborative settings.

Pages: 196-224.
Human Factors · Pub Date: 2025-03-01 · Epub Date: 2024-08-08 · DOI: 10.1177/00187208241272066
Anna Konstant, Nitzan Orr, Michael Hagenow, Isabelle Gundrum, Yu Hen Hu, Bilge Mutlu, Michael Zinn, Michael Gleicher, Robert G Radwin
Title: Human-Robot Collaboration With a Corrective Shared Controlled Robot in a Sanding Task.

Objective: Physical and cognitive workload and performance were studied for a corrective shared control (CSC) human-robot collaborative (HRC) sanding task.

Background: Manual sanding is physically demanding. Collaborative robots (cobots) can potentially reduce physical stress, but fully autonomous implementation has been particularly challenging due to skill requirements, task variability, and robot limitations. CSC is an HRC method in which the robot operates semi-autonomously while the human provides real-time corrections.

Method: Twenty laboratory participants removed paint using an orbital sander, both manually and with a CSC robot. A fully automated robot was also tested.

Results: The CSC robot reduced subjective discomfort compared to manual sanding in the upper arm by 29.5%, lower arm by 32%, hand by 36.5%, front of the shoulder by 24%, and back of the shoulder by 17.5%. Muscle fatigue, measured using EMG, was observed in the medial deltoid and flexor carpi radialis for the manual condition. The composite cognitive workload on the NASA-TLX was 14.3% greater for manual sanding due to high physical demand and effort, while mental demand was 14% greater for the CSC robot. Digital imaging showed that the CSC robot outperformed the automated condition by 7.16% for uniformity, 4.96% for quantity, and 6.06% in total.

Conclusion: In this example, human skills and techniques were integral to sanding and can be successfully incorporated into HRC systems. Humans performed the task using the CSC robot with less fatigue and discomfort.

Application: The results can inform the implementation of future HRC systems in manufacturing environments.

Pages: 246-263.
Human Factors · Pub Date: 2025-02-27 · DOI: 10.1177/00187208251323101
Natalie Griffiths, Vanessa K Bowden, Serena Wee, Shayne Loft
Title: Predicting Return-to-Manual Performance in Lower- and Higher-Degree Automation.

Objective: To examine operator state variables (workload, fatigue, trust in automation, task engagement) that potentially predict return-to-manual (RTM) performance after automation fails to complete a task action.

Background: Limited research has examined the extent to which within-person variability in operator states predicts RTM performance, a prerequisite to adapting work systems based on expected performance degradation or operator strain. We examine whether operator states differentially predict RTM performance as a function of degree of automation (DOA).

Method: Participants completed a simulated air traffic control task. Conflict detection was assisted by either higher- or lower-DOA automation. When automation failed to resolve a conflict, participants needed to prevent that conflict (i.e., return to manual control). Participants' self-reported workload, fatigue, trust in automation, and task engagement were measured periodically.

Results: Participants using lower DOA were faster to resolve conflicts missed by automation (RTM response time) than those using higher DOA. DOA did not moderate the relationship between operator states and RTM performance. Collapsed across DOA, increased workload (relative to a participant's own average) and increased fatigue (relative to the sample average or to the participant's own average) led to the resolution of fewer conflicts missed by automation (poorer RTM accuracy). Participants with higher trust (relative to their own average) had higher RTM accuracy.

Conclusion: Variation in operator state measures of workload, fatigue, and trust can predict RTM performance. However, given some inconsistency across studies in which states are predictive, further research is needed.

Application: Adaptive work systems could be designed to respond to vulnerable operator states to minimise RTM performance decrements.
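The distinction this abstract draws between "relative to own average" and "relative to sample average" is typically implemented by person-mean centering of repeated measures. A minimal sketch, with entirely made-up operator-state scores (the names and numbers are hypothetical, not the study's data):

```python
def center_states(scores_by_person):
    """Split repeated operator-state measurements into a between-person
    component (person mean minus grand mean) and a within-person
    component (each score minus that person's own mean)."""
    all_scores = [s for scores in scores_by_person.values() for s in scores]
    grand_mean = sum(all_scores) / len(all_scores)
    centered = {}
    for person, scores in scores_by_person.items():
        person_mean = sum(scores) / len(scores)
        centered[person] = {
            "between": person_mean - grand_mean,         # relative to sample average
            "within": [s - person_mean for s in scores],  # relative to own average
        }
    return centered

# Hypothetical workload ratings across repeated measurement blocks
workload = {"p1": [3.0, 4.0, 5.0], "p2": [6.0, 7.0, 8.0]}
centered = center_states(workload)
```

Entering both components as predictors is what allows a model to say, for example, that fatigue predicts RTM accuracy relative to the sample average and relative to a person's own average.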
Human Factors · Pub Date: 2025-02-24 · DOI: 10.1177/00187208251320907
Jeehan Malik, Elizabeth O'Neal, Megan Noonan, Iman Noferesti, Nam-Yoon Kim, William Pixley, Jodie M Plumert, Joseph K Kearney
Title: Do Augmented Reality Cues Aid Pedestrians in Crossing Multiple Lanes of Traffic? A Virtual Reality Study.

Objective: This study evaluated whether pedestrians can use augmented reality (AR) overlays to guide their road-crossing decisions when crossing two lanes of opposing traffic.

Background: Emerging technologies for enhancing traffic safety often focus on alerting drivers to hazards. Less attention has been given to understanding how pedestrians respond to technology designed to aid their road-crossing decisions, particularly in more complex traffic.

Method: Participants repeatedly crossed two lanes of opposing traffic displayed in a virtual reality system. Participants in the AR condition viewed matching-colored bars (AR overlays) suspended just above the gaps between cars when there was sufficient time to safely cross a pair of near- and far-lane gaps. Participants in the control condition performed the same road-crossing task but saw no AR overlays.

Results: Participants who viewed AR cues were more likely than those who did not to accept gap pairs classified as crossable, and less likely to accept gap pairs classified as uncrossable. However, there was no difference between the AR and control conditions in time to spare when exiting the roadway. NASA Task Load Index responses indicated that perceived performance was higher and perceived frustration was lower in the AR condition than in the control condition, but perceived workload was higher in the AR condition.

Conclusion: The AR cues helped participants identify crossable gap pairs but did not lead to greater time to spare when exiting the roadway.

Application: These results show both the promise and the risks of assistive technologies designed to increase pedestrian safety in more complex traffic situations.
Human Factors · Pub Date: 2025-02-19 · DOI: 10.1177/00187208251320179
Xin Xin, Xinyuan Chen, Wei Liu
Title: Effects of Auditory Anticipatory Cues and Lead Time on Visually Induced Motion Sickness.

Objective: To investigate the ability of auditory anticipatory cues and different lead times to mitigate visually induced motion sickness (VIMS).

Background: Vehicle information systems predominantly use visual displays, which can introduce conflicts between visual and vestibular motion cues, potentially resulting in VIMS. In these scenarios, auditory cues may provide a viable alternative, especially when visual attention is diminished by fatigue or distraction.

Method: Across two studies, a total of 180 participants took part in investigating the impact of auditory cues on VIMS. In Study 1, participants were categorized by the type of auditory cue they received (speech, nonspeech, or no cue). Study 2 examined the effects of three lead times (1 s, 2 s, and 3 s) between the onset of the auditory cue and the occurrence of car braking or turning in nonspeech conditions. VIMS severity was assessed with the Simulator Sickness Questionnaire (SSQ) before and after the simulation phase.

Results: Nonspeech cues significantly reduced VIMS compared to speech cues or no cue. VIMS was notably lower with a 2 s lead time than with 1 s or 3 s lead times, and females reported higher levels of VIMS than males.

Conclusion: The results across the two studies suggest using nonspeech cues with a 2 s lead time to reduce VIMS. Future work should investigate the effects of cue duration, tone, and voice frequency, as well as lead time settings for scenarios such as driving fatigue, hillside roads, and traffic congestion.

Application: These findings offer potential value in designing auditory cues to reduce VIMS in autonomous driving, simulators, VR games, and films.
Human Factors · Pub Date: 2025-02-12 · DOI: 10.1177/00187208251320589
Tobias Rieger, Benita Marx, Dietrich Manzey
Title: Likelihood Systems Can Improve Hit Rates in Low-Prevalence Visual Search Over Binary Systems.

Objective: To study the performance consequences of binary versus likelihood decision support systems in low-prevalence visual search.

Background: Hit rates in visual search are often low when target prevalence is low, an issue relevant to numerous real-world visual search tasks (e.g., luggage screening and medical imaging). Because binary decision support systems produce many false alarms at low prevalence, they have often been discounted as a solution to this low-prevalence problem. By offering additional information about the certainty of target-present indications, splitting them into warnings and alarms, likelihood-based systems could potentially boost hit rates without raising the number of false alarms.

Method: We used a simulated medical search task with low target prevalence in a paradigm where participants sequentially uncovered parts of the stimulus with their mouse. In two sessions, participants completed the task supported by either a binary or a likelihood system.

Results: Hit rates were higher when interacting with the likelihood system than with the binary system, at no cost in additional false alarms.

Conclusion: Likelihood systems are a promising way to tackle the low-prevalence problem, and may also be an effective means of making systems more transparent.

Application: Simple-to-process information about system certainty for each case might be a solution to low hit rates in domains with low target prevalence, such as radiology.
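The contrast between a binary and a likelihood support system can be sketched as one versus two thresholds on the support system's internal evidence. The threshold values and function names below are arbitrary assumptions for illustration, not taken from the study:

```python
def binary_alert(evidence):
    """Binary system: a single cutoff on internal evidence yields a
    target-present indication or nothing."""
    return "alarm" if evidence >= 0.5 else "no alert"

def likelihood_alert(evidence):
    """Likelihood system: the same target-present region is split by a
    second, higher cutoff into a lower-certainty warning and a
    higher-certainty alarm."""
    if evidence >= 0.8:
        return "alarm"
    if evidence >= 0.5:
        return "warning"
    return "no alert"

# The two systems flag the same cases overall; the likelihood system
# only adds graded certainty information on top.
for e in (0.3, 0.6, 0.9):
    print(binary_alert(e), likelihood_alert(e))
```

Because both systems indicate "target present" for exactly the same evidence values, the graded version can raise operator hit rates without changing how often the system itself cries wolf.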
Human Factors · Pub Date: 2025-02-03 · DOI: 10.1177/00187208251317470
Yossef Saad, Joachim Meyer
Title: Context-Based Human Influence and Causal Responsibility for Assisted Decision-Making.

Objective: The impact of the context in which automation is introduced into a decision-making system was analyzed theoretically and empirically.

Background: Previous work dealt with causality and responsibility in human-automation systems without considering how the presentation of the automation's role to users affects outcomes.

Method: An existing analytical model for predicting the human contribution to outcomes was adapted to accommodate the context of automation. An aided signal detection experiment with 400 participants was conducted to assess the correspondence of observed behavior to model predictions.

Results: The context in which the automation's role was presented affected users' tendency to follow its advice. When automation made decisions and users only supervised it, they tended to contribute less to the outcome than in systems where the automation served in an advisory capacity. The adapted theoretical model of human contribution was generally aligned with participants' behavior.

Conclusion: The specific way automation is integrated into a system affects its use and perceptions of user involvement, possibly altering overall system performance.

Application: The research can help in designing systems with automation-assisted decision-making and inform regulatory requirements and operational processes for such systems.
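Aided signal detection experiments of this kind are commonly summarized by sensitivity (d') and response criterion (c). The computation below is the textbook one, with a log-linear correction for extreme rates; the counts are invented for illustration and are not the study's data:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity d' and criterion c from raw counts, applying
    a log-linear correction so rates of exactly 0 or 1 do not produce
    infinite z-scores."""
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical counts for one participant in an aided detection task
d, c = sdt_measures(hits=40, misses=10, false_alarms=5, correct_rejections=45)
```

Comparing d' and c between contexts (automation as decision-maker vs. advisor) is one way such an experiment can separate changes in discrimination from changes in how willing users are to respond without the aid.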