Human Factors | Pub Date: 2024-09-01 | Epub Date: 2023-09-21 | DOI: 10.1177/00187208231198932
Eileen Herbers, Marty Miller, Luke Neurauter, Jacob Walters, Daniel Glaser
Exploratory Development of Algorithms for Determining Driver Attention Status. (pp. 2191-2204)
Objective: Varying driver distraction algorithms were developed using vehicle kinematics and driver gaze data obtained from a camera-based driver monitoring system (DMS).
Background: Distracted driving characteristics can be difficult to accurately detect due to wide variation in driver behavior across driving environments. The growing availability of information about drivers and their involvement in the driving task increases the opportunity for accurately recognizing attention state.
Method: A baseline for driver distraction levels was developed using a video feed of 24 separate drivers in varying naturalistic driving conditions. This initial assessment was used to develop four buffer-based algorithms that aimed to determine a driver's real-time attentiveness via a variety of metrics and combinations thereof.
Results: Of those tested, the optimal algorithm included ungrouped glance locations and speed. Notably, as an algorithm's performance in detecting very distracted drivers improved, its accuracy in correctly identifying attentive drivers decreased.
Conclusion: At a minimum, drivers' gaze position and vehicle speed should be included when designing driver distraction algorithms to delineate between glance patterns observed at high and low speeds. Distraction algorithms should be designed with an understanding of their limitations, including instances in which they may fail to detect distracted drivers or falsely notify attentive drivers.
Application: This research adds to the body of knowledge related to driver distraction and contributes to available methods to potentially address and reduce occurrences. Machine learning algorithms can build on the data elements discussed to increase distraction detection accuracy using robust artificial intelligence.
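As an editorial illustration of the buffer-based approach the abstract describes, the sketch below depletes an attention buffer during off-road glances at speed and refills it otherwise. The glance labels, rates, and thresholds are assumptions chosen for illustration, not the algorithms developed by Herbers et al.

```python
# Minimal sketch of a buffer-based distraction detector using glance
# location and vehicle speed. All labels and thresholds are illustrative
# assumptions, not the published algorithm.

OFF_ROAD_GLANCES = {"center_stack", "phone", "passenger", "lap"}

def update_attention_buffer(buffer, glance_location, speed_kph, dt,
                            depletion_rate=0.5, recovery_rate=0.25,
                            low_speed_kph=15.0):
    """Deplete the buffer during off-road glances at speed; refill it otherwise.

    The buffer is clamped to [0, 1]; lower values mean greater estimated distraction.
    """
    off_road = glance_location in OFF_ROAD_GLANCES
    if off_road and speed_kph > low_speed_kph:
        buffer -= depletion_rate * dt
    else:
        buffer += recovery_rate * dt
    return max(0.0, min(1.0, buffer))

def classify(buffer, distracted_below=0.3, attentive_above=0.7):
    if buffer < distracted_below:
        return "distracted"
    if buffer > attentive_above:
        return "attentive"
    return "uncertain"

# Example: two seconds of phone glances at highway speed, sampled at 10 Hz.
state = 1.0
for _ in range(20):
    state = update_attention_buffer(state, "phone", 100.0, dt=0.1)
print(state, classify(state))
```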
Human Factors | Pub Date: 2024-09-01 | Epub Date: 2024-08-06 | DOI: 10.1177/00187208241274639
Corrigendum to the Preface for the Special Section on Driver Monitoring Systems. (p. 2264)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11451305/pdf/
Human Factors | Pub Date: 2024-09-01 | DOI: 10.1177/00187208231181199
Yuni Lee, Miaomiao Dong, Vidya Krishnamoorthy, Kumar Akash, Teruhisa Misu, Zhaobo Zheng, Gaojian Huang
Driving Aggressively or Conservatively? Investigating the Effects of Automated Vehicle Interaction Type and Road Event on Drivers' Trust and Preferred Driving Style. (pp. 2166-2178)
Objective: This study aimed to investigate the impact of automated vehicle (AV) interaction mode on drivers' trust and preferred driving styles in response to pedestrian- and traffic-related road events.
Background: The rising popularity of AVs highlights the need for a deeper understanding of the factors that influence trust in AVs. Trust is a crucial element, particularly because current AVs are only partially automated and may require manual takeover; miscalibrated trust could have an adverse effect on safe driver-vehicle interaction. However, before attempting to calibrate trust, it is vital to comprehend the factors that contribute to trust in automation.
Methods: Thirty-six individuals participated in the experiment. Driving scenarios incorporated adaptive SAE Level 2 AV algorithms, driven by participants' event-based trust in AVs and preferences for AV driving styles. The study measured participants' trust, preferences, and the number of takeover behaviors.
Results: Higher levels of trust and preference for more aggressive AV driving styles were found in response to pedestrian-related events compared to traffic-related events. Furthermore, drivers preferred the trust-based adaptive mode and showed fewer takeover behaviors than with the preference-based adaptive and fixed modes. Lastly, participants with higher trust in AVs favored more aggressive driving styles and made fewer takeover attempts.
Conclusion: Adaptive AV interaction modes that depend on real-time event-based trust and event types may represent a promising approach to human-automation interaction in vehicles.
Application: Findings from this study can support future driver- and situation-aware AVs that can adapt their behavior for improved driver-vehicle interaction.
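The trust-based adaptive interaction mode described above can be pictured with a minimal, hypothetical selector that nudges a running trust estimate after each road event and picks a driving style from it. The update rule, event weights, and threshold below are illustrative assumptions, not the study's adaptive algorithm.

```python
# Hypothetical trust-adaptive driving-style selector: update a trust estimate
# from event outcomes, then choose an AV driving style. All parameters are
# illustrative assumptions, not the study's implementation.

def update_trust(trust, event_type, driver_took_over, rate=0.1):
    """Nudge trust up after events handled without takeover, down otherwise."""
    delta = -1.0 if driver_took_over else 1.0
    weight = 1.0 if event_type == "pedestrian" else 0.5   # assumed event weighting
    return min(1.0, max(0.0, trust + rate * weight * delta))

def choose_style(trust, aggressive_above=0.6):
    return "aggressive" if trust >= aggressive_above else "conservative"

trust = 0.5
for event, takeover in [("pedestrian", False), ("traffic", True), ("pedestrian", False)]:
    trust = update_trust(trust, event, takeover)
    print(event, round(trust, 2), choose_style(trust))
```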
Human Factors | Pub Date: 2024-09-01 | Epub Date: 2023-08-20 | DOI: 10.1177/00187208231194543
Megan Mulhall, Kyle Wilson, Shiyan Yang, Jonny Kuo, Tracey Sletten, Clare Anderson, Mark E Howard, Shantha Rajaratnam, Michelle Magee, Allison Collins, Michael G Lenné
European NCAP Driver State Monitoring Protocols: Prevalence of Distraction in Naturalistic Driving. (pp. 2205-2217)
Objective: To examine the prevalence of driver distraction in naturalistic driving when implementing European New Car Assessment Program (Euro NCAP)-defined distraction behaviours.
Background: The 2023 introduction of Occupant Status Monitoring (OSM) into Euro NCAP will accelerate the uptake of Driver State Monitoring (DSM). Euro NCAP outlines distraction behaviours that DSM must detect to earn maximum safety points. The prevalence of these distraction behaviours, and the resulting frequency of driver alerting and intervention, have yet to be examined in naturalistic driving.
Method: Twenty healthcare workers were provided with an instrumented vehicle for approximately two weeks. Data were continuously monitored with automotive-grade DSM during daily work commutes, resulting in 168.8 hours of driver head, eye, and gaze tracking.
Results: Single long distraction events were the most prevalent, at 0.89 events/hour. Implementing different thresholds for driving-related and driving-unrelated glance regions impacts alerting rates. Lizard glances (primarily gaze movement) occurred more frequently than owl glances (primarily head movement). Visual time-sharing events occurred at a rate of 0.21 events/hour.
Conclusion: Euro NCAP-described driver distraction occurs naturalistically. Lizard glances, which require gaze tracking, occurred at high frequency relative to owl glances, which only require head tracking, indicating that less sophisticated DSM will miss a substantial number of distraction events.
Application: This work informs OEMs, DSM manufacturers, and regulators of the expected alerting rate of Euro NCAP-defined distraction behaviours. Alerting rates will vary with protocol implementation, technology capability, and the HMI strategies adopted by OEMs, in turn impacting safety outcomes, user experience, and acceptance of DSM technology.
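A "single long distraction" event of the kind counted here is essentially a continuous off-road glance that exceeds a duration threshold. The sketch below shows one way such events might be counted from gaze samples; the 3 s threshold and the sample format are assumptions for illustration, not the exact Euro NCAP protocol implementation used in the study.

```python
# Illustrative counter for "single long distraction" events: continuous
# off-road gaze exceeding a duration threshold. The threshold and the
# (timestamp, on_road) sample format are assumptions for this sketch.

def count_long_glance_events(samples, min_duration_s=3.0):
    """samples: time-ordered iterable of (timestamp_s, gaze_on_road: bool)."""
    events = 0
    off_road_start = None
    for t, on_road in samples:
        if not on_road and off_road_start is None:
            off_road_start = t                      # off-road glance begins
        elif on_road and off_road_start is not None:
            if t - off_road_start >= min_duration_s:
                events += 1                         # glance was long enough
            off_road_start = None                   # glance ends
    return events

# 10 Hz example: 4 s on road, 3.5 s off road, 2 s back on road.
trace = [(i * 0.1, i < 40 or i >= 75) for i in range(95)]
print(count_long_glance_events(trace))  # -> 1
```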
Human Factors | Pub Date: 2024-09-01 | Epub Date: 2023-11-20 | DOI: 10.1177/00187208231208523
Suzan Ayas, Birsen Donmez, Xing Tang
Drowsiness Mitigation Through Driver State Monitoring Systems: A Scoping Review. (pp. 2218-2243)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11344370/pdf/
Objective: To explore the scope of available research and to identify research gaps on in-vehicle interventions for drowsiness that utilize driver monitoring systems (DMS).
Background: DMS are gaining popularity as a countermeasure against drowsiness. However, how these systems can best be utilized to guide driver attention is unclear.
Methods: A scoping review was conducted in adherence to PRISMA guidelines. Five electronic databases (ACM Digital Library, Scopus, IEEE Xplore, TRID, and SAE Mobilus) were systematically searched in April 2022. Original studies examining in-vehicle drowsiness interventions that use DMS in a driving context (e.g., driving simulator studies and driver interviews) passed the screening. Data on study details, state detection methods, and interventions were extracted.
Results: Twenty studies qualified for inclusion. The majority of interventions involved warnings (n = 16) with an auditory component (n = 14). Feedback displays (n = 4) and automation takeover (n = 4) were also investigated. Multistage interventions (n = 12) first cautioned the driver, then urged them to take an action or initiated an automation takeover. Overall, interventions had a positive impact on sleepiness levels, driving performance, and user evaluations. Whether interventions effective for one type of sleepiness (e.g., passive vs. active fatigue) will perform well for another type is unclear.
Conclusion: The literature has mainly focused on developing sensors and improving the accuracy of DMS, not on driver interactions with these technologies. More intervention studies are needed in general, and for investigating long-term effects in particular.
Application: We list gaps and limitations in the DMS literature to guide researchers and practitioners in designing and evaluating effective safety systems for drowsy driving.
Human Factors | Pub Date: 2024-08-01 | Epub Date: 2023-09-26 | DOI: 10.1177/00187208231201054
Amy S McDonnell, Kaedyn W Crabtree, Joel M Cooper, David L Strayer
This Is Your Brain on Autopilot 2.0: The Influence of Practice on Driver Workload and Engagement During On-Road, Partially Automated Driving. (pp. 2025-2040)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11141086/pdf/
Objective: This on-road study employed behavioral and neurophysiological measurement techniques to assess the influence of six weeks of practice driving a Level 2 partially automated vehicle on driver workload and engagement.
Background: Level 2 partial automation requires a driver to maintain supervisory control of the vehicle to detect "edge cases" that the automation is not equipped to handle. There is mixed evidence regarding whether drivers can do so effectively. There is also an open question regarding how practice and familiarity with automation influence driver cognitive states over time.
Method: Behavioral and neurophysiological measures of driver workload and visual engagement were recorded from 30 participants at two testing sessions, with a six-week familiarization period in between. At both testing sessions, participants drove a vehicle with partial automation engaged (Level 2) and not engaged (Level 0) on two interstate highways while reaction times to the detection response task (DRT) and neurophysiological (EEG) metrics of frontal theta and parietal alpha were recorded.
Results: DRT results demonstrated that partially automated driving placed more cognitive load on drivers than manual driving and that six weeks of practice decreased driver workload, though only when the driving environment was relatively simple. EEG metrics of frontal theta and parietal alpha showed null effects of partial automation.
Conclusion: Driver workload was influenced by level of automation, specific highway characteristics, and practice over time, but only on a behavioral level and not on a neural level.
Application: These findings expand our understanding of the influence of practice on driver cognitive states under Level 2 partial automation.
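The frontal theta and parietal alpha metrics mentioned in the abstract are band-limited EEG power estimates. The sketch below shows one common way to compute such band power with a Welch periodogram; the channel names, band edges, sampling rate, and synthetic data are assumptions about a typical pipeline, not the authors' processing chain.

```python
# Illustrative estimate of EEG band power (e.g., frontal theta, parietal alpha)
# from a single channel using a Welch periodogram. Band edges, sampling rate,
# and the synthetic signals are assumptions about a typical pipeline.
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, band):
    """Integrated power of `signal` within `band` (low_hz, high_hz)."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))
    low, high = band
    mask = (freqs >= low) & (freqs <= high)
    return np.sum(psd[mask]) * (freqs[1] - freqs[0])   # approximate integral

fs = 256                                    # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)
fz = 2e-6 * np.sin(2 * np.pi * 6 * t) + 1e-6 * np.random.randn(t.size)   # "Fz"-like channel
pz = 3e-6 * np.sin(2 * np.pi * 10 * t) + 1e-6 * np.random.randn(t.size)  # "Pz"-like channel

frontal_theta = band_power(fz, fs, (4, 8))     # theta band, 4-8 Hz
parietal_alpha = band_power(pz, fs, (8, 12))   # alpha band, 8-12 Hz
print(frontal_theta, parietal_alpha)
```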
Human Factors | Pub Date: 2024-08-01 | Epub Date: 2023-08-26 | DOI: 10.1177/00187208231197347
Tobias Rieger, Luisa Kugler, Dietrich Manzey, Eileen Roesler
The (Im)perfect Automation Schema: Who Is Trusted More, Automated or Human Decision Support? (pp. 1995-2007)
Objective: This study's purpose was to better understand the dynamics of trust attitude and behavior in human-agent interaction.
Background: Whereas past research provided evidence for a perfect automation schema, more recent research has provided contradictory evidence.
Method: To disentangle these conflicting findings, we conducted an online experiment using a simulated medical X-ray task. We manipulated the framing of support agents (i.e., artificial intelligence (AI) versus expert versus novice) between subjects and failure experience (i.e., perfect support, imperfect support, back-to-perfect support) within subjects. Trust attitude and behavior as well as perceived reliability served as dependent variables.
Results: Trust attitude and perceived reliability were higher for the human expert than for the AI, and higher for the AI than for the human novice. Moreover, the results showed the typical pattern of trust formation, dissolution, and restoration for trust attitude and behavior as well as perceived reliability. Forgiveness after failure experience did not differ between agents.
Conclusion: The results strongly imply the existence of an imperfect automation schema. This illustrates the need to consider agent expertise for human-agent interaction.
Application: When replacing human experts with AI as support agents, the challenge of lower trust attitude toward the novel agent might arise.
Human Factors | Pub Date: 2024-08-01 | Epub Date: 2023-09-12 | DOI: 10.1177/00187208231200721
Daniel Sousa Schulman, Nishant Jalgaonkar, Sneha Ojha, Ana Rivero Valles, Monica L H Jones, Shorya Awtar
A Visual-Vestibular Model to Predict Motion Sickness for Linear and Angular Motion. (pp. 2120-2137)
Objective: This study proposed a model to predict passenger motion sickness under the presence of a visual-vestibular conflict and assessed its performance with respect to previously recorded experimental data.
Background: While several models have been shown to be useful for predicting motion sickness under repetitive motion, improvements are still desired in terms of predicting motion sickness in realistic driving conditions. There remains a need for a model that considers angular and linear visual-vestibular motion inputs in three dimensions to improve prediction of passenger motion sickness.
Method: The model combined the subjective vertical conflict theory with human motion perception models. The proposed model integrates visually and vestibularly sensed 6-DoF motion signals in a novel architecture.
Results: Model predictions were compared to motion sickness data obtained from studies conducted in motion simulators as well as from on-road vehicle testing, yielding trends that are congruent with observed results in both cases.
Conclusion: The model demonstrated the ability to predict trends in motion sickness response for conditions in which a passenger performs a task on a handheld device versus facing forward and looking ahead under realistic driving conditions. However, further analysis across a larger population is necessary to better assess the model's performance.
Application: The proposed model can be used as a tool to predict motion sickness under different levels of visual-vestibular conflict. This can be leveraged to design interventions capable of mitigating passenger motion sickness. Further, this model can provide insights that aid in the development of passenger experiences inside autonomous vehicles.
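The subjective vertical conflict idea at the core of this class of models can be caricatured as a discrepancy between vestibular and visual estimates of the vertical that is slowly accumulated into a sickness index. The sketch below is a drastically simplified, hypothetical version of that idea; the gains, time constant, and synthetic signals are assumptions, not the published 6-DoF model.

```python
# Drastically simplified sketch of the subjective-vertical-conflict idea:
# the running discrepancy between a vestibular "sensed vertical" and a visual
# "expected vertical" is leakily accumulated into a sickness index.
# All parameters and signals are illustrative assumptions.
import numpy as np

def sickness_index(vestibular_vertical, visual_vertical, dt,
                   tau_accumulate=600.0, gain=1.0):
    """vestibular_vertical, visual_vertical: (N, 3) unit-vector time series."""
    conflict = np.linalg.norm(vestibular_vertical - visual_vertical, axis=1)
    msi = np.zeros(conflict.size)
    for k in range(1, conflict.size):
        # leaky integrator: slow accumulation, very slow decay
        msi[k] = msi[k - 1] + dt * (gain * conflict[k] - msi[k - 1] / tau_accumulate)
    return msi

# Example: a passenger looks at a handheld device, so the visual vertical stays
# fixed while the vestibular vertical sways with vehicle roll (synthetic 0.2 Hz).
dt, t = 0.1, np.arange(0, 300, 0.1)
tilt = 0.15 * np.sin(2 * np.pi * 0.2 * t)            # vehicle roll angle (rad)
vest = np.stack([np.sin(tilt), np.zeros_like(t), np.cos(tilt)], axis=1)
vis = np.stack([np.zeros_like(t), np.zeros_like(t), np.ones_like(t)], axis=1)
print(sickness_index(vest, vis, dt)[-1])
```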
Human Factors | Pub Date: 2024-08-01 | Epub Date: 2023-10-04 | DOI: 10.1177/00187208231204704
Satyajit Upasani, Divya Srinivasan, Qi Zhu, Jing Du, Alexander Leonessa
Eye-Tracking in Physical Human-Robot Interaction: Mental Workload and Performance Prediction. (pp. 2104-2119)
Background: In physical human-robot interaction (pHRI), the need to learn the robot's motor-control dynamics is associated with increased cognitive load. Eye-tracking metrics can help characterize the dynamics of fluctuating mental workload over the course of learning.
Objective: The aim of this study was to test eye-tracking measures' sensitivity and reliability with respect to variations in task difficulty, as well as their performance-prediction capability, in physical human-robot collaboration tasks involving an industrial robot for object comanipulation.
Methods: Participants (9M, 9F) learned to coperform a virtual pick-and-place task with a bimanual robot over multiple trials. Joint stiffness of the robot was manipulated to increase motor-coordination demands. The psychometric properties of eye-tracking measures and their ability to predict performance were investigated.
Results: Stationary Gaze Entropy and pupil diameter were the most reliable and sensitive measures of workload associated with changes in task difficulty and learning. Increased task difficulty was more likely to result in a robot-monitoring strategy. Eye-tracking measures were able to predict the occurrence of success or failure in each trial with 70% sensitivity and 71% accuracy.
Conclusion: The sensitivity and reliability of eye-tracking measures were acceptable, although values were lower than those observed in cognitive domains. Measures of gaze behaviors indicative of visual monitoring strategies were most sensitive to task difficulty manipulations and should be explored further for the pHRI domain, where motor control and internal-model formation will likely be strong contributors to workload.
Application: Future collaborative robots could adapt to human cognitive state and skill level measured using eye-tracking measures of workload and visual attention.
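Stationary Gaze Entropy, one of the measures highlighted in the results, is commonly computed as the Shannon entropy of fixation positions over a spatial grid. The sketch below follows that common definition; the grid resolution and normalized fixation format are assumptions, not the study's exact parameterization.

```python
# Illustrative computation of Stationary Gaze Entropy (SGE): Shannon entropy
# of fixation positions over a spatial grid. Grid resolution and normalized
# fixation coordinates are assumptions, not the study's parameters.
import numpy as np

def stationary_gaze_entropy(fixations_xy, grid=(8, 8)):
    """fixations_xy: (N, 2) array of fixation positions normalized to [0, 1]."""
    counts, _, _ = np.histogram2d(fixations_xy[:, 0], fixations_xy[:, 1],
                                  bins=grid, range=[[0, 1], [0, 1]])
    p = counts.ravel() / counts.sum()
    p = p[p > 0]                               # ignore empty bins
    return float(-np.sum(p * np.log2(p)))      # bits; max = log2(number of cells)

rng = np.random.default_rng(0)
focused = np.clip(rng.normal(0.5, 0.05, size=(200, 2)), 0, 1)   # narrow scanning
dispersed = rng.uniform(0, 1, size=(200, 2))                    # broad scanning
print(stationary_gaze_entropy(focused), stationary_gaze_entropy(dispersed))
```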
Human Factors | Pub Date: 2024-08-01 | Epub Date: 2023-11-09 | DOI: 10.1177/00187208231206324
Peter Le, Emily H L Mills, Charles A Weisenbach, Kermit G Davis
Neck Muscle Coactivation Response to Varied Levels of Mental Workload During Simulated Flight Tasks. (pp. 2041-2056)
Objective: To evaluate neck muscle coactivation across different levels of mental workload during simulated flight tasks.
Background: Neck pain (NP) is highly prevalent among military aviators. Given the complex nature of the flight environment, mental workload may be a risk factor for NP. It may induce higher levels of neck muscle coactivity, which over time may accelerate fatigue, increase neck discomfort, and affect flight task performance.
Method: Three counterbalanced mental workload conditions, represented by simulated flight tasks modulated by interstimulus frequency and complexity, were investigated using the Modifiable Multitasking Environment (ModME). The primary measure was a neck coactivation index describing the neuromuscular effort of the neck muscles as a system. Additional measures included perceived workload (NASA TLX), subjective discomfort, and task performance. Participants (n = 60; 30M, 30F) performed three test conditions over 1 hr each while seated in a simulated seating environment.
Results: Neck coactivation indices (CoA) and subjective neck discomfort increased with the level of mental workload. Average CoAs for low, medium, and high workloads were 0.0278 (SD = 0.0232), 0.0286 (SD = 0.0231), and 0.0295 (SD = 0.0228), respectively. NASA TLX mental, temporal, effort, and overall scores also increased with the level of mental workload assigned. For ModME task performance, the overall performance score, monitoring accuracy, and resource management accuracy decreased, while reaction times increased, with increasing mental workload. Communication accuracy was lowest under the low mental workload condition, but communication reaction times were higher relative to the increasing workload levels.
Conclusion: Mental workload affects neck muscle coactivation during combinations of simulated flight tasks within a simulated helicopter seating environment.
Application: The results of this study provide insights into the physical response to mental workload. With increasing multisensory modalities within the work environment, these insights may assist in considering the physical effects of cognitive factors.
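A coactivation index for an antagonist muscle pair is often computed from normalized EMG envelopes; the sketch below uses one commonly cited formulation for illustration and is not the multi-muscle neck index used in this study, whose exact definition is not given in the abstract.

```python
# Illustrative co-contraction index for a muscle pair from normalized EMG
# envelopes (a commonly used formulation; the paper's specific multi-muscle
# neck index is not reproduced here).
import numpy as np

def cocontraction_index(emg_a, emg_b, eps=1e-9):
    """emg_a, emg_b: normalized EMG envelopes (0-1), same length."""
    emg_a, emg_b = np.asarray(emg_a), np.asarray(emg_b)
    lower = np.minimum(emg_a, emg_b)
    higher = np.maximum(emg_a, emg_b) + eps
    # ratio of antagonist activity scaled by total activity, averaged over time
    return float(np.mean((lower / higher) * (emg_a + emg_b)))

# Synthetic example: low-level flexor/extensor envelopes during a seated task.
rng = np.random.default_rng(1)
flexor = 0.03 + 0.01 * rng.random(6000)
extensor = 0.02 + 0.01 * rng.random(6000)
print(cocontraction_index(flexor, extensor))
```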