Human Factors. Pub Date: 2025-07-09. DOI: 10.1177/00187208251355828
Christopher Draheim, Nathan Herdener, Ericka Rovira, S R Melick, Richard Pak, Joseph T Coyne, Ciara Sibley
{"title":"Investigating Transfer of Input Device Practice on Psychomotor Performance in an Aviation Selection Test.","authors":"Christopher Draheim, Nathan Herdener, Ericka Rovira, S R Melick, Richard Pak, Joseph T Coyne, Ciara Sibley","doi":"10.1177/00187208251355828","DOIUrl":"https://doi.org/10.1177/00187208251355828","url":null,"abstract":"<p><p>ObjectiveWe explored transfer of learning from brief practice with different input devices in the Navy's Performance Based Measures Battery (PBM), a psychomotor subset of the Aviation Selection Test Battery (ASTB).BackgroundThe PBM is a set of computerized tests used as a part of the ASTB to select aviators in the U.S. military. Official practice is not available, leading candidates to practice with unofficial re-creations and with or without access to the stick and throttle used on the PBM.MethodOur between-subjects study with 152 cadets from the U.S. Military Academy evaluated the impact of mouse/keyboard or stick/throttle practice on the psychomotor portions of the PBM compared to a control group that was only presented with an informational video.ResultsThe results showed that practice with either input device resulted in improved performance relative to control on the PBM's two-dimensional airplane tracking task (ATT). For the simpler vertical tracking task (VTT), the mouse/keyboard group showed significantly worse performance than either stick/throttle practice or control groups, indicating a transfer cost from practicing with an alternative input device.ConclusionThe results suggest that becoming familiar with the unique dynamics of the ATT may be more important than practicing with the appropriate input device. Conversely, device-specific motor learning appears to be a more impactful determinant of performance for the simpler VTT. This indicates that transfer effects from alternative input devices depend in part on properties of the task.ApplicationThis research can inform practice policies for psychomotor test selection.</p>","PeriodicalId":56333,"journal":{"name":"Human Factors","volume":" ","pages":"187208251355828"},"PeriodicalIF":2.9,"publicationDate":"2025-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144593012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human Factors. Pub Date: 2025-07-01. Epub Date: 2024-12-03. DOI: 10.1177/00187208241305591
Hélio Silva, Pedro G F Ramos, Sabrina C Teno, Pedro B Júdice
{"title":"The Impact of Sit-Stand Desks on Full-Day and Work-Based Sedentary Behavior of Office Workers: A Systematic Review.","authors":"Hélio Silva, Pedro G F Ramos, Sabrina C Teno, Pedro B Júdice","doi":"10.1177/00187208241305591","DOIUrl":"10.1177/00187208241305591","url":null,"abstract":"<p><p>ObjectiveTo gather the existing evidence on the impact of sit-stand desk-based interventions on working-time and full-day sedentary behavior and compare their impact across different intervention lengths.BackgroundReducing sedentary behavior is vital for improving office workers' health. Sit-stand desks promote sitting and standing alternation, but understanding their effects outside the workplace is essential for success.MethodsStudies published between January 2008 and January 2024 were searched through electronic databases (PubMed, Google Scholar, and Cochrane Library). The quality of the studies was assessed using the Quality Assessment Tool for Quantitative Studies of the Effective Public Health Practice Project.ResultsTwelve included studies showed that the intervention group experienced average reductions in full-day sedentary behavior of 68.7 min/day at 3 months, 77.7 min/day at 6 months, and 62.1 min/day at 12 months compared to the control group. For working hours sedentary behavior, reductions were observed in the intervention group at 9 weeks (73.0 min/day), 3 months (88.0 min/day), 6 months (80.8 min/day), and 12 months (48.0 min/day) relative to the control group.ConclusionsSit-stand desk interventions can be effective in helping office workers reduce sedentary behavior in the short, medium, and long-term both at work and throughout the full-day.ApplicationActive workstation interventions, including sit-stand desks, educational sessions, and alert software, aim to reduce sedentary behavior among office workers. While sit-stand desks show promise in decreasing sitting time during working hours, their long-term effectiveness and impact beyond the workplace remain uncertain. This review evaluates their effectiveness across different durations, addressing both workplace and full-day impact.</p>","PeriodicalId":56333,"journal":{"name":"Human Factors","volume":" ","pages":"695-713"},"PeriodicalIF":2.9,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142774159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human Factors. Pub Date: 2025-07-01. Epub Date: 2024-12-26. DOI: 10.1177/00187208241309748
SeHee Jung, Bingyi Su, Lu Lu, Liwei Qing, Xu Xu
{"title":"Video-Based Lifting Action Recognition Using Rank-Altered Kinematic Feature Pairs.","authors":"SeHee Jung, Bingyi Su, Lu Lu, Liwei Qing, Xu Xu","doi":"10.1177/00187208241309748","DOIUrl":"10.1177/00187208241309748","url":null,"abstract":"<p><p>ObjectiveTo identify lifting actions and count the number of lifts performed in videos based on robust class prediction and a streamlined process for reliable real-time monitoring of lifting tasks.BackgroundTraditional methods for recognizing lifting actions often rely on deep learning classifiers applied to human motion data collected from wearable sensors. Despite their high performance, these methods can be difficult to implement on systems with limited hardware resources.MethodThe proposed method follows a five-stage process: (1) BlazePose, a real-time pose estimation model, detects key joints of the human body. (2) These joints are preprocessed by smoothing, centering, and scaling techniques. (3) Kinematic features are extracted from the preprocessed joints. (4) Video frames are classified as lifting or nonlifting using rank-altered kinematic feature pairs. (5) A lifting counting algorithm counts the number of lifts based on the class predictions.ResultsNine rank-altered kinematic feature pairs are identified as key pairs. These pairs were used to construct an ensemble classifier, which achieved 0.89 or above in classification metrics, including accuracy, precision, recall, and F1 score. This classifier showed an accuracy of 0.90 in lifting counting and a latency of 0.06 ms, which is at least 12.5 times faster than baseline classifiers.ConclusionThis study demonstrates that computer vision-based kinematic features could be adopted to effectively and efficiently recognize lifting actions.ApplicationThe proposed method could be deployed on various platforms, including mobile devices and embedded systems, to monitor lifting tasks in real-time for the proactive prevention of work-related low-back injuries.</p>","PeriodicalId":56333,"journal":{"name":"Human Factors","volume":" ","pages":"656-672"},"PeriodicalIF":2.9,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142900935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human Factors. Pub Date: 2025-07-01. Epub Date: 2024-12-10. DOI: 10.1177/00187208241306966
Yanlu Cao, Maosong Jiang, Zhuxi Yao, Shufeng Xia, Wenlong Liu
{"title":"Exploring Eye Movement Features of Motion Sickness Using Closed-Track Driving Experiments.","authors":"Yanlu Cao, Maosong Jiang, Zhuxi Yao, Shufeng Xia, Wenlong Liu","doi":"10.1177/00187208241306966","DOIUrl":"10.1177/00187208241306966","url":null,"abstract":"<p><p>ObjectiveTo explore and validate effective eye movement features related to motion sickness (MS) through closed-track experiments and to provide valuable insights for practical applications.BackgroundWith the development of autonomous vehicles (AVs), MS has attracted more and more attention. Eye movements have great potential to evaluate the severity of MS as an objective quantitative indicator of vestibular function. Eye movement signals can be easily and noninvasively collected using a camera, which will not cause discomfort or disturbance to passengers, thus making it highly applicable.MethodEye movement data were collected from 72 participants susceptible to MS in closed-track driving environments. We extracted features including blink rate (BR), total number of fixations (TNF), total duration of fixations (TDF), mean duration of fixations (MDF), saccade amplitude (SA), saccade duration (SD), and number of nystagmus (NN). The statistical method and multivariate long short-term memory fully convolutional network (MLSTM-FCN) were used to validate the effectiveness of eye movement features.ResultsSignificant differences were shown in the extracted eye movement features across different levels of MS through statistical analysis. The MLSTM-FCN model achieved an accuracy of 91.37% for MS detection and 88.51% for prediction in binary classification. For ternary classification, it achieved an accuracy of 80.54% for MS detection and 80.11% for prediction.ConclusionEvaluating MS through eye movements is effective. The MLSTM-FCN model based on eye movements can efficiently detect and predict MS.ApplicationThis work can be used to provide a possible indication and early warning for MS.</p>","PeriodicalId":56333,"journal":{"name":"Human Factors","volume":" ","pages":"714-730"},"PeriodicalIF":2.9,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142830919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Biodynamic Modeling and Analysis of Human-Exoskeleton Interactions in Simulated Patient Handling Tasks.","authors":"Yinong Chen, Wei Yin, Liying Zheng, Ranjana Mehta, Xudong Zhang","doi":"10.1177/00187208241311271","DOIUrl":"10.1177/00187208241311271","url":null,"abstract":"<p><p>ObjectiveTo investigate the biodynamics of human-exoskeleton interactions during patient handling tasks using a subject-specific modeling approach.BackgroundExoskeleton technology holds promise for mitigating musculoskeletal disorders caused by manual handling and most alarmingly by patient handling jobs. A deeper, more unified understanding of the biomechanical effects of exoskeleton use calls for advanced subject-specific models of complex, dynamic human-exoskeleton interactions.MethodsTwelve sex-balanced healthy participants performed three simulated patient handling tasks along with a reference load-lifting task, with and without wearing the exoskeleton, while their full-body motion and ground reaction forces were measured. Subject-specific models were constructed using motion and force data. Biodynamic response variables derived from the models were analyzed to examine the effects of the exoskeleton. Model validation used load-lifting trials with known hand forces.ResultsThe use of exoskeleton significantly reduced (19.7%-27.2%) the peak lumbar flexion moment but increased (26.4%-47.8%) the peak lumbar flexion motion, with greater moment percent reduction in more symmetric handling tasks; similarly affected the shoulder joint moments and motions but only during two more symmetric handling tasks; and significantly reduced the peak motions for the rest of the body joints.ConclusionSubject-specific biodynamic models simulating exoskeleton-assisted patient handling were constructed and validated, demonstrating that the exoskeleton effectively lessened the peak loading to the lumbar and shoulder joints as prime movers while redistributing more motions to these joints and less to the remaining joints.ApplicationThe findings offer new insights into biodynamic responses during exoskeleton-assisted patient handling, benefiting the development of more effective, possibly task- and individual-customized, exoskeletons.</p>","PeriodicalId":56333,"journal":{"name":"Human Factors","volume":" ","pages":"641-655"},"PeriodicalIF":2.9,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12127603/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142928181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human Factors. Pub Date: 2025-07-01. Epub Date: 2024-11-28. DOI: 10.1177/00187208241302475
Shiyu Deng, Chaitanya Kulkarni, Jinwoo Oh, Sarah Henrickson Parker, Nathan Lau
{"title":"Comparison Between Scene-Independent and Scene-Dependent Eye Metrics in Assessing Psychomotor Skills.","authors":"Shiyu Deng, Chaitanya Kulkarni, Jinwoo Oh, Sarah Henrickson Parker, Nathan Lau","doi":"10.1177/00187208241302475","DOIUrl":"10.1177/00187208241302475","url":null,"abstract":"<p><p>ObjectiveThis study aims to compare the relative sensitivity between scene-independent and scene-dependent eye metrics in assessing trainees' performance in simulated psychomotor tasks.BackgroundEye metrics have been extensively studied for skill assessment and training in psychomotor tasks, including aviation, driving, and surgery. These metrics can be categorized as scene-independent or scene-dependent, based on whether predefined areas of interest are considered. There is a paucity of direct comparisons between these metric types, particularly in their ability to assess performance during early training.MethodThirteen medical students practiced the peg transfer task in the Fundamentals of Laparoscopic Surgery. Scene-independent and scene-dependent eye metrics, completion time, and tool motion metrics were derived from eye-tracking data and task videos. K-means clustering of nine eye metrics identified three groups of practice trials with similar gaze behaviors, corresponding to three performance levels verified by completion time and tool motion metrics. A random forest model using eye metrics estimated classification accuracy and determined the feature importance of the eye metrics.ResultsScene-dependent eye metrics demonstrated a clearer linear trend with performance levels than scene-independent metrics. The random forest model achieved 88.59% accuracy, identifying the top four predictors of performance as scene-dependent metrics, whereas the two least effective predictors were scene-independent metrics.ConclusionScene-dependent eye metrics are overall more sensitive than scene-independent ones for assessing trainee performance in simulated psychomotor tasks.ApplicationThe study's findings are significant for advancing eye metrics in psychomotor skill assessment and training, enhancing operator competency, and promoting safe operations.</p>","PeriodicalId":56333,"journal":{"name":"Human Factors","volume":" ","pages":"673-694"},"PeriodicalIF":2.9,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142752541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human Factors. Pub Date: 2025-07-01. Epub Date: 2025-01-22. DOI: 10.1177/00187208251314248
Xiaolu Bai, Jing Feng
{"title":"Awakening the Disengaged: Can Driving-Related Prompts Engage Drivers in Partial Automation?","authors":"Xiaolu Bai, Jing Feng","doi":"10.1177/00187208251314248","DOIUrl":"10.1177/00187208251314248","url":null,"abstract":"<p><p>ObjectiveThis study explores the effectiveness of conversational prompts on enhancing driver monitoring behavior and takeover performance in partially automated driving under two non-driving-related task (NDRT) scenarios with varying workloads.BackgroundDriver disengagement in partially automated driving is a serious safety concern. Intermittent conversational prompts that require responses may be a solution. However, existing literature is limited with inconsistent findings. There is little consideration of NDRTs as an important context, despite their ubiquitous involvement. A method is also lacking to measure driver engagement at the cognitive level, beyond manual and visual engagements.MethodsParticipants operated a partially automated vehicle in a simulator across six predefined drives. In each drive, participants either received driving-related prompts, daily-conversation prompts, or no prompts, with or without a takeover notification. The first experiment instructed participants to engage in NDRTs at their choice and the second experiment incentivized solving demanding anagrams with monetary rewards.ResultsWhen participants were voluntarily engaged in NDRTs, answering driving-related prompts and receiving takeover notifications improved their monitoring behavior and takeover performance. However, when participants were involved in the more demanding and incentivized NDRT, answering prompts had little effect.ConclusionThe study supports the importance of both maintaining appropriate workload and processing driving-related information during partially automated driving. Driving-related prompts improve driver engagement and takeover performance, but they are not robust enough to compete with NDRTs that have high motivational appeals and cognitive demands.ApplicationThe design of driver engagement tools should consider the workload and information processing mechanisms.</p>","PeriodicalId":56333,"journal":{"name":"Human Factors","volume":" ","pages":"731-752"},"PeriodicalIF":2.9,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143025980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human Factors. Pub Date: 2025-06-15. DOI: 10.1177/00187208251349269
Monica Tatasciore, Laura Bennett, Vanessa K Bowden, Jason Bell, Troy A W Visser, Ken McAnally, Jason S McCarley, Matthew B Thompson, Christopher Shanahan, Robert Morris, Shayne Loft
{"title":"Adaptable Automation Transparency: Should Humans Be Provided Flexibility to Self-Select Transparency Information?","authors":"Monica Tatasciore, Laura Bennett, Vanessa K Bowden, Jason Bell, Troy A W Visser, Ken McAnally, Jason S McCarley, Matthew B Thompson, Christopher Shanahan, Robert Morris, Shayne Loft","doi":"10.1177/00187208251349269","DOIUrl":"https://doi.org/10.1177/00187208251349269","url":null,"abstract":"<p><p>ObjectiveWe examined whether allowing operators to self-select automation transparency level (adaptable transparency) could improve accuracy of automation use compared to nonadaptable (fixed) low and high transparency. We examined factors underlying higher transparency selection (decision risk, perceived difficulty).BackgroundIncreased fixed transparency typically improves automation use accuracy but can increase bias toward agreeing with automated advice. Adaptable transparency may further improve automation use if it increases the perceived expected value of high transparency information.MethodsAcross two studies, participants completed an uninhabited vehicle (UV) management task where they selected the optimal UV to complete missions. Automation advised the optimal UV but was not always correct. Automation transparency (fixed low, fixed high, adaptable) and decision risk were manipulated within-subjects.ResultsWith adaptable transparency, participants selected higher transparency on 41% of missions and were more likely to select it for missions perceived as more difficult. Decision risk did not impact transparency selection. Increased fixed transparency (low to high) did not benefit automation use accuracy, but reduced decision times. Adaptable transparency did not improve automation use compared to fixed transparency.ConclusionWe found no evidence that adaptable transparency improved automation use. Despite a lack of fixed transparency effects in the current study, an aggregated analysis of our work to date using the UV management paradigm indicated that higher fixed transparency improves automation use accuracy, reduces decision time and perceived workload, and increases trust in automation.ApplicationThe current study contributes to the emerging evidence-base regarding optimal automation transparency design in the modern workplace.</p>","PeriodicalId":56333,"journal":{"name":"Human Factors","volume":" ","pages":"187208251349269"},"PeriodicalIF":2.9,"publicationDate":"2025-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144303661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human Factors. Pub Date: 2025-06-11. DOI: 10.1177/00187208251348395
Jessica R Lee, Robert S Gutzwiller
{"title":"Do the Eyes Have It? A Review of Using Eye Tracking for Automation Trust Measurement.","authors":"Jessica R Lee, Robert S Gutzwiller","doi":"10.1177/00187208251348395","DOIUrl":"https://doi.org/10.1177/00187208251348395","url":null,"abstract":"<p><p>ObjectiveWe conducted a literature review investigating the validity of eye tracking metrics appropriately representing trust in automation.BackgroundAs researchers grow interested in measuring trust in automation, there has been a need to find a reliable and accurate measurement tool. Many articles have measured automation trust using eye tracking, assuming that as trust increases, visual attention from eye tracking metrics decreases. Eye tracking is an attractive potential measure for its nonintrusive and objective nature.MethodIn this systematic literature review, we looked at studies that have tested the relationship between eye tracking and trust to determine its validity and reliability.ResultsAcross 22 articles that investigated the relationship between trust and eye tracking, only about half found a negative significant relationship, whereas the other half found no relationship at all.ConclusionThe relationship between automation trust and eye tracking is inconsistent and unreliable. A wide variety of trust and eye tracking metrics were used, but only about half of the papers found any kind of relationship. The relationship did not appear robust enough to prevail when different eye tracking and trust metrics were applied in various study designs.ApplicationAn effective eye tracking-trust relationship would be useful in various applications (e.g., autonomous driving). Further, this relationship is crucial when there is a clear distinction between attention allocated to automated components of a system (e.g., car display) and unrelated displays to allow for an easy separation of a location associated with high trust versus low trust.</p>","PeriodicalId":56333,"journal":{"name":"Human Factors","volume":" ","pages":"187208251348395"},"PeriodicalIF":2.9,"publicationDate":"2025-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144267969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human Factors. Pub Date: 2025-06-09. DOI: 10.1177/00187208251349599
Allison Lynch, Naila Ayala, Shi Cao, Ewa Niechwiej-Szwedo, Suzanne Kearns, Elizabeth Irving
{"title":"The Impact of Reduced Vision on Simulated Flight Performance in Novice Pilots: Toward Establishing Performance-Based and Operatically Representative Visual Acuity Standards.","authors":"Allison Lynch, Naila Ayala, Shi Cao, Ewa Niechwiej-Szwedo, Suzanne Kearns, Elizabeth Irving","doi":"10.1177/00187208251349599","DOIUrl":"https://doi.org/10.1177/00187208251349599","url":null,"abstract":"<p><p>ObjectiveTo investigate the effect of visual degradation on simulated flight performance, perceived stress, and perceived task difficulty.BackgroundEstablishing visual standards for pilots is crucial, although it may limit the pool of eligible candidates and impact pilot retention. Despite this, there is limited understanding regarding the influence of vision on pilot performance.MethodTwenty participants (0-300 flight hours) completed a flight simulation task using the ALSIM AL250 in two experiments. Distance static visual acuity (VA) ranged from 6/6 (20/20) to 6/60, with scenarios including no vision. Experiment 1 (<i>n</i> = 10) tested landing performance for 6 VA conditions, while experiment 2 (<i>n</i> = 10) involved a more difficult circuit task (traffic pattern) with 8 VA conditions. Participants completed stress and difficulty questionnaires between trials. Flight performance variables assessed were vertical speed, altitude, attitude, pitch, and roll.ResultsIn both flight simulation experiments, vision degradation did not affect novice pilots' landing performance, but complete loss of vision led to loss of control. Participants in experiment 1 experienced stress at lower perturbation level than in experiment 2.ConclusionVision degradation up to 6/60 had no discernible impact on novice pilots' simulated approach to landing or flight circuit and landing. Total vision loss led to loss of aircraft control. Perceived stress and difficulty increased with reduced vision.ApplicationThis research opens the door to reexamine the visual standards for pilots and serve as a simple tool to manipulate perceived stress and difficulty in operational tasks.</p>","PeriodicalId":56333,"journal":{"name":"Human Factors","volume":" ","pages":"187208251349599"},"PeriodicalIF":2.9,"publicationDate":"2025-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144250985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}