Title: An Analysis of Squad Communication Behaviors during a Field-Training Exercise to Support Tactical Decision Making
Authors: Jason D. Saville, Randall D. Spain, J. Johnston, James Lester
DOI: https://doi.org/10.54941/ahfe1001498

Abstract: Understanding how teams function in dynamic environments is critical for advancing theories of team development. In this paper, we compared communication behaviors of high and low performing U.S. Army squads that completed a field training event designed to assess tactical decision-making skills and performance under stress. Transcribed audio logs of U.S. Army squad communications were analyzed. A series of 2 (performance group) by 2 (time: Pre-Contact and Post-Contact) mixed-model ANOVAs were conducted to determine whether team communication behaviors changed for squads after coming under duress from hostile contact. Significant main effects for time were found for several communication labels, indicating that communication patterns differed as task complexity and stressors increased. Significant interaction effects were found between time and performance group for the number of commands given by squad leaders and overall speech frequency. Results highlight the value of examining communications at a granular level, as adaptive patterns may otherwise be overlooked.

Title: Distinguishing Between Dynamic Altitude Breathing Threats to Improve Training
Authors: Kylie Fernandez, M. Tindall, B. Atkinson, D. Logsdon, Emily C. Anania
DOI: https://doi.org/10.54941/ahfe1001496

Abstract: Breathing-related adverse physiological conditions are a prominent Warfighter pilot problem (Inspector General 2020). As a result of an investigation citing multiple types of adverse physiological conditions with various causes and symptoms (DoN 2017), there have been changes to training requirements to broaden the focus to include Dynamic Altitude Breathing Threat Training (DoN 2020). However, there remain questions about symptom definitions, distinctiveness, and response procedures that influence the content of this new training. To investigate the effects of different breathing conditions, the authors propose a between-subjects design with adjustments to breathing conditions (i.e., restricted oxygen, restricted inhalation, restricted exhalation) using a mask-on breathing device. Dependent measures include physiological data and pilot symptomology. The objective of this investigation is to inform awareness training for dynamic altitude breathing threats by validating instructional strategies and standard operating procedures for training implementation.

Authors' Note: The views of the authors expressed herein do not necessarily represent those of the U.S. Navy or Department of Defense (DoD). Presentation of this material does not constitute or imply its endorsement, recommendation, or favoring by the DoD. NAWCTSD Public Release 22-ORL021. Distribution Statement A: Approved for public release; distribution is unlimited.
{"title":"Evaluating the Effectiveness of Mixed Reality as a Cybersickness Mitigation Strategy in Helicopter Flight Simulation","authors":"Boris Englebert, Laurie Marsman, Jur Crijnen","doi":"10.54941/ahfe1003570","DOIUrl":"https://doi.org/10.54941/ahfe1003570","url":null,"abstract":"The advent of Virtual Reality (VR) in flight simulation promises to provide a cost-effective alternative for flight crew training compared to conventional flight simulation methods. However, it has been noted that the use of VR in flight simulation can lead to a greater incidence of cybersickness, which could jeopardize the effectiveness of flight training in VR. To optimally leverage the benefits that VR in flight simulation can bring, it is critical that this higher likelihood of experiencing cybersickness is countered. Even though a variety of theories for the causes for cybersickness in VR have been formulated, one of the most widely-accepted theories hinges on the principle that the sensory conflict between the visual sensory inputs from the virtual environment and the motion that is sensed by the vestibular system can result in cybersickness. Minimizing this sensory conflict can therefore be a strategy to mitigate cybersickness. The use of Mixed Reality (MR), in which the virtual environment is visually blended with the actual environment, could potentially be used for this strategy, based on the idea that it provides a visual reference of the actual environment that corresponds with the motion that is sensed, thereby reducing the sensory conflict and, correspondingly, cybersickness.The objective of this research is to investigate the effectiveness of MR, as an alternative for VR, for the mitigation of cybersickness in helicopter flight simulation. Since the idea of using MR as a cybersickness mitigation strategy is rooted in the idea of reducing the mismatch between visual and vestibular sensory inputs, the effectiveness of MR in combination with simulator motion is investigated as well. Arguably, MR could deteriorate immersion and reduce simulation fidelity, which may hamper the ability of the pilot to adequately fly in the virtual environment. Based on this premise, it is expected that a sweet spot exists where cybersickness is reduced, while fidelity remains sufficient to perform the flying task satisfactorily. In addition to evaluating the effectiveness for cybersickness mitigation, the impact of MR on pilot performance is also investigated.A human-in-the-loop experiment was performed that featured a total of four conditions, designed to assess the impact of both MR and motion on the cybersickness development and pilot performance. The experiment was performed in a simulated AgustaWestland AW139 helicopter on a Motion Systems’ PS-6TM-150 motion platform (6DoF), combined with a Varjo XR-3 visual device. In the experiment, Royal Netherlands Air Force helicopter pilots (n=4) were instructed to fly a series of maneuvers from the ADS-33 helicopter handling qualities guidelines. 
The Pirouette task in ADS-33 is the main focus for the results analysis because it is expected that near ground dynamic maneuvering affects cybersickness more severely compared to more stable and high altitude performed tasks.The cybersickness is evaluated by means o","PeriodicalId":102446,"journal":{"name":"Human Factors and Simulation","volume":"os-10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127973580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
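
The sketch below shows one way the four-condition within-subject design (display: VR vs. MR; motion: off vs. on) could be analysed for the n=4 pilots, assuming a single cybersickness score per condition. The abstract's description of the actual measure is cut off, so the "sickness" column and all values are invented placeholders, not experimental results.

```python
# Two-way repeated-measures ANOVA sketch for a 2 (display) x 2 (motion)
# within-subject design with four pilots. All scores are placeholders.
import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "pilot":    [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4],
    "display":  ["VR", "VR", "MR", "MR"] * 4,
    "motion":   ["off", "on"] * 8,
    "sickness": [7, 8, 4, 5, 8, 9, 5, 6, 6, 7, 4, 4, 9, 9, 5, 6],
})

# Both factors are within-subject; 'pilot' identifies the repeated measures.
aov = pg.rm_anova(data=df, dv="sickness", within=["display", "motion"],
                  subject="pilot")
print(aov)
```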
{"title":"Can Machine Learning be a Good Teammate?","authors":"L. Blaha, Megan B. Morris","doi":"10.54941/ahfe1003578","DOIUrl":"https://doi.org/10.54941/ahfe1003578","url":null,"abstract":"We hypothesize that successful human-machine learning teaming requires machine learning to be a good teammate. However, little is understood about what the important design factors are for creating technology that people perceive to be good teammates. In a recent survey study, data from over 1,100 users of commercially available smart technology rated characteristics of teammates. Results indicate that across several categories of technology, a good teammate must (1) be reliable, competent and communicative, (2) build human-like relationships with the user, (3) perform their own tasks, pick up the slack, and help when someone is overloaded, (4) learn to aid and support a user’s cognitive abilities, (5) offer polite explanations and be transparent in their behaviors, (6) have common, helpful goals, and (7) act in a predictable manner. Interestingly, but not surprisingly, the degree of importance given to these various characteristics varies by several individual differences in the participants, including their agreeableness, propensity to trust technology, and tendency to be an early technology adopter. In this paper, we explore the implications of these good teammate characteristics and individual differences in the design of machine learning algorithms and their user interfaces. Machine learners, particularly if coupled with interactive learning or adaptive interface design, may be able to tailor themselves or their interactions to align with what individual users perceive to be important characteristics. This has the potential to promote more reliance and common ground. While this sounds promising, it may also risk overreliance or misunderstanding between a system’s actual capabilities and the user’s perceived capabilities. We begin to lay out the possible design space considerations for building good machine learning teammates.","PeriodicalId":102446,"journal":{"name":"Human Factors and Simulation","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132208514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: Measurement and Manipulation in Human-Agent Teams: A Review
Authors: Maartje Hidalgo, S. Rebensky, Daniel Nguyen, Myke C. Cohen, Lauren Temple, Brent D. Fegley
DOI: https://doi.org/10.54941/ahfe1003559

Abstract: In this era of the Fourth Industrial Revolution, increasingly autonomous and intelligent artificial agents are becoming more integrated into our daily lives. These agents are capable of conducting independent tasks within a teaming setting while also becoming more socially invested in the team space. While ample human-teaming theories help understand, explain, and predict the outcomes of team endeavors, no such theories yet exist for human-agent teaming. Furthermore, the development and evaluation of agents are constantly evolving. As a result, many developers use their own test plans and their own measures, making it difficult to compare findings across agent developers. Many agent developers looking to capture human-team behaviors may not sufficiently understand the benefits of specific team processes and the challenges of measuring these constructs. Ineffective team scenarios and measures could lead to unrepresentative training datasets, prolonged agent development timelines, and less effective agent predictions. With the appropriate measures and conditions, an agent would be able to detect deficits in team processes early enough to intervene during performance. This paper is a step toward the formulation of a theory of human-agent teaming: we conducted a literature review of measurable team processes that can be used to predict team performance and outcomes. The frameworks presented leverage multiple teaming frameworks, such as Marks et al.'s (2001) team process model, the IMOI model (Ilgen, 2005), and Salas et al.'s (2005) big five model, as well as more recent frameworks on human-agent teaming such as Carter-Browne et al. (2021). Specific constructs and measures within the "input" and "process" stages of these models were identified and then searched within the teams literature to find specific measurements of team processes. However, the measures are only half of the requirement for an effective team-testing scenario. Teams given an unlimited amount of time should all complete a task, but only the most effective coordinative and communicative teams can do so in a time-efficient manner. As a result, we also identified experimental manipulations that have been shown to affect team processes. This paper presents the measurement and manipulation frameworks developed under a DARPA effort, along with the benefits and costs associated with each measurement and manipulation category.
{"title":"Evaluation of a Basic Principle SMR Simulator for Experimental Human Performance Research Studies","authors":"Claire Blackett","doi":"10.54941/ahfe1003564","DOIUrl":"https://doi.org/10.54941/ahfe1003564","url":null,"abstract":"Simulator studies are important to understanding and collecting data on human performance, especially for first-of-a-kind technologies such as Small Modular Reactors (SMR) and/or in cases where the role of the human operator is expected to change, such as in multi-unit operations. But not all simulators are the same, and the level of complexity and fidelity of the simulator can significantly affect the possibilities for data collection. As a researcher, how can you evaluate whether the simulator you are using is suitable for the studies that you wish to run? In 2020, the researchers of the Halden Reactor Project (HRP) activity on operation of multiple small modular reactors had a unique opportunity to explore this question. A multi-unit basic principle integral pressurised water reactor (iPWR) simulator was installed in the FutureLab facility in Halden, Norway in early 2019, and a first, small study was conducted in late 2019 to test the simulator environment and study design. The simulator was provided to the HRP free of charge by the International Atomic Energy Agency (IAEA). It is important to note that the simulator was not designed for performance of research studies, but rather as an education tool to demonstrate the basic principles and concepts of SMR operation, and as such is limited in scope by design. The goal for the small study in 2019 was to determine whether the basic principle simulator could enable investigation of predefined topics relevant to SMR operations research, such as monitoring strategies, prioritization of taskwork and staffing requirements in multi-unit environments. The study involved two experienced former control room operators, who were tested individually in a series of scenarios of increasing complexity in a multi-unit control room setup, observed by experienced experimental researchers. Further studies with licensed control room operators were planned for 2020, but due to the COVID-19 pandemic these plans had to be postponed. Instead, the research team took the opportunity to reflect on the experience of the 2019 small study, to perform a more detailed analysis of the study results and to substantiate the feasibility of the test environment for future experimental data collection. The detailed analysis was performed in a workshop format with the research project team using a set of questions related to specific aspects of the iPWR 3-unit control room setup. The team used a “traffic light” rating system to evaluate each question, where red indicated that that the test environment item under discussion is not currently feasible and would require redesign and extensive changes in order to use it; yellow indicated that the item is not currently feasible and would require moderate changes, and green indicated that the item is currently feasible, and some minor changes could be required. 
This paper describes the evaluation process in more detail, including the criteria for assessing the usefulness of the basic princi","PeriodicalId":102446,"journal":{"name":"Human Factors and Simulation","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122580125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
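
As a small illustration of the traffic-light scheme, the sketch below tallies hypothetical red/yellow/green ratings across evaluation questions. The aspect names and the ratings themselves are invented; only the three rating categories and their meanings come from the paper.

```python
# Tally hypothetical traffic-light ratings from the evaluation workshop.
# green  = currently feasible (minor changes at most)
# yellow = not currently feasible, moderate changes required
# red    = not currently feasible, redesign / extensive changes required
from collections import Counter

ratings = {
    "multi-unit monitoring displays": "green",   # placeholder aspect and rating
    "alarm handling across units":    "yellow",  # placeholder aspect and rating
    "procedure support":              "red",     # placeholder aspect and rating
    "scenario complexity control":    "green",   # placeholder aspect and rating
}

print(Counter(ratings.values()))  # e.g. Counter({'green': 2, 'yellow': 1, 'red': 1})
```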
{"title":"A Proposed Methodology to Assess Cognitive Overload using an Augmented Situation Awareness System","authors":"D. Patton, J. Rubinstein","doi":"10.54941/ahfe1003567","DOIUrl":"https://doi.org/10.54941/ahfe1003567","url":null,"abstract":"The US Army is tasked with providing the best tools to keep military personnel at peak performance. These tools can be found in many forms: small arms, protective clothing, armored vehicles, and communication devices, etc. However, understanding when a person is cognitively overloaded does not have such a tool. Cognitive overload is nothing new, yet it is not well understood. This paper discusses cognitive overload, why it is critical to military performance, past efforts, and focuses on a methodology to assess cognitive overload using a deployed augmented situation awareness (SA) system. We will employ a currently used SA system to assess cognitive overload through an additive process designed to identify when overload occurs and performance drops. Understanding when cognitive overload occurs is critical to Soldier survivability and offsetting it before it becomes a detriment is key. We will discuss our methodology assessing when cognitive overload occurs and potential mitigation strategies.","PeriodicalId":102446,"journal":{"name":"Human Factors and Simulation","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129373892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}