Decoding target discriminability and time pressure using eye and head movement features in a foraging search task.

IF 3.1 | Region 2, Psychology | Q1 PSYCHOLOGY, EXPERIMENTAL
Anthony J Ries, Chloe Callahan-Flintoft, Anna Madison, Louis Dankovich, Jonathan Touryan
{"title":"利用眼头运动特征解码觅食搜索任务中的目标可分辨性和时间压力。","authors":"Anthony J Ries, Chloe Callahan-Flintoft, Anna Madison, Louis Dankovich, Jonathan Touryan","doi":"10.1186/s41235-025-00657-y","DOIUrl":null,"url":null,"abstract":"<p><p>In military operations, rapid and accurate decision-making is crucial, especially in visually complex and high-pressure environments. This study investigates how eye and head movement metrics can infer changes in search behavior during a naturalistic shooting scenario in virtual reality (VR). Thirty-one participants performed a foraging search task using a head-mounted display (HMD) with integrated eye tracking. Participants searched for targets among distractors under varying levels of target discriminability (easy vs. hard) and time pressure (low vs. high). As expected, behavioral results indicated that increased discrimination difficulty and greater time pressure negatively impacted performance, leading to slower response times and reduced d-prime. Support vector classifiers assigned a search condition, discriminability and time pressure, to each trial based on eye and head movement features. Combined eye and head features produced the most accurate classification model for capturing tasked-induced changes in search behavior, with the combined model outperforming those based on eye or head features alone. While eye features demonstrated strong predictive power, the inclusion of head features significantly enhanced model performance. Across the ensemble of eye metrics, fixation-related features were the most robust for classifying target discriminability, while saccadic-related features played a similar role for time pressure. In contrast, models constrained to head metrics emphasized global movement (amplitude, velocity) for classifying discriminability but shifted toward kinematic intensity (acceleration, jerk) in time pressure condition. Together these results speak to the complementary role of eye and head movements in understanding search behavior under changing task parameters.</p>","PeriodicalId":46827,"journal":{"name":"Cognitive Research-Principles and Implications","volume":"10 1","pages":"53"},"PeriodicalIF":3.1000,"publicationDate":"2025-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12373606/pdf/","citationCount":"0","resultStr":"{\"title\":\"Decoding target discriminability and time pressure using eye and head movement features in a foraging search task.\",\"authors\":\"Anthony J Ries, Chloe Callahan-Flintoft, Anna Madison, Louis Dankovich, Jonathan Touryan\",\"doi\":\"10.1186/s41235-025-00657-y\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>In military operations, rapid and accurate decision-making is crucial, especially in visually complex and high-pressure environments. This study investigates how eye and head movement metrics can infer changes in search behavior during a naturalistic shooting scenario in virtual reality (VR). Thirty-one participants performed a foraging search task using a head-mounted display (HMD) with integrated eye tracking. Participants searched for targets among distractors under varying levels of target discriminability (easy vs. hard) and time pressure (low vs. high). As expected, behavioral results indicated that increased discrimination difficulty and greater time pressure negatively impacted performance, leading to slower response times and reduced d-prime. 
Support vector classifiers assigned a search condition, discriminability and time pressure, to each trial based on eye and head movement features. Combined eye and head features produced the most accurate classification model for capturing tasked-induced changes in search behavior, with the combined model outperforming those based on eye or head features alone. While eye features demonstrated strong predictive power, the inclusion of head features significantly enhanced model performance. Across the ensemble of eye metrics, fixation-related features were the most robust for classifying target discriminability, while saccadic-related features played a similar role for time pressure. In contrast, models constrained to head metrics emphasized global movement (amplitude, velocity) for classifying discriminability but shifted toward kinematic intensity (acceleration, jerk) in time pressure condition. Together these results speak to the complementary role of eye and head movements in understanding search behavior under changing task parameters.</p>\",\"PeriodicalId\":46827,\"journal\":{\"name\":\"Cognitive Research-Principles and Implications\",\"volume\":\"10 1\",\"pages\":\"53\"},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2025-08-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12373606/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cognitive Research-Principles and Implications\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1186/s41235-025-00657-y\",\"RegionNum\":2,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"PSYCHOLOGY, EXPERIMENTAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Research-Principles and Implications","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1186/s41235-025-00657-y","RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Citations: 0

Abstract

In military operations, rapid and accurate decision-making is crucial, especially in visually complex and high-pressure environments. This study investigates how eye and head movement metrics can infer changes in search behavior during a naturalistic shooting scenario in virtual reality (VR). Thirty-one participants performed a foraging search task using a head-mounted display (HMD) with integrated eye tracking. Participants searched for targets among distractors under varying levels of target discriminability (easy vs. hard) and time pressure (low vs. high). As expected, behavioral results indicated that increased discrimination difficulty and greater time pressure negatively impacted performance, leading to slower response times and reduced d-prime. Support vector classifiers assigned a search condition (target discriminability or time pressure) to each trial based on eye and head movement features. Combining eye and head features produced the most accurate classification model for capturing task-induced changes in search behavior, with the combined model outperforming those based on eye or head features alone. While eye features demonstrated strong predictive power, the inclusion of head features significantly enhanced model performance. Across the ensemble of eye metrics, fixation-related features were the most robust for classifying target discriminability, while saccade-related features played a similar role for time pressure. In contrast, models constrained to head metrics emphasized global movement (amplitude, velocity) when classifying discriminability but shifted toward kinematic intensity (acceleration, jerk) under time pressure. Together, these results speak to the complementary role of eye and head movements in understanding search behavior under changing task parameters.
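To make the analysis pattern described in the abstract concrete, the sketch below illustrates, on synthetic data, how per-trial eye and head features could feed a support vector classifier for condition decoding, and how d-prime is computed from hit and false-alarm rates (d' = z(hit rate) − z(false-alarm rate)). The feature names, synthetic data, linear kernel, and cross-validation setup are illustrative assumptions, not the authors' reported pipeline.

```python
# Minimal sketch (assumptions noted above): (1) d-prime from hit/false-alarm
# rates, (2) SVM classification of a trial's condition from eye-only,
# head-only, and combined feature sets. All data and feature names are
# synthetic placeholders, not the paper's actual features or results.
import numpy as np
from scipy.stats import norm
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """d' = z(hit rate) - z(false-alarm rate), with rates clipped away from 0/1."""
    hit_rate = np.clip(hit_rate, 1e-3, 1 - 1e-3)
    false_alarm_rate = np.clip(false_alarm_rate, 1e-3, 1 - 1e-3)
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

rng = np.random.default_rng(0)
n_trials = 200

# Hypothetical per-trial features: eye metrics (e.g., fixation duration,
# fixation count, saccade amplitude, saccade velocity) and head metrics
# (e.g., amplitude, velocity, acceleration, jerk).
eye = rng.normal(size=(n_trials, 4))
head = rng.normal(size=(n_trials, 4))
combined = np.hstack([eye, head])

# Hypothetical binary condition label per trial, e.g. low vs. high time pressure.
y = rng.integers(0, 2, size=n_trials)

def svm_accuracy(X, y):
    """Cross-validated accuracy of a linear SVM on standardized features."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    return cross_val_score(clf, X, y, cv=5).mean()

print("behavioral d':", d_prime(hit_rate=0.85, false_alarm_rate=0.20))
for name, X in [("eye only", eye), ("head only", head), ("eye + head", combined)]:
    print(f"{name}: accuracy = {svm_accuracy(X, y):.2f}")
```

With real per-trial features in place of the synthetic arrays, comparing the eye-only, head-only, and combined models in this way mirrors the model comparison summarized in the abstract.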

Source journal metrics: CiteScore 6.80 | Self-citation rate 7.30% | Annual publications 96 | Review time 25 weeks