Head-EyeK: Head-Eye Coordination and Control Learned in Virtual Reality.
Authors: Yifang Pan, Ludwig Sidenmark, Karan Singh
Journal: IEEE Transactions on Visualization and Computer Graphics
DOI: 10.1109/TVCG.2025.3589333 (https://doi.org/10.1109/TVCG.2025.3589333)
Published: 2025-07-15 (Journal Article)
Human head-eye coordination is a complex behavior, shaped by physiological constraints, psychological context, and gaze intent. Current context-specific gaze models in both psychology and graphics fail to produce plausible head-eye coordination for general patterns of human gaze behavior. In this paper, we: 1) propose and validate an experimental protocol to collect head-eye motion data during sequential look-at tasks in Virtual Reality; 2) identify factors influencing head-eye coordination using this data; and 3) introduce a head-eye coordinated Inverse Kinematics gaze model, Head-EyeK, that integrates these insights. Our evaluation of Head-EyeK is threefold: we show the impact of algorithmic parameters on gaze behavior; we show a favorable comparison to prior art, both quantitatively against ground-truth data and qualitatively via a perceptual study; and we show multiple scenarios of complex gaze behavior credibly animated using Head-EyeK.
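To give a concrete sense of what a head-eye coordinated gaze model decomposes, the sketch below splits a single gaze shift between head and eye rotation. This is an illustrative toy only, not the Head-EyeK algorithm: the parameters `head_gain` (fraction of the shift carried by the head) and `eye_limit_deg` (oculomotor range) are hypothetical names, and the model is reduced to one yaw axis.

```python
import math

def split_gaze(target_yaw_deg: float,
               head_gain: float = 0.6,
               eye_limit_deg: float = 35.0) -> tuple[float, float]:
    """Split a target gaze yaw (deg) into head yaw and eye-in-head yaw.

    Illustrative only: head_gain and eye_limit_deg are hypothetical
    parameters, not taken from the Head-EyeK paper.
    """
    head_yaw = head_gain * target_yaw_deg
    eye_yaw = target_yaw_deg - head_yaw
    # Clamp the eye to its oculomotor range; the head absorbs the excess
    # so that head + eye always reaches the target.
    if abs(eye_yaw) > eye_limit_deg:
        eye_yaw = math.copysign(eye_limit_deg, eye_yaw)
        head_yaw = target_yaw_deg - eye_yaw
    return head_yaw, eye_yaw
```

The invariant worth noting is that head and eye yaw always sum to the target yaw; what a learned model like Head-EyeK presumably captures is how that split (and its timing) varies with context and intent, rather than being a fixed gain.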