{"title":"Semantic analysis of mobile eyetracking data","authors":"J. Pelz","doi":"10.1145/2029956.2029958","DOIUrl":null,"url":null,"abstract":"Researchers using laboratory-based eyetracking systems now have access to sophisticated data-analysis tools to reduce raw gaze data, but the huge data sets coming from wearable eyetrackers cannot be analyzed with the same tools. The lack of constraints that make mobile systems such powerful tools prevent the analysis tools designed for static or tracked observers from working with freely moving observers.\n Proposed solutions have included infrared markers hidden in the scene to provide reference points, Simultaneous Localization and Mapping (SLAM), and multi-view geometry techniques that build models from multiple views of a scene. These methods map fixations onto predefined or extracted 3D scene models, allowing traditional static-scene analysis tools to be used.\n Another approach to analysis of mobile eyetracking data is to code fixations with semantically meaningful labels rather than mapping the fixations to fixed 3D locations. This offers two important advantages over the model-based methods; semantic mapping allows coding of dynamic scenes without the need to explicitly track objects, and it provides an inherently flexible and extensible object-based coding scheme.","PeriodicalId":405392,"journal":{"name":"PETMEI '11","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"PETMEI '11","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2029956.2029958","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1
Abstract
Researchers using laboratory-based eyetracking systems now have access to sophisticated data-analysis tools for reducing raw gaze data, but the huge data sets produced by wearable eyetrackers cannot be analyzed with the same tools. The very lack of constraints that makes mobile systems such powerful tools prevents analysis tools designed for static or tracked observers from working with freely moving observers.
Proposed solutions have included infrared markers hidden in the scene to provide reference points, Simultaneous Localization and Mapping (SLAM), and multi-view geometry techniques that build models from multiple views of a scene. These methods map fixations onto predefined or extracted 3D scene models, allowing traditional static-scene analysis tools to be used.
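A minimal sketch (not from the paper) of the reference-point idea: if markers with known positions in a static scene model can be detected in each scene-camera frame, a homography can map the 2D gaze point into model coordinates, where conventional static-scene analysis (e.g. area-of-interest statistics) applies. Marker detection and the model coordinates are assumed to be provided elsewhere; the function name and arguments are illustrative only.

```python
import numpy as np
import cv2


def gaze_to_model(gaze_xy, marker_img_xy, marker_model_xy):
    """Map a gaze point from scene-camera pixels to scene-model coordinates.

    gaze_xy         -- (x, y) gaze position in the scene-camera frame
    marker_img_xy   -- Nx2 detected marker centers in the same frame (N >= 4)
    marker_model_xy -- Nx2 corresponding marker positions in the scene model
    """
    # Estimate the frame-to-model homography from the marker correspondences.
    H, _ = cv2.findHomography(
        np.asarray(marker_img_xy, dtype=np.float32),
        np.asarray(marker_model_xy, dtype=np.float32),
        method=cv2.RANSAC,
    )
    if H is None:
        return None  # too few or degenerate correspondences in this frame

    # Project the gaze point through the homography into model coordinates.
    pt = np.asarray([[gaze_xy]], dtype=np.float32)   # shape (1, 1, 2)
    mapped = cv2.perspectiveTransform(pt, H)
    return tuple(mapped[0, 0])
```

This planar-homography version is only a stand-in for the richer SLAM and multi-view-geometry pipelines mentioned above, but it illustrates the common structure: per-frame registration of the scene camera against a model, followed by projection of the gaze point into that model.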
Another approach to analyzing mobile eyetracking data is to code fixations with semantically meaningful labels rather than mapping them to fixed 3D locations. This offers two important advantages over the model-based methods: semantic mapping allows coding of dynamic scenes without explicitly tracking objects, and it provides an inherently flexible, extensible object-based coding scheme.
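As a rough illustration of what such semantic coding might look like in practice (the data structure and label names below are assumptions, not the paper's scheme), each fixation carries a label rather than a 3D position, and analyses aggregate over labels:

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Fixation:
    start_ms: float
    duration_ms: float
    label: str  # semantically meaningful code, e.g. "pedestrian", "signage"


def dwell_time_by_label(fixations):
    """Total fixation time per semantic label, regardless of where the
    labeled object happened to be in the scene."""
    totals = defaultdict(float)
    for f in fixations:
        totals[f.label] += f.duration_ms
    return dict(totals)


# The label set can be extended freely without rebuilding any 3D model.
fixations = [
    Fixation(0, 250, "signage"),
    Fixation(300, 180, "pedestrian"),
    Fixation(520, 400, "signage"),
]
print(dwell_time_by_label(fixations))  # {'signage': 650.0, 'pedestrian': 180.0}
```

Because the coding is object-based rather than location-based, moving objects and changing viewpoints pose no special problem: the same label applies wherever the object appears in the scene video.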