A. Chung, F. Deligianni, Xiao-Peng Hu, Guang-Zhong Yang
{"title":"Visual feature extraction via eye tracking for saliency driven 2D/3D registration","authors":"A. Chung, F. Deligianni, Xiao-Peng Hu, Guang-Zhong Yang","doi":"10.1145/968363.968371","DOIUrl":null,"url":null,"abstract":"This paper presents a new technique for extracting visual saliency from experimental eye tracking data. An eye-tracking system is employed to determine which features that a group of human observers considered to be salient when viewing a set of video images. With this information, a biologically inspired saliency map is derived by transforming each observed video image into a feature space representation. By using a feature normalisation process based on the relative abundance of visual features within the background image and those dwelled on eye tracking scan paths, features related to visual attention are determined. These features are then back projected to the image domain to determine spatial areas of interest for unseen video images. The strengths and weaknesses of the method are demonstrated with feature correspondence for 2D to 3D image registration of endoscopy videos with computed tomography data. The biologically derived saliency map is employed to provide an image similarity measure that forms the heart of the 2D/3D registration method. It is shown that by only processing selective regions of interest as determined by the saliency map, rendering overhead can be greatly reduced. 
Significant improvements in pose estimation efficiency can be achieved without apparent reduction in registration accuracy when compared to that of using a non-saliency based similarity measure.","PeriodicalId":127538,"journal":{"name":"Eye Tracking Research & Application","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2004-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"13","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Eye Tracking Research & Application","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/968363.968371","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 13
Abstract
This paper presents a new technique for extracting visual saliency from experimental eye-tracking data. An eye-tracking system is employed to determine which features a group of human observers considered salient when viewing a set of video images. With this information, a biologically inspired saliency map is derived by transforming each observed video image into a feature-space representation. Features related to visual attention are determined through a normalisation process based on the relative abundance of visual features in the background image compared to those dwelled upon along the eye-tracking scan paths. These features are then back-projected to the image domain to determine spatial areas of interest for unseen video images. The strengths and weaknesses of the method are demonstrated through feature correspondence for 2D-to-3D registration of endoscopy videos with computed tomography data. The biologically derived saliency map provides an image similarity measure that forms the heart of the 2D/3D registration method. It is shown that by processing only selected regions of interest, as determined by the saliency map, rendering overhead can be greatly reduced. Significant improvements in pose-estimation efficiency can be achieved without apparent reduction in registration accuracy compared to using a non-saliency-based similarity measure.
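The normalisation and back-projection steps described in the abstract can be sketched roughly as follows. This is a minimal illustrative interpretation, not the authors' implementation: the histogram binning, the epsilon constant, and the function names `saliency_weights` and `back_project` are all assumptions. The idea is to weight each feature-response bin by how over-represented it is under observer fixations relative to the whole image, then map those weights back to pixels to form a saliency map.

```python
import numpy as np

def saliency_weights(features, fixation_mask, n_bins=16):
    """Weight each feature bin by its relative abundance under fixations
    versus the whole image (hypothetical normalisation scheme)."""
    lo, hi = features.min(), features.max()
    bins = np.linspace(lo, hi, n_bins + 1)
    # Histogram of feature responses over the full (background) image.
    bg_hist, _ = np.histogram(features, bins=bins)
    # Histogram restricted to pixels dwelled on along scan paths.
    fix_hist, _ = np.histogram(features[fixation_mask], bins=bins)
    # Normalise both to densities; epsilon avoids division by zero.
    bg_p = bg_hist / max(bg_hist.sum(), 1)
    fix_p = fix_hist / max(fix_hist.sum(), 1)
    return fix_p / (bg_p + 1e-9)

def back_project(features, weights, n_bins=16):
    """Map per-bin weights back to the image domain as a saliency map."""
    lo, hi = features.min(), features.max()
    bins = np.linspace(lo, hi, n_bins + 1)
    idx = np.clip(np.digitize(features, bins) - 1, 0, n_bins - 1)
    return weights[idx]
```

Under this sketch, feature values that observers dwell on far more often than chance receive large weights, so the back-projected map highlights attention-related regions in unseen frames with similar feature statistics.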