Ryusei Uramune, K. Sawamura, Sei Ikeda, H. Ishizuka, O. Oshiro
Gaze Depth Estimation for In-vehicle AR Displays
Proceedings of the Augmented Humans International Conference 2023
DOI: 10.1145/3582700.3583707
Published: 2023-03-12
In our previous study, we proposed a method for judging whether a user is gazing at a semi-transparent virtual object or at the real objects behind it in augmented reality environments. This paper shows that the accuracy of our method can be improved by selecting optimal thresholds for fixation detection. Fourteen participants experienced a virtual reality environment containing a transparent subway map and buildings behind it, placed 2 m and 15 m from each participant, respectively. As a result, the accuracy of our method reached 88.3%, an improvement of 13.8 percentage points over the previous 74.5%.
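The abstract does not spell out how gaze depth is obtained, but a common approach in binocular eye tracking is vergence: intersect the two eye rays and read off the depth of their closest point, then threshold that depth to decide whether the fixation lands on the near virtual layer or the far real scene. The sketch below illustrates that general idea only; the function names, the 6 m threshold, and the ray-based depth estimate are illustrative assumptions, not the authors' published method.

```python
import numpy as np

def vergence_depth(left_origin, left_dir, right_origin, right_dir):
    """Estimate gaze depth from the closest point between the two eye rays.

    NOTE: an illustrative vergence-based estimate, not the paper's method.
    """
    # Normalize the gaze directions.
    ld = left_dir / np.linalg.norm(left_dir)
    rd = right_dir / np.linalg.norm(right_dir)
    # Closest points on two lines P(t) = l0 + t*ld and Q(s) = r0 + s*rd,
    # using the standard least-squares formulation.
    w0 = left_origin - right_origin
    b = ld @ rd
    d = ld @ w0
    e = rd @ w0
    denom = 1.0 - b * b          # a = c = 1 for unit directions
    if abs(denom) < 1e-9:        # near-parallel rays: depth is unreliable
        return float("inf")
    t = (b * e - d) / denom
    s = (e - b * d) / denom
    midpoint = (left_origin + t * ld + right_origin + s * rd) / 2.0
    return midpoint[2]           # depth along the viewing (z) axis, in meters

def classify_fixation(depth_m, threshold_m=6.0):
    """Label a fixation as on the near virtual layer or the far real scene.

    The 6 m threshold is a placeholder between the 2 m map and 15 m buildings.
    """
    return "virtual" if depth_m < threshold_m else "real"
```

With eyes ~64 mm apart and both gaze rays converging on a point 2 m away, `vergence_depth` returns ~2.0 and `classify_fixation` labels the fixation `"virtual"`; a convergence point at 15 m is labeled `"real"`. The paper's contribution, per the abstract, is tuning the fixation-detection thresholds that feed such a classifier, not the depth geometry itself.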