Proceedings APGV : ... Symposium on Applied Perception in Graphics and Visualization: Latest Publications

Optic flow and physical effort as cues for the perception of the rate of self-produced motion in VE
Benjamin Chihak, H. Pick, J. Plumert, Christine J. Ziemer, Sabarish V. Babu, J. Cremer, J. Kearney
DOI: 10.1145/1620993.1621026
Abstract: Understanding how humans perceive their rate of translational locomotion through the world is important for designing virtual environments. People have access to two primary classes of cues that can provide information about their movement through the environment: visual and auditory cues (e.g., optic flow, optical expansion, Doppler shift) and somatosensory cues (e.g., effort, proprioceptive feedback). An important research question is the relative weighting of these cues for perceiving the rate of translational movement in a virtual environment.
Pages: 132. Published: 2009-09-30.
Citations: 1
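The cue-weighting question posed in this abstract is often formalized as reliability-weighted linear cue combination, in which each cue's estimate is weighted by its inverse variance. The Python sketch below illustrates that generic model only; it is not the paper's analysis, and all numbers in it are invented.

```python
# Hypothetical linear cue-combination sketch (not the paper's model):
# perceived speed is a weighted average of the speed signaled by each cue,
# with weights inversely proportional to each cue's variance.

def combine_cues(v_visual, var_visual, v_somato, var_somato):
    """Reliability-weighted average of two speed estimates."""
    w_visual = (1 / var_visual) / (1 / var_visual + 1 / var_somato)
    return w_visual * v_visual + (1 - w_visual) * v_somato

# Example: optic flow signals 5.0 m/s (low noise), effort signals 3.0 m/s (high noise).
print(combine_cues(5.0, 0.5, 3.0, 2.0))  # -> 4.6, dominated by the more reliable cue
```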
Using immersive virtual reality to evaluate pedestrian street crossing decisions at a roundabout
Haojie Wu, D. Ashmead, Bobby Bodenheimer
DOI: 10.1145/1620993.1621001
Abstract: In this paper, we use an immersive virtual environment to assess the separation, or "gap," between moving vehicles that people need before initiating a street crossing in a roundabout, where traffic can approach from several directions. From a pedestrian's viewpoint, crossing at a roundabout can represent a more complex decision than at a normal linear intersection. This paper presents the design of a system that simulates reasonable traffic patterns that a pedestrian might encounter in making a crossing decision at the exit lane of a roundabout, while controlling the gap duration in the stream of traffic. Using a maximum-likelihood procedure, we conducted a street crossing experiment in the virtual environment to evaluate the minimum gap during which pedestrians would initiate a successful crossing of the intersection. Our results are generally consistent with real-world data on pedestrian street crossings, and may provide insights into how to engineer the design of such roundabouts.
Pages: 35-40. Published: 2009-09-30.
Citations: 21
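The abstract names a maximum-likelihood procedure but not its details. As a hedged illustration, the sketch below fits a logistic psychometric function to hypothetical gap-acceptance data and reads off its midpoint as the minimum acceptable gap; the function form, starting values, and data are all assumptions, not the paper's method.

```python
# Minimal sketch of estimating a gap-acceptance threshold by maximum likelihood.
# All data here are invented; the paper's exact procedure is not reproduced.
import numpy as np
from scipy.optimize import minimize

gaps = np.array([1.5, 2.0, 2.5, 3.0, 3.5, 4.0])    # gap durations (s), hypothetical
crossed = np.array([0, 0, 1, 0, 1, 1])             # 1 = pedestrian initiated a crossing

def neg_log_likelihood(params):
    mu, sigma = params                              # threshold and slope of the curve
    p = 1.0 / (1.0 + np.exp(-(gaps - mu) / sigma))  # logistic psychometric function
    p = np.clip(p, 1e-9, 1 - 1e-9)                  # avoid log(0)
    return -np.sum(crossed * np.log(p) + (1 - crossed) * np.log(1 - p))

fit = minimize(neg_log_likelihood, x0=[2.5, 0.5], method="Nelder-Mead")
print("estimated minimum acceptable gap (s):", fit.x[0])
```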
The effects of continued exposure to medium field augmented and virtual reality on the perception of egocentric depth
Adam Jones, J. Swan, Gurjot Singh, J. Franck, S. Ellis
DOI: 10.1145/1620993.1621032
Abstract: Research in the perception of depth in augmented and virtual reality has reported a consistent underestimation of egocentric depth with regard to stationary objects located along the ground plane. However, there has been a rather large disparity in the degree of underestimation reported from study to study, with some studies reporting as much as a 68% underestimation of egocentric depth while others report as little as 6% [Jones et al. 2008]. The current study investigates the judgment of egocentric distance in real, augmented, and virtual environments and finds that subjects' judgments improve in accuracy as exposure continues in the absence of explicit feedback.
Pages: 138. Published: 2009-09-30.
Citations: 8
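To make the cited percentages concrete: a 68% underestimation means a judged distance of roughly a third of the true distance. A tiny illustrative computation follows; the 10 m target distance is invented.

```python
# How the reported underestimation percentages translate into judged distances
# (illustrative arithmetic only; the target distance is invented).
actual = 10.0                            # metres to the target
for underestimation in (0.68, 0.06):     # the 68% and 6% figures cited above
    judged = actual * (1 - underestimation)
    print(f"{underestimation:.0%} underestimation -> judged at about {judged:.1f} m")
```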
Evaluation of Glyph-based Multivariate Scalar Volume Visualization Techniques
David Feng, Yueh Lee, Lester Kwock, Russell M. Taylor
DOI: 10.1145/1620993.1621006
Abstract: We present a user study quantifying the effectiveness of Scaled Data-Driven Spheres (SDDS), a multivariate three-dimensional data set visualization technique. The user study compares SDDS, which uses separate sets of colored sphere glyphs to depict variable values, to superquadric glyphs, an alternative technique that maps all variable values to a single glyph. User study participants performed tasks designed to measure their ability to estimate values of particular variables and identify relationships among variables. Results from the study show that users were significantly more accurate and faster for both tasks under the SDDS condition.
Pages: 61-68. Published: 2009-01-01.
Citations: 18
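The core idea described above, separate sets of sphere glyphs with data values driving sphere size, can be made concrete with a small sketch. Everything in it, from the normalization to the radius mapping and colors, is an assumed illustration rather than the paper's implementation, and no renderer is shown.

```python
# Hypothetical sketch of the scaled-sphere idea behind SDDS: each variable gets
# its own set of sphere glyphs over the sample points, with the data value
# driving radius and a per-variable color keeping the sets distinguishable.
import numpy as np

def sphere_glyphs(positions, values, base_radius, color):
    """Return one (position, radius, color) tuple per sample point."""
    span = values.max() - values.min() + 1e-9
    scale = (values - values.min()) / span          # normalize values to [0, 1]
    return [(p, base_radius * (0.2 + 0.8 * s), color)
            for p, s in zip(positions, scale)]

rng = np.random.default_rng(0)
pts = rng.random((100, 3))                          # sample locations in the volume
glyphs = (sphere_glyphs(pts, rng.random(100), 0.05, "red")      # first variable
          + sphere_glyphs(pts, rng.random(100), 0.05, "blue"))  # second variable
```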
Brightness of the glare illusion
Akiko Yoshida, M. Ihrke, Rafał K. Mantiuk, H. Seidel
DOI: 10.1145/1394281.1394297
Abstract: The glare illusion is commonly used in CG rendering, especially in game engines, to achieve a higher brightness than the maximum luminance of a display. In this work, we measure the perceived luminance of the glare illusion in a psychophysical experiment. To evoke the illusion, an image is convolved with either a point spread function (PSF) of the eye or a Gaussian kernel. It is found that 1) the Gaussian kernel evokes an illusion of the same or higher strength than that produced by the PSF while being computationally much less expensive, 2) the glare illusion can raise the perceived luminance by 20-35%, and 3) some convolution kernels can produce undesirable Mach-band effects and thereby reduce the brightness boost of the glare illusion. The reported results have practical implications for glare rendering in computer graphics.
Pages: 83-90. Published: 2008-08-09.
Citations: 31
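A minimal sketch of the Gaussian-kernel variant discussed above, in the spirit of a standard bloom pass: isolate bright pixels, blur them with a Gaussian, and add the result back. The threshold, sigma, and strength values are invented, and scipy's gaussian_filter stands in for whatever kernel implementation a renderer would actually use.

```python
# Gaussian-kernel glare (bloom) sketch: bright regions are blurred and added
# back on top of the image, which is what makes them appear brighter than the
# display's physical maximum luminance. Parameter values are invented.
import numpy as np
from scipy.ndimage import gaussian_filter

def add_glare(luminance, threshold=0.8, sigma=8.0, strength=0.5):
    """luminance: 2-D array in [0, 1]; returns the image with a bloom layer."""
    bright = np.where(luminance > threshold, luminance, 0.0)  # isolate highlights
    bloom = gaussian_filter(bright, sigma=sigma)              # the Gaussian kernel pass
    return np.clip(luminance + strength * bloom, 0.0, 1.0)
```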
Eye-tracking dynamic scenes with humans and animals
Ljiljana Skrba, Ian O'Connell, C. O'Sullivan
DOI: 10.1145/1394281.1394325
Abstract: In our research, we are interested in simulating realistic quadrupeds [2008]. Previous eye-tracking results have shown that faces are particularly salient for static images of animals and humans [2005; 2004]. To explore whether similar eye-movement patterns are found for dynamic scenes depicting animals, we displayed multiple 4-second (56-frame) grey-scale video clips of farm animals (goat, horse, sheep) walking and trotting. Using an EyeLink II eye tracker, we recorded the eye movements of 7 participants who were instructed to view the experiments with a view to subsequently answering questions about the movements. As it has been shown that human and animal motions activate different areas of the brain in children [2003], we also showed the participants the same number of videos showing humans walking and running. Figure 2 shows several frames of three of the video clips, with the eye fixations of one participant overlaid. This depicts a very typical eye-movement pattern found in most of the videos, in that participants first looked at the head of the animal, then looked along the torso, finishing at the hips.
Pages: 199. Published: 2008-08-09.
Citations: 1
Effects of horizontal field-of-view restriction on manoeuvring performance through complex structured environments
Sander E. M. Jansen, A. Toet, N. Delleman
DOI: 10.1145/1394281.1394318
Abstract: Field-of-view (FOV) restrictions are known to affect human behaviour and to degrade performance for a range of different tasks. A proposed cause for this performance impairment is the predominant activation of the ventral cortical stream as compared to the dorsal stream. This may compromise the ability to control heading as well as degrade the processing of spatial information [Patterson et al. 2006]. Furthermore, the peripheral visual field is important in maintaining postural equilibrium [Turano et al. 1993]. These are all significant factors when manoeuvring through complex structured environments. We discuss here two experiments investigating the influence of horizontal FOV restriction on manoeuvring performance through real-world structured environments. The results can help determine requirements for the selection and development of FOV-limiting devices such as head-mounted displays (HMDs).
Pages: 189. Published: 2008-08-09.
Citations: 3
Sensitivity to scene motion for phases of head yaws
J. Jerald, Tabitha C. Peck, Frank Steinicke, M. Whitton
DOI: 10.1145/1394281.1394310
Abstract: In order to better understand how scene motion is perceived in immersive virtual environments and to provide guidelines for designing more useable systems, we measured sensitivity to scene motion for different phases of quasi-sinusoidal head yaw motions. We measured and compared scene-velocity thresholds for nine subjects across three conditions: visible With head rotation (W), where the scene is presented during the center part of sinusoidal head yaws and the scene moves in the same direction the head is rotating; visible Against head rotation (A), where the scene is presented during the center part of sinusoidal head yaws and the scene moves in the opposite direction the head is rotating; and visible at the Edge of head rotation (E), where the scene is presented at the extreme of sinusoidal head yaws and the scene moves during the time that head direction changes.
The W condition had a significantly higher threshold (decreased sensitivity) than both the E and A conditions. The median threshold for the W condition was 2.1 times the A condition and 1.5 times the E condition. We did not find a significant difference between the E and A conditions, although there was a trend for the A thresholds to be less than the E thresholds. An equivalence test showed the A and E thresholds to be statistically equivalent.
Our results suggest that the phase of the user's head yaw should be taken into account when inserting additional scene motion into immersive virtual environments if one does not want users to perceive that motion. In particular, there is much more latitude for artificially and imperceptibly rotating a scene, as in Razzaque's redirected walking technique, in the same direction as head yaw than against the direction of yaw.
The implication for maximum end-to-end latency in a head-mounted display is that users are less likely to notice latency when beginning a head yaw (when the scene moves with the head) than when slowing down a head yaw (when the scene moves against the head) or when changing head direction (when the head is near still and scene motion due to latency is maximized).
Pages: 155-162. Published: 2008-08-09.
Citations: 66
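The design guideline in the last two paragraphs, that extra scene rotation is least noticeable while the head is turning in the same direction, suggests a simple gating rule. The sketch below is an assumed illustration of such a rule; the gain, dead band, and function shape are not from the paper.

```python
# Illustrative gating rule for redirected-walking-style scene rotation: inject
# extra scene yaw only while the head is rotating, and with the same sign as
# the head's motion, where the measured thresholds leave the most headroom.
# Gain and dead-band values are invented.

def redirected_scene_yaw(head_yaw_velocity, gain=0.1, dead_band=5.0):
    """Return extra scene rotation (deg/s) given head yaw velocity (deg/s)."""
    if abs(head_yaw_velocity) < dead_band:
        return 0.0                          # head nearly still: sensitivity is highest
    return gain * head_yaw_velocity         # same sign as the head's rotation
```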
Fixation-identification in dynamic scenes: comparing an automated algorithm to manual coding
S. Munn, Leanne Stefano, J. Pelz
DOI: 10.1145/1394281.1394287
Abstract: Video-based eye trackers produce an output video showing where a subject is looking, the subject's point-of-regard (POR), for each frame of a video of the scene. Fixation-identification algorithms simplify the long list of POR data into a more manageable set of data, especially for further analysis, by grouping PORs into fixations. Most current fixation-identification algorithms assume that the POR data are defined in static two-dimensional scene images and use only these raw POR data to identify fixations. The applicability of these algorithms to gaze data in dynamic scene videos is largely unexplored. We implemented a simple velocity-based, duration-sensitive fixation-identification algorithm and compared its performance to results obtained by three experienced users manually coding the eye-tracking data displayed within the scene video, such that these manual coders had knowledge of the scene motion. We performed this comparison for eye-tracking data collected during two tasks involving different types of scene motion: a subject walking around a building for about 100 seconds (Task 1) and a seated subject viewing a computer animation approximately 90 seconds long (Task 2). It took our manual coders on average 75 minutes (stdev = 28) and 80 minutes (17) to code results from the first and second tasks, respectively. The automatic fixation-identification algorithm, implemented in MATLAB and run on an Apple 2.16 GHz MacBook, produced results in 0.26 seconds for Task 1 and 0.21 seconds for Task 2. For the first task (walking), the average percent difference among the three human manual coders was 9% (3.5), and the average percent difference between the automatically generated results and the three coders was 11% (2.0). For the second task (animation), the average percent difference among the three human coders was 4% (0.75), and the average percent difference between the automatically generated results and the three coders was 5% (0.9).
Pages: 33-42. Published: 2008-08-09.
Citations: 62
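The paper's algorithm was implemented in MATLAB; the Python sketch below shows the general shape of a velocity-based, duration-sensitive fixation identifier of the kind described. The thresholds are invented, and none of the paper's handling of scene motion is reproduced.

```python
# Velocity-threshold fixation identification with a minimum-duration check:
# samples whose point-of-regard speed falls below a threshold are grouped into
# runs, and runs shorter than the minimum duration are discarded.
import numpy as np

def identify_fixations(x, y, t, vel_thresh=30.0, min_duration=0.1):
    """x, y in degrees, t in seconds. Returns (start, end) times of fixations."""
    vx, vy = np.gradient(x, t), np.gradient(y, t)
    speed = np.hypot(vx, vy)                         # POR speed, deg/s
    slow = speed < vel_thresh                        # candidate fixation samples
    fixations, start = [], None
    for i, is_slow in enumerate(slow):
        if is_slow and start is None:
            start = i                                # a run of slow samples begins
        elif not is_slow and start is not None:
            if t[i - 1] - t[start] >= min_duration:  # duration sensitivity
                fixations.append((t[start], t[i - 1]))
            start = None
    if start is not None and t[-1] - t[start] >= min_duration:
        fixations.append((t[start], t[-1]))          # run extends to the last sample
    return fixations
```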
How do bicyclists intercept moving gaps in a virtual environment?
Benjamin Chihak, Sabarish V. Babu, Timofey Grechkin, Christine J. Ziemer, J. Cremer, J. Kearney, J. Plumert
DOI: 10.1145/1394281.1394317
Abstract: Coordinating one's actions with the movements of other objects in the environment is important for both interception and avoidance tasks. Recent experiments show that performance in some interception tasks is well explained by a motion control strategy based on adjusting speed to maintain a constant bearing angle (CBA) between an individual's direction of motion and the object to be intercepted [Lenoir et al. 2002]. When the object and observer travel on intersecting, linear trajectories and the object travels at constant speed, an observer employing the CBA strategy will move at constant speed.
Pages: 188. Published: 2008-08-09.
Citations: 0
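The CBA strategy described here can be sketched as a controller that nudges speed whenever the bearing angle drifts from its initial value. This is an invented illustration of the general idea, not the paper's model of rider behaviour; the gain is arbitrary and the sign of the correction depends on the crossing geometry.

```python
# Constant-bearing-angle (CBA) interception sketch: hold the angle between the
# rider's heading and the line of sight to the target constant by adjusting
# speed. Controller form and gain are invented.
import math

def bearing_angle(rider_pos, rider_heading, target_pos):
    """Angle (radians) between the rider's heading and the line to the target."""
    dx, dy = target_pos[0] - rider_pos[0], target_pos[1] - rider_pos[1]
    return math.atan2(dy, dx) - rider_heading

def cba_speed_update(speed, angle_now, angle_ref, k=2.0):
    """Nudge speed to drive the bearing angle back toward its initial value."""
    return max(0.0, speed + k * (angle_now - angle_ref))
```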