Proceedings APGV: Symposium on Applied Perception in Graphics and Visualization. Latest Publications

The influence of avatar (self and character) animations on distance estimation, object interaction and locomotion in immersive virtual environments
Erin A. McManus, Bobby Bodenheimer, S. Streuber, S. Rosa, H. Bülthoff, B. Mohler
DOI: 10.1145/2077451.2077458 · Pages: 37-44 · Published: 2011-08-27
Abstract: Humans have been shown to perceive and perform actions differently in immersive virtual environments (VEs) as compared to the real world. Immersive VEs often lack the presence of virtual characters; users are rarely presented with a representation of their own body and have little to no experience with other human avatars/characters. However, virtual characters and avatars are more often being used in immersive VEs. In a two-phase experiment, we investigated the impact of seeing an animated character or a self-avatar in a head-mounted display VE on task performance. In particular, we examined performance on three different behavioral tasks in the VE. In a learning phase, participants either saw a character animation or an animation of a cone. In the task performance phase, we varied whether participants saw a co-located animated self-avatar. Participants performed a distance estimation, an object interaction and a stepping stone locomotion task within the VE. We find no impact of a character animation or a self-avatar on distance estimates. We find that both the animation and the self-avatar influenced task performance on the tasks which involved interaction with elements in the environment: the object interaction and the stepping stone tasks. Overall, the participants performed the tasks faster and more accurately when they either had a self-avatar or saw a character animation. The results suggest that including character animations or self-avatars before or during task execution is beneficial to performance on some common interaction tasks within the VE. Finally, we see that in all cases (even without seeing a character or self-avatar animation) participants learned to perform the tasks more quickly and/or more accurately over time.
Citations: 71
Perceptually-based compensation of light pollution in display systems
J. Baar, Steven Poulakos, Wojciech Jarosz, D. Nowrouzezahrai, Rasmus Tamstorf, M. Gross
DOI: 10.1145/2077451.2077460 · Pages: 45-52 · Published: 2011-08-27
Abstract: This paper addresses the problem of unintended light contributions due to physical properties of display systems. An example of such an unintended contribution is crosstalk in stereoscopic 3D display systems, often referred to as ghosting. Ghosting results in a reduction of visual quality and may lead to an uncomfortable viewing experience. The latter is due to conflicting (depth) edge cues, which can hinder the human visual system's (HVS) proper fusion of stereo images (stereopsis). We propose an automatic, perceptually-based computational compensation framework, which formulates pollution elimination as a minimization problem. Our method aims to distribute the error introduced by the pollution in a perceptually optimal manner. As a consequence, ghost edges are smoothed locally, resulting in a more comfortable stereo viewing experience. We show how to make the computation tractable by exploiting the structure of the resulting problem, and also propose a perceptually-based pollution prediction. We show that our general framework is applicable to other light pollution problems, such as descattering.
Citations: 9
Gaze guidance reduces the number of vehicle-pedestrian collisions in a driving simulator
L. Pomârjanschi, M. Dorr, E. Barth
DOI: 10.1145/2077451.2077482 · Page: 119 · Published: 2011-08-27
Abstract: Driving and visual perception are tightly linked. Every moment, a multitude of visual stimuli compete for the driver's limited attentional resources. Despite modern safety measures, traffic accidents still remain a major source of fatalities. A large part of these casualties occur in accidents for which driver distraction was cited as the main cause [National Highway Traffic Safety Administration September 2010]. We propose to help drivers by building an augmented vision system that can guide eye movements towards regions which may constitute a source of danger. In a first study, we have already shown that largely unobtrusive gaze guidance techniques used in a driving simulator help drivers better distribute their attentional resources and drive more safely [Pomarjanschi et al. 2011]. Current experiments investigate the efficiency of more general cues that only signal the direction in which a critical event might occur. Results of these experiments will be reported at the conference.
Citations: 0
Gaze-contingent real-time video processing to study natural vision
M. Dorr, P. Bex
DOI: 10.1145/2077451.2077476 · Page: 113 · Published: 2011-08-27
Abstract: Most of our knowledge about visual performance has been obtained with simple, synthetic stimuli, such as narrowband gratings presented on homogeneous backgrounds, and under steady fixation. The visual input we encounter in the real world, however, is fundamentally different and comprises a very broad distribution of spatio-temporal frequencies, orientations, colours, and contrasts, and eye movements induce strong temporal transients on the retina several times per second.
Citations: 0
Gaze-contingent enhancements for a visual search and rescue task
James Mardell, M. Witkowski, R. Spence
DOI: 10.1145/2077451.2077472 · Page: 109 · Published: 2011-08-27
Abstract: An important task in many fields is the human visual inspection of an image. Those fields include quality control, medical diagnosis, surveillance and Wilderness Search and Rescue (WiSAR) [Goodrich et al. 2008]. The latter activity, triggered by an individual becoming lost, is the context within which this work proposes and evaluates a new approach to the task of human visual inspection.
Citations: 5
Perceiving human motion variety
M. Prazák, C. O'Sullivan
DOI: 10.1145/2077451.2077468 · Pages: 87-92 · Published: 2011-08-27
Abstract: In order to simulate plausible groups or crowds of virtual characters, it is important to ensure that the individuals in a crowd do not look, move, behave or sound identical to each other. Such obvious 'cloning' can be disconcerting and reduce the engagement of the viewer with an animated movie, virtual environment or game. In this paper, we focus in particular on the problem of motion cloning, i.e., where the motion from one person is used to animate more than one virtual character model. Using our database of motions captured from 83 actors (45M and 38F), we present an experimental framework for evaluating human motion, which allows both the static (e.g., skeletal structure) and dynamic aspects (e.g., walking style) of an animation to be controlled. This framework enables the creation of crowd scenarios using captured human motions, thereby generating simulations similar to those found in commercial games and movies, while allowing full control over the parameters that affect the perceived variety of the individual motions in a crowd. We use the framework to perform an experiment on the perception of characteristic walking motions in a crowd, and conclude that the minimum number of individual motions needed for a crowd to look varied could be as low as three. While the focus of this paper was on the dynamic aspects of animation, our framework is general enough to be used to explore a much wider range of factors that affect the perception of characteristic human motion.
Citations: 18
Egocentric distance perception in HMD-based virtual environments
Qiufeng Lin, Xianshi Xie, Aysu Erdemir, G. Narasimham, T. McNamara, J. Rieser, Bobby Bodenheimer
DOI: 10.1145/2077451.2077486 · Page: 123 · Published: 2011-08-27
Abstract: We conducted a followup experiment to the work of Lin et al. [2011]. The experimental protocol was the same as that of Experiment Four in Lin et al. [2011] except the viewing condition was binocular instead of monocular. In that work there was no distance underestimation, as has been widely reported elsewhere, and we were motivated in this experiment to see if stereoscopic effects in head-mounted displays (HMDs) accounted for this effect.
Citations: 2
Differentiating aggregate gaze distributions
Thomas Grindinger, A. Duchowski, P. Orero
DOI: 10.1145/2077451.2077473 · Page: 110 · Published: 2011-08-27
Abstract: A machine learning approach used to classify aggregate gaze distributions recorded by an eye tracker and visualized as heatmaps is demonstrated to successfully discriminate between free and task-driven exploration of video clips.
Citations: 0
Evaluation of video artifact perception using event-related potentials
Lea Lindemann, S. Wenger, M. Magnor
DOI: 10.1145/2077451.2077461 · Pages: 53-58 · Published: 2011-08-27
Abstract: When new computer graphics algorithms for image and video editing, rendering or compression are developed, the quality of the results has to be evaluated and compared. Since the produced media are usually to be presented to an audience, it is important to predict image and video quality as it would be perceived by a human observer. This can be done by applying some image quality metric or by expensive and time-consuming user studies. Typically, statistical image quality metrics do not correlate with quality as perceived by a human observer. More sophisticated HVS-inspired algorithms often do not generalize to arbitrary images. A drawback of user studies is that perceived image or video quality is filtered by a decision process, which, in turn, may be influenced by the performed task and chosen quality scale. To get an objective view of (subjectively) perceived image quality, electroencephalography can be used. In this paper we show that artifacts appearing in videos elicit a measurable brain response which can be analyzed using the event-related potentials technique. Since electroencephalography itself requires an elaborate procedure, we aim to find a minimal setup to reduce the time and number of participants needed to conduct a reliable study of image and video quality. As a first step, we demonstrate that the reaction to a video with or without an artifact can be identified by an off-the-shelf support vector machine, which is trained on a set of previously recorded responses, with a reliability of up to 80% from a single recorded electroencephalogram.
Citations: 33
Using eye-tracking to assess different image retargeting methods
Susana Castillo, Tilke Judd, D. Gutierrez
DOI: 10.1145/2077451.2077453 · Pages: 7-14 · Published: 2011-08-27
Abstract: Assessing media retargeting results is not a trivial issue. When resizing one image to a particular percentage of its original size, some content has to be removed, which may affect the image's original meaning and/or composition. We examine the impact of the retargeting process on human fixations, by gathering eye-tracking data for a representative benchmark of retargeted images. We compute their derived saliency maps as input to a set of computational image distance metrics. When analyzing the fixations, we found that even strong artifacts may go unnoticed for areas outside the original regions of interest. We also note that the most important alterations in semantics are due to content removal. Since using an eye tracker is not always a feasible option, we additionally show how an existing model of prediction of human fixations also works sufficiently well in a retargeting context.
Citations: 41