Proceedings of the ACM Symposium on Applied Perception: Latest Articles

Psychoacoustic characterization of propagation effects in virtual environments
Proceedings of the ACM Symposium on Applied Perception · Pub Date: 2016-07-22 · DOI: 10.1145/2931002.2963134
Atul Rungta, Sarah Rust, Nicolás Morales, R. Klatzky, M. Lin, Dinesh Manocha
Citations: 2
Looking at faces: autonomous perspective invariant facial gaze analysis
Proceedings of the ACM Symposium on Applied Perception · Pub Date: 2016-07-22 · DOI: 10.1145/2931002.2931005
Justin K. Bennett, S. Sridharan, Brendan David-John, Reynold J. Bailey
Abstract: Eye-tracking provides a mechanism for researchers to monitor where subjects deploy their visual attention. Eye-tracking has been used to gain insights into how humans scrutinize faces; however, the majority of these studies were conducted using desktop-mounted eye-trackers, where the subject sits and views a screen during the experiment. The stimuli in these experiments are typically photographs or videos of human faces. In this paper we present a novel approach using head-mounted eye-trackers, which allows for automatic generation of gaze statistics for tasks performed in real-world environments. We use a trained hierarchy of Haar cascade classifiers to automatically detect and segment faces in the eye-tracker's scene camera video. We can then determine whether fixations fall within the bounds of the face or other possible regions of interest and report relevant gaze statistics. Our method is easily adaptable to any feature-trained cascade to allow for rapid object detection and tracking. We compare our results with previous research on the perception of faces in social environments. We also explore correlations between gaze and confidence levels measured during a mock interview experiment.
Citations: 4
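The pipeline above detects faces with Haar cascades and then checks whether fixations land inside them. A minimal sketch of the fixation-to-region step, not the authors' code: function names and the `(x, y, w, h)` box format are assumptions, with per-frame boxes supplied by a detector such as OpenCV's `cv2.CascadeClassifier`.

```python
def fixation_on_face(fixation, face_boxes):
    """Return True if the (x, y) fixation falls inside any face box."""
    fx, fy = fixation
    return any(x <= fx < x + w and y <= fy < y + h
               for (x, y, w, h) in face_boxes)

def face_gaze_ratio(fixations_per_frame, boxes_per_frame):
    """Fraction of fixations across all frames that land on a detected face."""
    hits = total = 0
    for fixations, boxes in zip(fixations_per_frame, boxes_per_frame):
        for fix in fixations:
            total += 1
            hits += fixation_on_face(fix, boxes)  # bool counts as 0/1
    return hits / total if total else 0.0
```

The same containment test works for any feature-trained cascade, which is what makes the approach easy to retarget to other regions of interest.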
How experts' mental model affects 3D image segmentation
Proceedings of the ACM Symposium on Applied Perception · Pub Date: 2016-07-22 · DOI: 10.1145/2931002.2948718
Anahita Sanandaji, C. Grimm, Ruth West
Abstract: 3D image segmentation is a fundamental process in many scientific and medical applications. Automatic algorithms do exist, but there are many use cases where they fail; the gold standard is still manual segmentation or review. Unfortunately, existing 3D segmentation tools do not currently take into account human mental models, low-level perception actions, and higher-level cognitive tasks. Our goal is to improve the quality and efficiency of manual segmentation by analyzing the process in terms of human mental models and low-level perceptual tasks. Preliminary results from our in-depth field studies suggest that, compared to novices, experts have a stronger mental model of the 3D structures they segment. To validate this assumption, we introduce a novel test instrument to explore experts' mental models in the context of 3D image segmentation. We use this test instrument to measure individual differences in various spatial segmentation and visualization tasks. The tasks involve identifying valid 2D contours, slicing planes, and 3D shapes.
Citations: 2
Scan path and movie trailers for implicit annotation of videos
Proceedings of the ACM Symposium on Applied Perception · Pub Date: 2016-07-22 · DOI: 10.1145/2931002.2948723
Pallavi Raiturkar, Andrew Lee, Eakta Jain
Abstract: Affective annotation of videos is important for video understanding, ranking, retrieval, and summarization. We present an approach that uses excerpts that appeared in the official trailers of movies as training data. Total scan path is computed as a metric for emotional arousal, based on previous eye-tracking research. Arousal level on trailer excerpts is modeled as a Gaussian distribution, and signed distance from the mean of this distribution is used to separate out exemplars of high and low emotional arousal in movies.
Citations: 0
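The labeling step above, a Gaussian model of the arousal metric plus signed distance from its mean, can be sketched as a z-score threshold. This is an illustration under assumptions, not the paper's code: the threshold value and the mapping of large positive distance to "high" are placeholders, since the abstract does not specify the sign convention for total scan path.

```python
import statistics

def arousal_labels(scan_path_lengths, threshold=1.0):
    """Label each excerpt by its signed distance (in standard deviations)
    from the mean of the arousal metric over all trailer excerpts."""
    mu = statistics.mean(scan_path_lengths)
    sigma = statistics.stdev(scan_path_lengths)
    z = [(s - mu) / sigma for s in scan_path_lengths]  # signed distance
    return ["high" if d >= threshold else "low" if d <= -threshold else "mid"
            for d in z]
```

Excerpts in the tails of the fitted distribution become the high- and low-arousal exemplars; those near the mean are left unlabeled.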
Binocular eye tracking calibration during a virtual ball catching task using head mounted display
Proceedings of the ACM Symposium on Applied Perception · Pub Date: 2016-07-22 · DOI: 10.1145/2931002.2931020
Kamran Binaee, Gabriel J. Diaz, J. Pelz, F. Phillips
Abstract: When tracking the eye movements of an active observer, the quality of the tracking data is continuously affected by physical shifts of the eye-tracker on the observer's head. This is especially true for eye-trackers integrated within virtual-reality (VR) helmets, whose configurations modify the weight and inertia distribution well beyond that of the eye-tracker alone. Despite the continuous nature of this degradation, it is common practice for calibration procedures to establish eye-to-screen mappings that are fixed over the time course of an experiment. Even with periodic recalibration, data quality can quickly suffer due to head motion. Here, we present a novel post-hoc calibration method that allows for continuous temporal interpolation between discrete calibration events. Analysis focuses on the comparison of fixed vs. continuous calibration schemes and their effects upon the quality of binocular gaze data to virtual targets, especially with respect to depth. Calibration results were applied to binocular eye tracking data from a VR ball catching task and improved tracking accuracy, especially in the dynamic case.
Citations: 13
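The core idea above, replacing one fixed eye-to-screen mapping with a correction that is interpolated in time between discrete calibration events, can be sketched as follows. This is a simplified linear-interpolation stand-in under assumed data shapes (timestamped 2D offsets), not the paper's actual post-hoc method.

```python
def interpolate_calibration(t, events):
    """Interpolate a gaze-correction offset at time t between discrete
    calibration events [(time, (dx, dy)), ...], sorted by time.

    Before the first (after the last) event, the nearest correction is used;
    between events, corrections are blended linearly.
    """
    if t <= events[0][0]:
        return events[0][1]
    if t >= events[-1][0]:
        return events[-1][1]
    for (t0, c0), (t1, c1) in zip(events, events[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)  # blend weight in [0, 1]
            return tuple((1 - a) * u + a * v for u, v in zip(c0, c1))
```

A fixed-calibration scheme corresponds to always returning the offset from the most recent event; the continuous scheme degrades more gracefully as the helmet shifts between recalibrations.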
Predicting destination using head orientation and gaze direction during locomotion in VR
Proceedings of the ACM Symposium on Applied Perception · Pub Date: 2016-07-22 · DOI: 10.1145/2931002.2931010
Jonathan Gandrud, V. Interrante
Abstract: This paper reports preliminary investigations into the extent to which future directional intention might be reliably inferred from head pose and eye gaze during locomotion. Such findings could help inform the more effective implementation of realistic detailed animation for dynamic virtual agents in interactive first-person crowd simulations in VR, as well as the design of more efficient predictive controllers for redirected walking. In three different studies, with a total of 19 participants, we placed people at the base of a T-shaped virtual hallway environment and collected head position, head orientation, and gaze direction data as they set out to perform a hidden target search task across two rooms situated at right angles to the end of the hallway. Subjects wore an nVisor ST50 HMD equipped with an Arrington Research ViewPoint eye tracker; positional data were tracked using a 12-camera Vicon MX40 motion capture system. The hidden target search task was used to blind participants to the actual focus of our study, which was to gain insight into how effectively head position, head orientation, and gaze direction data might predict people's eventual choice of which room to search first. Our results suggest that eye gaze data has the potential to provide additional predictive value over the use of 6DOF head tracked data alone, despite the relatively limited field of view of the display we used.
Citations: 22
Action coordination with agents: crossing roads with a computer-generated character in a virtual environment
Proceedings of the ACM Symposium on Applied Perception · Pub Date: 2016-07-22 · DOI: 10.1145/2931002.2931003
Yuanyuan Jiang, E. O'Neal, Pooya Rahimian, Junghum Paul Yon, J. Plumert, J. Kearney
Abstract: We investigated how people jointly coordinate their decisions and actions with a computer-generated character (agent) in a large-screen virtual environment. The task for participants was to physically cross a steady stream of traffic on a virtual road without getting hit by a car. Participants performed this task with another person or with a computer-generated character (Fig. 1). The character was programmed to be either safe (taking only large gaps) or risky (also taking relatively small gaps). We found that participants behaved in many respects similarly with real and virtual partners. They maintained similar distances between themselves and their partner, they often crossed the same gap with their partner, and they synchronized their crossing with their partner. We also found that the riskiness of the character influenced the gap choices of participants. This study demonstrates the potential for using large-screen virtual environments to study how people interact with CG characters when performing whole-body joint actions.
Citations: 17
Measuring viewers' heart rate response to environment conservation videos
Proceedings of the ACM Symposium on Applied Perception · Pub Date: 2016-07-22 · DOI: 10.1145/2931002.2948724
Pallavi Raiturkar, S. Jacobson, Beida Chen, Kartik Chaturvedi, Isabella Cuba, Andrew Lee, Melissa Franklin, Julian Tolentino, N. Haynes, Rebecca Soodeen, Eakta Jain
Abstract: Digital media, particularly pictures and videos, have long been used to influence a person's cognition as well as her consequent actions. Previous work has shown that physiological indices such as heart rate variability can be used to measure emotional arousal. We measure heart rate variability as participants watch environment conservation videos. We compare the heart rate response against the pleasantness rating recorded during an independent Internet survey.
Citations: 1
Learning a human-perceived softness measure of virtual 3D objects
Proceedings of the ACM Symposium on Applied Perception · Pub Date: 2016-07-22 · DOI: 10.1145/2931002.2931019
Manfred Lau, K. Dev, Julie Dorsey, H. Rushmeier
Abstract: We introduce the problem of computing a human-perceived softness measure for virtual 3D objects. As the virtual objects do not exist in the real world, we do not directly consider their physical properties but instead compute the human-perceived softness of the geometric shapes. We collect crowdsourced data where humans rank their perception of the softness of vertex pairs on virtual 3D models. We then compute shape descriptors and use a learning-to-rank approach to learn a softness measure mapping any vertex to a softness value. Finally, we demonstrate our framework with a variety of 3D shapes.
Citations: 5
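The pipeline above turns crowdsourced pairwise softness judgments into a per-vertex score. As a deliberately crude stand-in for the paper's learning-to-rank model over shape descriptors, one can aggregate the raw comparisons into a win-rate score per vertex; the function name and data format below are assumptions for illustration.

```python
from collections import defaultdict

def softness_from_pairs(pairs):
    """Turn crowdsourced comparisons [(softer_vertex, harder_vertex), ...]
    into a per-vertex softness score in [0, 1] via win rate."""
    wins, total = defaultdict(int), defaultdict(int)
    for softer, harder in pairs:
        wins[softer] += 1
        total[softer] += 1
        total[harder] += 1
    return {v: wins[v] / total[v] for v in total}
```

The paper's actual contribution goes further: by regressing such scores against shape descriptors, the learned measure generalizes to vertices (and meshes) that never appeared in a crowdsourced pair.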
User sensitivity to speed- and height-mismatch in VR
Proceedings of the ACM Symposium on Applied Perception · Pub Date: 2016-07-22 · DOI: 10.1145/2931002.2947701
Veronica U. Weser, Joel A. Hesch, Johnny Lee, D. Proffitt
Abstract: Facebook's purchase of Oculus VR in 2014 ushered in a new era of consumer virtual reality head-mounted displays (HMDs). Converging technological advancements in small, high-resolution displays and motion-detection devices propelled VR beyond the purview of high-tech research laboratories and into the mainstream. However, technological hurdles still remain. As more consumer-grade products develop, user comfort and experience will be of the utmost importance. One of the biggest issues for HMDs that lack external tracking is drift in the user position and rotation sensors. Drift can cause motion sickness and make stationary items in the virtual environment appear to shift in position. For developers who seek to design VR experiences that are rooted in real environments, drift can create large errors in positional tracking if left uncorrected over time. Although much of the current VR hardware makes use of external tracking devices to mitigate positional and rotational drift, the creation of head-mounted displays that can operate without external tracking devices would make VR hardware more portable and flexible, and may therefore be a goal for future development. Until technology advances sufficiently to completely overcome the hardware problems that cause drift, software solutions are a viable option to correct for it. It may be possible to speed up and slow down users as they move through the virtual world in order to bring their tracked position back into alignment with their position in the real world. If speed changes can be implemented without users noticing the alteration, this may offer a seamless solution that does not interfere with the VR experience.
In Experiments 1 and 2, we artificially introduced speed changes that made users move through the VR environment either faster or slower than their actual real-world speed. Users were tasked with correctly identifying when they were moving at the correct true-to-life speed when compared to an altered virtual movement speed. Fore-and-aft movement and side-to-side movement, initiated by seated users bending at the waist, were tested separately in two experiments. In Experiment 3, we presented alternating views of the virtual scene from different user heights; users had to correctly distinguish the view of the virtual scene presented at the correct height from incorrect shorter and taller heights. In Experiments 1 and 2, we found that on average speed increases and decreases up to approximately 25% went unnoticed by users, suggesting that there is flexibility for programs to add speed changes imperceptible to users to correct for drift. In contrast, Experiment 3 demonstrates that on average users were aware of height changes after virtual heights were altered by just 5 cm. These thresholds can be used by VR developers to compensate for tracking mismatches between real and virtual positions of users of virtual environments, and also by engineers to benchmark new …
Citations: 4
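The drift-correction idea in the entry above, scale movement speed just enough to close the positional error while staying below the reported roughly 25% detection threshold, can be sketched as a clamped gain. This is an illustrative sketch, not the authors' implementation; the function and its parameters are assumptions.

```python
def drift_correction_gain(tracked_error, horizon, max_gain=0.25):
    """Choose a fractional speed change that would close `tracked_error`
    metres of positional drift over `horizon` metres of upcoming travel,
    clamped to the ~25% change Weser et al. report as going unnoticed."""
    desired = tracked_error / horizon
    return max(-max_gain, min(max_gain, desired))
```

For example, 1 m of drift corrected over 10 m of walking needs only a 10% speed change, comfortably under threshold; larger errors are clamped and must be amortized over a longer horizon.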