Proceedings of the ACM Symposium on Applied Perception: Latest Publications

Perception of drowsiness based on correlation with facial image features
Proceedings of the ACM Symposium on Applied Perception Pub Date: 2016-07-22 DOI: 10.1145/2931002.2947705
Yugo Sato, Takuya Kato, N. Nozawa, S. Morishima
{"title":"Perception of drowsiness based on correlation with facial image features","authors":"Yugo Sato, Takuya Kato, N. Nozawa, S. Morishima","doi":"10.1145/2931002.2947705","DOIUrl":"https://doi.org/10.1145/2931002.2947705","url":null,"abstract":"This paper presents a video-based method for detecting drowsiness. Generally, human beings can perceive their fatigue and drowsiness through looking at faces. The ability to perceive the fatigue and the drowsiness has been studied in many ways. The drowsiness detection method based on facial videos has been proposed [Nakamura et al. 2014]. In their method, a set of the facial features calculated with the Computer Vision techniques and the k-nearest neighbor algorithm are applied to classify drowsiness degree. However, the facial features that are ineffective against reproducing the perception of human beings with the machine learning method are not removed. This factor can decrease the detection accuracy.","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114833515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
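The classification step mentioned in the abstract above (per-frame facial features fed to a k-nearest-neighbor classifier, as in [Nakamura et al. 2014]) can be illustrated with a minimal sketch. The feature names, label scale, and scikit-learn pipeline below are assumptions for illustration, not the authors' implementation:

```python
# Minimal sketch: k-NN classification of drowsiness level from per-frame
# facial features (hypothetical features and labels; illustration only).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Each row: [eye_opening_ratio, blink_rate, mouth_opening_ratio] for one frame.
# Labels: assumed drowsiness degree 0-2 (alert / slightly drowsy / drowsy).
rng = np.random.default_rng(0)
X = rng.random((300, 3))            # placeholder feature vectors
y = rng.integers(0, 3, size=300)    # placeholder drowsiness labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```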
Animated versus static views of steady flow patterns
Proceedings of the ACM Symposium on Applied Perception Pub Date: 2016-07-22 DOI: 10.1145/2931002.2931012
C. Ware, Daniel Bolan, Ricky Miller, D. Rogers, J. Ahrens
{"title":"Animated versus static views of steady flow patterns","authors":"C. Ware, Daniel Bolan, Ricky Miller, D. Rogers, J. Ahrens","doi":"10.1145/2931002.2931012","DOIUrl":"https://doi.org/10.1145/2931002.2931012","url":null,"abstract":"Two experiments were conducted to test the hypothesis that animated representations of vector fields are more effective than common static representations even for steady flow. We compared four flow visualization methods: animated streamlets, animated orthogonal line segments (where short lines were elongated orthogonal to the flow direction but animated in the direction of flow), static equally spaced streamlines, and static arrow grids. The first experiment involved a pattern detection task in which the participant searched for an anomalous flow pattern in a field of similar patterns. The results showed that both the animation methods produced more accurate and faster responses. The second experiment involved mentally tracing an advection path from a central dot in the flow field and marking where the path would cross the boundary of a surrounding circle. For this task the animated streamlets resulted in better performance than the other methods, but the animated orthogonal particles resulted in the worst performance. We conclude with recommendations for the representation of steady flow patterns.","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115124303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
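The second task in the entry above asks viewers to mentally trace an advection path through a steady flow until it crosses a circular boundary. Computationally, such a path is just numerical integration of the vector field; the sketch below uses a hypothetical analytic field and forward-Euler steps, not the stimuli or analysis code from the paper:

```python
# Minimal sketch: advect a particle through a steady 2D vector field with
# forward-Euler steps until it exits a circular boundary (illustration only).
import numpy as np

def flow(p):
    """Steady 2D field: a gentle outward spiral (rotation plus radial drift)."""
    x, y = p
    return np.array([-y + 0.3 * x, x + 0.3 * y])

def advect(start, step=0.01, n_steps=1000, radius=1.0):
    """Integrate the path; stop when it leaves a circle of the given radius."""
    path = [np.asarray(start, dtype=float)]
    for _ in range(n_steps):
        p = path[-1] + step * flow(path[-1])
        path.append(p)
        if np.linalg.norm(p) >= radius:
            break  # boundary crossing, analogous to the marking task
    return np.array(path)

path = advect(start=(0.2, 0.0))
print("path samples:", len(path))
print("exit point:", path[-1])
```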
Evaluating human gaze patterns during grasping tasks: robot versus human hand
Proceedings of the ACM Symposium on Applied Perception Pub Date: 2016-07-22 DOI: 10.1145/2931002.2931007
Sai Krishna Allani, Brendan David-John, Javier Ruiz, Saurabh Dixit, Jackson Carter, C. Grimm, Ravi Balasubramanian
{"title":"Evaluating human gaze patterns during grasping tasks: robot versus human hand","authors":"Sai Krishna Allani, Brendan David-John, Javier Ruiz, Saurabh Dixit, Jackson Carter, C. Grimm, Ravi Balasubramanian","doi":"10.1145/2931002.2931007","DOIUrl":"https://doi.org/10.1145/2931002.2931007","url":null,"abstract":"Perception and gaze are an integral part of determining where and how to grasp an object. In this study we analyze how gaze patterns differ when participants are asked to manipulate a robotic hand to perform a grasping task when compared with using their own. We have three findings. First, while gaze patterns for the object are similar in both conditions, participants spent substantially more time gazing at the robotic hand then their own, particularly the wrist and finger positions. Second, We provide evidence that for complex objects (eg, a toy airplane) participants essentially treated the object as a collection of sub-objects. Third, we performed a follow-up study that shows that choosing camera angles that clearly display the features participants spend time gazing at are more effective for determining the effectiveness of a grasp from images. Our findings are relevant both for automated algorithms (where visual cues are important for analyzing objects for potential grasps) and for designing tele-operation interfaces (how best to present the visual data to the remote operator).","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127144412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Perception of lighting and shading for animated virtual characters
Proceedings of the ACM Symposium on Applied Perception Pub Date: 2016-07-22 DOI: 10.1145/2931002.2931015
Pisut Wisessing, J. Dingliana, R. Mcdonnell
{"title":"Perception of lighting and shading for animated virtual characters","authors":"Pisut Wisessing, J. Dingliana, R. Mcdonnell","doi":"10.1145/2931002.2931015","DOIUrl":"https://doi.org/10.1145/2931002.2931015","url":null,"abstract":"The design of lighting in Computer Graphics is directly derived from cinematography, and many digital artists follow the conventional wisdom on how lighting is set up to convey drama, appeal, or emotion. In this paper, we are interested in investigating the most commonly used lighting techniques to more formally determine their effect on our perception of animated virtual characters. Firstly, we commissioned a professional animator to create a sequence of dramatic emotional sentences for a typical CG cartoon character. Then, we rendered that character using a range of lighting directions, intensities, and shading techniques. Participants of our experiment rated the emotion, the intensity of the performance, and the appeal of the character. Our results provide new insights into how animated virtual characters are perceived, when viewed under different lighting conditions.","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131087709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
The effects of artificially reduced field of view and peripheral frame stimulation on distance judgments in HMDs
Proceedings of the ACM Symposium on Applied Perception Pub Date: 2016-07-22 DOI: 10.1145/2931002.2931013
Bochao Li, A. Nordman, James W. Walker, S. Kuhl
{"title":"The effects of artificially reduced field of view and peripheral frame stimulation on distance judgments in HMDs","authors":"Bochao Li, A. Nordman, James W. Walker, S. Kuhl","doi":"10.1145/2931002.2931013","DOIUrl":"https://doi.org/10.1145/2931002.2931013","url":null,"abstract":"Numerous studies have reported underestimated egocentric distances in virtual environments through head-mounted displays (HMDs). However, it has been found that distance judgments made through Oculus Rift HMDs are much less compressed, and their relatively high device field of view (FOV) may play an important role. Some studies showed that applying constant white light in viewers' peripheral vision improved their distance judgments through HMDs. In this study, we examine the effects of the device FOV and the peripheral vision by performing a blind walking experiment through an Oculus Rift DK2 HMD with three different conditions. For the BlackFrame condition, we rendered a rectangular black frame to reduce the device field of view of the DK2 HMD to match an NVIS nVisor ST60 HMD. In the WhiteFrame and GreyFrame conditions, we changed the frame color to solid white and middle grey. From the results, we found that the distance judgments made through the black frame were significantly underestimated relative to the WhiteFrame condition. However, no significant differences were observed between the WhiteFrame and GreyFrame conditions. This result provides evidence that the device FOV and peripheral light could influence distance judgments in HMDs, and the degree of influence might not change proportionally with respect to the peripheral light brightness.","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127238965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
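Reducing an HMD's effective FOV with a rendered border frame, as in the BlackFrame condition above, amounts to leaving only a central window that subtends the target angle. A minimal sketch of the geometry follows; the FOV numbers are illustrative assumptions, not measured specifications of the DK2 or the nVisor ST60:

```python
# Minimal sketch: how wide a centered window must be, as a fraction of the
# full rendered image, so the unmasked region subtends a smaller target FOV.
# FOV values below are illustrative assumptions, not device specifications.
import math

def visible_fraction(device_fov_deg, target_fov_deg):
    """Fraction of the image (per axis) left unmasked by the border frame."""
    return (math.tan(math.radians(target_fov_deg) / 2)
            / math.tan(math.radians(device_fov_deg) / 2))

frac = visible_fraction(device_fov_deg=100.0, target_fov_deg=60.0)  # assumed values
print(f"keep the central {frac:.2%} of the image per axis; "
      "fill the rest with the frame color")
```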
Analyzing gaze synchrony in cinema: a pilot study
Proceedings of the ACM Symposium on Applied Perception Pub Date: 2016-07-22 DOI: 10.1145/2931002.2947704
K. Breeden, P. Hanrahan
{"title":"Analyzing gaze synchrony in cinema: a pilot study","authors":"K. Breeden, P. Hanrahan","doi":"10.1145/2931002.2947704","DOIUrl":"https://doi.org/10.1145/2931002.2947704","url":null,"abstract":"Recent advances in personalized displays now allow for the delivery of high-fidelity content only to the most sensitive regions of the visual field, a process referred to as foveation [Guenter et al. 2012]. Because foveated systems require accurate knowledge of gaze location, attentional synchrony is particularly relevant: this is observed when multiple viewers attend to the same image region concurrently.","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116884916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
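Attentional synchrony, as used in the entry above, is often quantified by how tightly gaze points from multiple viewers cluster on each frame. The sketch below computes a simple per-frame dispersion (mean distance to the gaze centroid) as one such proxy; the metric and data layout are assumptions rather than the pilot study's actual analysis:

```python
# Minimal sketch: per-frame gaze dispersion across viewers as a rough proxy
# for attentional synchrony (lower dispersion = tighter synchrony).
# Data layout and metric are assumptions, not the study's actual analysis.
import numpy as np

def frame_dispersion(gaze_points):
    """gaze_points: (n_viewers, 2) array of normalized (x, y) gaze positions for one frame."""
    centroid = gaze_points.mean(axis=0)
    return float(np.linalg.norm(gaze_points - centroid, axis=1).mean())

# Example: five viewers looking at roughly the same screen region.
frame = np.array([[0.48, 0.52], [0.50, 0.50], [0.51, 0.49],
                  [0.47, 0.53], [0.52, 0.51]])
print("dispersion:", frame_dispersion(frame))
```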
An empirical evaluation of visuo-haptic feedback on physical reaching behaviors during 3D interaction in real and immersive virtual environments
Proceedings of the ACM Symposium on Applied Perception Pub Date: 2016-07-22 DOI: 10.1145/2931002.2963135
Elham Ebrahimi, Sabarish V. Babu, C. Pagano, S. Jörg
{"title":"An empirical evaluation of visuo-haptic feedback on physical reaching behaviors during 3D interaction in real and immersive virtual environments","authors":"Elham Ebrahimi, Sabarish V. Babu, C. Pagano, S. Jörg","doi":"10.1145/2931002.2963135","DOIUrl":"https://doi.org/10.1145/2931002.2963135","url":null,"abstract":"","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124099236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Decoupling light reflex from pupillary dilation to measure emotional arousal in videos
Proceedings of the ACM Symposium on Applied Perception Pub Date: 2016-07-22 DOI: 10.1145/2931002.2931009
Pallavi Raiturkar, A. Kleinsmith, A. Keil, Arunava Banerjee, Eakta Jain
{"title":"Decoupling light reflex from pupillary dilation to measure emotional arousal in videos","authors":"Pallavi Raiturkar, A. Kleinsmith, A. Keil, Arunava Banerjee, Eakta Jain","doi":"10.1145/2931002.2931009","DOIUrl":"https://doi.org/10.1145/2931002.2931009","url":null,"abstract":"Predicting the exciting portions of a video is a widely relevant problem because of applications such as video summarization, searching for similar videos, and recommending videos to users. Researchers have proposed the use of physiological indices such as pupillary dilation as a measure of emotional arousal. The key problem with using the pupil to measure emotional arousal is accounting for pupillary response to brightness changes. We propose a linear model of pupillary light reflex to predict the pupil diameter of a viewer based only on incident light intensity. The residual between the measured pupillary diameter and the model prediction is attributed to the emotional arousal corresponding to that scene. We evaluate the effectiveness of this method of factoring out pupillary light reflex for the particular application of video summarization. The residual is converted into an exciting-ness score for each frame of a video. We show results on a variety of videos, and compare against ground truth as reported by three independent coders.","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124396075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
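The core idea in the entry above, fitting pupil diameter as a linear function of incident light intensity and reading emotional arousal off the residual, can be sketched with ordinary least squares. The synthetic data and score normalization below are assumptions; only the linear-model-plus-residual structure follows the abstract:

```python
# Minimal sketch: fit pupil diameter as a linear function of frame brightness,
# then treat the residual as a per-frame "exciting-ness" signal.
# Synthetic data; the fitting details are an illustration, not the paper's code.
import numpy as np

rng = np.random.default_rng(1)
brightness = rng.random(500)                             # mean frame luminance, 0..1
true_arousal = np.clip(rng.normal(0.0, 0.2, 500), 0, None)
pupil = 6.0 - 3.0 * brightness + true_arousal            # brighter frame -> smaller pupil

# Least-squares fit of pupil ~ a + b * brightness (the light-reflex component).
A = np.column_stack([np.ones_like(brightness), brightness])
coeffs, *_ = np.linalg.lstsq(A, pupil, rcond=None)
predicted = A @ coeffs

residual = pupil - predicted                             # attributed to emotional arousal
score = (residual - residual.min()) / (residual.max() - residual.min() + 1e-9)
print("most 'exciting' frame index:", int(score.argmax()))
```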
Leveraging gaze data for segmentation and effects on comics
Proceedings of the ACM Symposium on Applied Perception Pub Date: 2016-07-22 DOI: 10.1145/2931002.2947703
Ishwarya Thirunarayanan, S. Koppal, J. Shea, Eakta Jain
{"title":"Leveraging gaze data for segmentation and effects on comics","authors":"Ishwarya Thirunarayanan, S. Koppal, J. Shea, Eakta Jain","doi":"10.1145/2931002.2947703","DOIUrl":"https://doi.org/10.1145/2931002.2947703","url":null,"abstract":"In this work, we present a semi-automatic method based on gaze data to identify the objects in comic images on which digital effects will look best. Our key contribution is a robust technique to cluster the noisy gaze data without having to specify the number of clusters as input. We also present an approach to segment the identified object of interest.","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"123 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114619142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
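The abstract above highlights clustering noisy gaze data without specifying the number of clusters in advance. Density-based clustering is one family of techniques with that property; the sketch below uses scikit-learn's DBSCAN as a stand-in on synthetic gaze points and is not a reproduction of the authors' method:

```python
# Minimal sketch: cluster noisy 2D gaze points without pre-specifying the
# number of clusters, using DBSCAN as a stand-in technique (illustration only).
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(2)
# Two dense fixation regions plus scattered noise, in normalized image coordinates.
region_a = rng.normal([0.30, 0.40], 0.02, size=(60, 2))
region_b = rng.normal([0.70, 0.55], 0.02, size=(60, 2))
noise = rng.random((20, 2))
gaze = np.vstack([region_a, region_b, noise])

labels = DBSCAN(eps=0.05, min_samples=10).fit_predict(gaze)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)  # label -1 marks noise
print("clusters found:", n_clusters)
```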
Is the motion of a child perceivably different from the motion of an adult?
Proceedings of the ACM Symposium on Applied Perception Pub Date: 2016-07-22 DOI: 10.1145/2931002.2963133
Eakta Jain, Lisa Anthony, Aishat Aloba, Amanda Castonguay, Isabella Cuba, Alex Shaw, Julia Woodward
{"title":"Is the motion of a child perceivably different from the motion of an adult?","authors":"Eakta Jain, Lisa Anthony, Aishat Aloba, Amanda Castonguay, Isabella Cuba, Alex Shaw, Julia Woodward","doi":"10.1145/2931002.2963133","DOIUrl":"https://doi.org/10.1145/2931002.2963133","url":null,"abstract":"","PeriodicalId":102213,"journal":{"name":"Proceedings of the ACM Symposium on Applied Perception","volume":"279 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114025602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2