Proceedings of the 2nd ACM symposium on Spatial user interaction: Latest Publications

T(ether): spatially-aware handhelds, gestures and proprioception for multi-user 3D modeling and animation
David Lakatos, M. Blackshaw, A. Olwal, Zachary Barryte, K. Perlin, H. Ishii
Proceedings of the 2nd ACM symposium on Spatial user interaction. Pub Date: 2014-10-04. DOI: 10.1145/2659766.2659785
Abstract: T(ether) is a spatially-aware display system for multi-user, collaborative manipulation and animation of virtual 3D objects. The handheld display acts as a window into virtual reality, providing users with a perspective view of 3D data. T(ether) tracks users' heads, hands, fingers and pinching, in addition to a handheld touch screen, to enable rich interaction with the virtual scene. We introduce gestural interaction techniques that exploit proprioception to adapt the UI based on the hand's position above, behind or on the surface of the display. These spatial interactions use a tangible frame of reference to help users manipulate and animate the model in addition to controlling environment properties. We report on initial user observations from an experiment for 3D modeling, which indicate T(ether)'s potential for embodied viewport control and 3D modeling interactions.
Citations: 40
A self-experimentation report about long-term use of fully-immersive technology
Frank Steinicke, G. Bruder
Proceedings of the 2nd ACM symposium on Spatial user interaction. Pub Date: 2014-10-04. DOI: 10.1145/2659766.2659767
Abstract: Virtual and digital worlds have become an essential part of our daily life, and many activities that we used to perform in the real world, such as communication, e-commerce, or games, have now been transferred to the virtual world. This transition has been addressed many times by science fiction literature and cinematographic works, which often show dystopic visions in which humans live their lives in a virtual reality (VR)-based setup, while they are immersed into a virtual or remote location by means of avatars or surrogates. In order to gain a better understanding of how living in such a virtual environment (VE) would impact human beings, we conducted a self-experiment in which we exposed a single participant to an immersive VR setup for 24 hours (divided into repeated sessions of two hours of VR exposure followed by ten-minute breaks), which is to our knowledge the longest documented use of an immersive VE so far. We measured different metrics to analyze how human perception, behavior, cognition, and the motor system change over time in a fully isolated virtual world.
Citations: 53
Object-based touch manipulation for remote guidance of physical tasks
Matt Adcock, Dulitha Ranatunga, Ross T. Smith, B. Thomas
Proceedings of the 2nd ACM symposium on Spatial user interaction. Pub Date: 2014-10-04. DOI: 10.1145/2659766.2659768
Abstract: This paper presents a spatial multi-touch system for the remote guidance of physical tasks that uses semantic information about the physical properties of the environment. It enables a remote expert to observe a video feed of the local worker's environment and directly specify object movements via a touch display. Visual feedback for the gestures is displayed directly in the local worker's physical environment with Spatial Augmented Reality and observed by the remote expert through the video feed. A virtual representation of the physical environment is captured with a Kinect that facilitates the context-based interactions. We evaluate two methods of remote worker interaction, object-based and sketch-based, and also investigate the impact of two camera positions, top and side, on task performance. Our results indicate that translation and aggregate tasks could be performed more accurately via the object-based technique when the top-down camera feed was used. With the side-on camera view, however, sketching was faster and rotations were more accurate. We also found that for object-based interactions the top view was better on all four of our measured criteria, while for sketching no significant difference was found between camera views.
Citations: 21
Simulator for developing gaze sensitive environment using corneal reflection-based remote gaze tracker
Takashi Nagamatsu, Michiya Yamamoto, G. Rigoll
Proceedings of the 2nd ACM symposium on Spatial user interaction. Pub Date: 2014-10-04. DOI: 10.1145/2659766.2661207
Abstract: We describe a simulator for developing a gaze sensitive environment using a corneal reflection-based remote gaze tracker. The simulator can arrange cameras and IR-LEDs in 3D to check the measuring range to suit the target volume prior to implementation. We applied it to a museum showcase and a car.
Citations: 2
Emotional space: understanding affective spatial dimensions of constructed embodied shapes
Edward F. Melcer, K. Isbister
Proceedings of the 2nd ACM symposium on Spatial user interaction. Pub Date: 2014-10-04. DOI: 10.1145/2659766.2661208
Abstract: We build upon recent research designing a constructive, multi-touch emotional assessment tool and present preliminary qualitative results from a Wizard of Oz study simulating the tool with clay. Our results showed the importance of emotionally contextualized spatial orientations, manipulations, and interactions of real world objects in the constructive process, and led to the identification of two new affective dimensions for the tool.
Citations: 1
Session details: Seeing, walking and being in spatial VEs
Steven K. Feiner
Proceedings of the 2nd ACM symposium on Spatial user interaction. Pub Date: 2014-10-04. DOI: 10.1145/3247433
Citations: 0
GestureAnalyzer: visual analytics for pattern analysis of mid-air hand gestures
Sujin Jang, N. Elmqvist, K. Ramani
Proceedings of the 2nd ACM symposium on Spatial user interaction. Pub Date: 2014-10-04. DOI: 10.1145/2659766.2659772
Abstract: Understanding the intent behind human gestures is a critical problem in the design of gestural interactions. A common method to observe and understand how users express gestures is to use elicitation studies. However, these studies require time-consuming analysis of user data to identify gesture patterns. Also, human analysis cannot describe gestures in as much detail as data-based representations of motion features. In this paper, we present GestureAnalyzer, a system that supports exploratory analysis of gesture patterns by applying interactive clustering and visualization techniques to motion tracking data. GestureAnalyzer enables rapid categorization of similar gestures, and visual investigation of various geometric and kinematic properties of user gestures. We describe the system components, and then demonstrate its utility through a case study on mid-air hand gestures obtained from elicitation studies.
Citations: 32
Visual aids in 3D point selection experiments
Robert J. Teather, W. Stuerzlinger
Proceedings of the 2nd ACM symposium on Spatial user interaction. Pub Date: 2014-10-04. DOI: 10.1145/2659766.2659770
Abstract: We present a study investigating the influence of visual aids on 3D point selection tasks. In a Fitts' law pointing experiment, we compared the effects of texturing, highlighting targets upon being touched, and the presence of support cylinders intended to eliminate floating targets. Results of the study indicate that texturing and support cylinders did not significantly influence performance. Enabling target highlighting increased movement speed, while decreasing error rate. Pointing throughput was unaffected by this speed-accuracy tradeoff. Highlighting also eliminated significant differences between selection coordinate depth deviation and the deviation in the two orthogonal axes.
Citations: 53
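The pointing throughput mentioned in this abstract is conventionally computed from the effective index of difficulty in the ISO 9241-9 style. A minimal sketch of that calculation (not code from the paper; the input values are made up for illustration):

```python
import math

def effective_throughput(amplitude, endpoint_sd, movement_time):
    """Effective throughput in bits/s, ISO 9241-9 style.

    W_e = 4.133 * SD of selection endpoints (effective width),
    ID_e = log2(A / W_e + 1), TP = ID_e / MT.
    """
    w_e = 4.133 * endpoint_sd
    id_e = math.log2(amplitude / w_e + 1)
    return id_e / movement_time

# Hypothetical trial data: 0.25 m movements, 8 mm endpoint SD,
# 0.6 s mean movement time.
tp = effective_throughput(0.25, 0.008, 0.6)
```

Highlighting speeding up movement while reducing errors, yet leaving throughput unchanged, is exactly the tradeoff this measure is designed to absorb: a smaller MT is offset by a larger effective width.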
HOBS: head orientation-based selection in physical spaces
Ben Zhang, Yu-Hsiang Chen, Claire Tuna, Achal Dave, Yang Li, Edward A. Lee, Björn Hartmann
Proceedings of the 2nd ACM symposium on Spatial user interaction. Pub Date: 2014-10-04. DOI: 10.1145/2659766.2659773
Abstract: Emerging head-worn computing devices can enable interactions with smart objects in physical spaces. We present the iterative design and evaluation of HOBS -- a Head-Orientation Based Selection technique for interacting with these devices at a distance. We augment a commercial wearable device, Google Glass, with an infrared (IR) emitter to select targets equipped with IR receivers. Our first design shows that a naive IR implementation can outperform list selection, but has poor performance when refinement between multiple targets is needed. A second design uses IR intensity measurement at targets to improve refinement. To address the lack of natural mapping of on-screen target lists to spatial target location, our third design infers a spatial data structure of the targets enabling a natural head-motion based disambiguation. Finally, we demonstrate a universal remote control application using HOBS and report qualitative user impressions.
Citations: 31
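The second HOBS design refines selection using IR intensity measured at each target. One plausible shape for that refinement step, sketched here as an assumption rather than the authors' implementation (target names and the margin threshold are invented):

```python
def select_target(intensities, margin=0.2):
    """Pick the receiver reporting the strongest IR intensity.

    If the strongest reading does not beat the runner-up by a
    margin, return None to signal ambiguity -- the cue for HOBS's
    head-motion based disambiguation to take over.
    """
    ranked = sorted(intensities.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) == 1 or ranked[0][1] - ranked[1][1] >= margin:
        return ranked[0][0]
    return None

# Clear winner vs. an ambiguous pair of nearby receivers.
print(select_target({"lamp": 0.9, "tv": 0.3}))    # "lamp"
print(select_target({"lamp": 0.5, "tv": 0.45}))   # None
```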
Proposing a classification model for perceptual target selection on large displays
Seungjae Oh, Heejin Kim, H. So
Proceedings of the 2nd ACM symposium on Spatial user interaction. Pub Date: 2014-10-04. DOI: 10.1145/2659766.2661216
Abstract: In this research, we propose a linear SVM classification model for perceptual distal target selection on large displays. The model is based on two simple features of users' finger movements reflecting users' visual perception of targets. The model achieves an accuracy of 92.78% in predicting the intended target at the end point.
Citations: 0
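The abstract describes a linear classifier over two finger-movement features. As a rough, self-contained illustration of that idea, a plain perceptron stands in here for the SVM solver, and the feature names and toy data are hypothetical, not from the paper:

```python
def train_linear(samples, labels, epochs=100, lr=0.1):
    """Fit a linear decision boundary w.x + b over 2D features
    with perceptron updates (an SVM would instead maximize the
    margin, but yields the same linear form of classifier)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if pred != y:  # misclassified: nudge boundary toward sample
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1

# Toy data: (normalized peak velocity, terminal drift) per movement,
# labeled -1 / +1 for two candidate targets. Purely illustrative.
X = [(0.1, 0.1), (0.2, 0.15), (0.9, 0.8), (0.8, 0.95)]
y = [-1, -1, 1, 1]
w, b = train_linear(X, y)
```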