Proceedings of the 2nd ACM symposium on Spatial user interaction: Latest Publications

Designing the user in user interfaces
Proceedings of the 2nd ACM symposium on Spatial user interaction · Pub Date: 2014-10-05 · DOI: 10.1145/2659766.2642919
M. Bolas
Abstract: In the good old days, the human was here, the computer there, and a good living was to be made by designing ways to interface between the two. Now we find ourselves unthinkingly pinching to zoom in on a picture in a paper magazine. User interfaces are changing instinctual human behavior and instinctual human behavior is changing user interfaces. We point or look left in the "virtual" world just as we point or look left in the physical. It is clear that nothing is clear anymore: the need for "interface" vanishes when the boundaries between the physical and the virtual disappear. We are at a watershed moment when to experience being human means to experience being machine. When there is not a user interface - it is just what you do. When instinct supplants mice and menus and the interface insinuates itself into the human psyche. We are redefining and creating what it means to be human in this new physical/virtual integrated reality - we are not just designing user interfaces, we are designing users.
Citations: 16
Exploring gestural interaction in smart spaces using head mounted devices with ego-centric sensing
Proceedings of the 2nd ACM symposium on Spatial user interaction · Pub Date: 2014-10-04 · DOI: 10.1145/2659766.2659781
Barry Kollee, Sven G. Kratz, Anthony Dunnigan
Abstract: It is now possible to develop head-mounted devices (HMDs) that allow for ego-centric sensing of mid-air gestural input. Therefore, we explore the use of HMD-based gestural input techniques in smart space environments. We developed a usage scenario to evaluate HMD-based gestural interactions and conducted a user study to elicit qualitative feedback on several HMD-based gestural input techniques. Our results show that for the proposed scenario, mid-air hand gestures are preferred to head gestures for input and rated more favorably compared to non-gestural input techniques available on existing HMDs. Informed by these study results, we developed a prototype HMD system that supports gestural interactions as proposed in our scenario. We conducted a second user study to quantitatively evaluate our prototype, comparing several gestural and non-gestural input techniques. The results of this study show no clear advantage or disadvantage of gestural inputs vs. non-gestural input techniques on HMDs. We did find that voice control as (sole) input modality performed worst compared to the other input techniques we evaluated. Lastly, we present two further applications implemented with our system, demonstrating 3D scene viewing and ambient light control. We conclude by briefly discussing the implications of ego-centric vs. exo-centric tracking for interaction in smart spaces.
Citations: 26
Making VR work: building a real-world immersive modeling application in the virtual world
Proceedings of the 2nd ACM symposium on Spatial user interaction · Pub Date: 2014-10-04 · DOI: 10.1145/2659766.2659780
M. Mine, A. Yoganandan, D. Coffey
Abstract: Building a real-world immersive 3D modeling application is hard. In spite of the many supposed advantages of working in the virtual world, users quickly tire of waving their arms about and the resulting models remain simplistic at best. The dream of creation at the speed of thought has largely remained unfulfilled due to numerous factors such as the lack of suitable menu and system controls, inability to perform precise manipulations, lack of numeric input, challenges with ergonomics, and difficulties with maintaining user focus and preserving immersion. The focus of our research is on the building of virtual world applications that can go beyond the demo and can be used to do real-world work. The goal is to develop interaction techniques that support the richness and complexity required to build complex 3D models, yet minimize expenditure of user energy and maximize user comfort. We present an approach that combines the natural and intuitive power of VR interaction, the precision and control of 2D touch surfaces, and the richness of a commercial modeling package. We also discuss the benefits of collocating 2D touch with 3D bimanual spatial input, the challenges in designing a custom controller targeted at achieving the same, and the new avenues that this collocation creates.
Citations: 40
Measurements of operating time in first and third person views using video see-through HMD
Proceedings of the 2nd ACM symposium on Spatial user interaction · Pub Date: 2014-10-04 · DOI: 10.1145/2659766.2661204
T. Koike
Abstract: We measured the operation times of two tasks using a video see-through head-mounted display (HMD) in first and third person views.
Citations: 1
Real-time sign language recognition using RGBD stream: spatial-temporal feature exploration
Proceedings of the 2nd ACM symposium on Spatial user interaction · Pub Date: 2014-10-04 · DOI: 10.1145/2659766.2661214
Fuyang Huang, Zelong Sun, Q. Xu, F. Sze, Tang Wai Lan, Xiaogang Wang
Abstract: We propose a novel spatial-temporal feature set for sign language recognition, wherein we construct explicit spatial and temporal features that capture both hand movement and hand shape. Experimental results show that the proposed solution outperforms an existing one in terms of accuracy.
Citations: 1
RUIS: a toolkit for developing virtual reality applications with spatial interaction
Proceedings of the 2nd ACM symposium on Spatial user interaction · Pub Date: 2014-10-04 · DOI: 10.1145/2659766.2659774
Tuukka M. Takala
Abstract: We introduce Reality-based User Interface System (RUIS), a virtual reality (VR) toolkit aimed at students and hobbyists, which we have used in an annually organized VR course for the past four years. The RUIS toolkit provides 3D user interface building blocks for creating immersive VR applications with spatial interaction and stereo 3D graphics, while supporting affordable VR peripherals like Kinect, PlayStation Move, Razer Hydra, and Oculus Rift. We describe a novel spatial interaction scheme that combines freeform, full-body interaction with traditional video game locomotion, which can be easily implemented with RUIS. We also discuss the specific challenges associated with developing VR applications, and how they relate to the design principles behind RUIS. Finally, we validate our toolkit by comparing development difficulties experienced by users of different software toolkits, and by presenting several VR applications created with RUIS, demonstrating a variety of spatial user interfaces that it can produce.
Citations: 42
HoloLeap: towards efficient 3D object manipulation on light field displays
Proceedings of the 2nd ACM symposium on Spatial user interaction · Pub Date: 2014-10-04 · DOI: 10.1145/2659766.2661223
V. K. Adhikarla, Paweł W. Woźniak, Robert J. Teather
Abstract: We present HoloLeap, which uses a Leap Motion controller for 3D model manipulation on a light field display (LFD). Like autostereo displays, LFDs support glasses-free 3D viewing. Unlike autostereo displays, LFDs automatically accommodate multiple viewpoints without the need for additional tracking equipment. We describe a gesture-based object manipulation technique that enables manipulation of 3D objects with 7 DOF by leveraging natural and familiar gestures. We provide an overview of research questions aimed at optimizing gestural input on light field displays.
Citations: 5
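As background to the 7-DOF manipulation mentioned in the HoloLeap abstract above (translation, rotation, and uniform scale driven by tracked hands), a standard way to derive such parameters is the "two-point handle": the midpoint of two tracked points gives translation, their distance ratio gives scale, and the change in the inter-point direction gives a rotation. The sketch below is a generic, minimal formulation under that assumption; it is illustrative only, not code from the paper, and the function name `seven_dof_update` is hypothetical.

```python
import math

# Small 3-vector helpers (plain tuples, no external dependencies).
def vsub(a, b): return tuple(x - y for x, y in zip(a, b))
def vadd(a, b): return tuple(x + y for x, y in zip(a, b))
def vmul(a, s): return tuple(x * s for x in a)
def vdot(a, b): return sum(x * y for x, y in zip(a, b))
def vcross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])
def vnorm(a): return math.sqrt(vdot(a, a))

def seven_dof_update(p1_old, p2_old, p1_new, p2_new):
    """Derive a per-frame manipulation update from two tracked points
    (e.g. two fingertips): translation (3 DOF), uniform scale (1 DOF),
    and an axis-angle rotation. Note: the twist about the inter-point
    axis itself is unobservable from two points alone, so a full 7-DOF
    scheme needs an extra cue (e.g. hand roll) for that component."""
    mid_old = vmul(vadd(p1_old, p2_old), 0.5)
    mid_new = vmul(vadd(p1_new, p2_new), 0.5)
    translation = vsub(mid_new, mid_old)              # midpoint delta
    v_old = vsub(p2_old, p1_old)
    v_new = vsub(p2_new, p1_new)
    scale = vnorm(v_new) / vnorm(v_old)               # distance ratio
    axis = vcross(v_old, v_new)                       # rotation axis
    angle = math.atan2(vnorm(axis), vdot(v_old, v_new))
    n = vnorm(axis)
    axis = vmul(axis, 1.0 / n) if n > 1e-9 else (0.0, 0.0, 1.0)
    return translation, scale, axis, angle

# Example: points pinch-rotate 90 degrees in the XY plane while
# the grip doubles in length.
t, s, ax, ang = seven_dof_update((0, 0, 0), (1, 0, 0),
                                 (0, 0, 0), (0, 2, 0))
```

The resulting axis-angle pair can be fed into any quaternion or matrix representation to update the object's pose each frame.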
Augmenting views on large format displays with tablets
Proceedings of the 2nd ACM symposium on Spatial user interaction · Pub Date: 2014-10-04 · DOI: 10.1145/2659766.2661227
Phil Lindner, Adolfo Rodriguez, T. Uram, M. Papka
Abstract: Large format displays are commonplace for viewing large scientific datasets. These displays often find their way into collaborative spaces, allowing for multiple individuals to be collocated with the display, though multi-modal interaction with the displayed content remains a challenge. We have begun development of a tablet-based interaction mode for use with large format displays to augment these workspaces.
Citations: 0
Depth cues and mouse-based 3D target selection
Proceedings of the 2nd ACM symposium on Spatial user interaction · Pub Date: 2014-10-04 · DOI: 10.1145/2659766.2661221
Robert J. Teather, W. Stuerzlinger
Abstract: We investigated mouse-based 3D selection using one-eyed cursors, evaluating stereo and head-tracking. Stereo cursors significantly reduced performance for targets at different depths, but the one-eyed cursor yielded some discomfort.
Citations: 4
Projection augmented physical visualizations
Proceedings of the 2nd ACM symposium on Spatial user interaction · Pub Date: 2014-10-04 · DOI: 10.1145/2659766.2661210
Simon Stusak, M. Teufel
Abstract: Physical visualizations are an emergent area of research and appear in increasingly diverse forms. While they provide an engaging way of data exploration, they are often limited by a fixed representation and lack interactivity. In this work we discuss our early approaches and experiences in combining physical visualizations with spatial augmented reality and present an initial prototype.
Citations: 2