Proceedings. IEEE Symposium on 3D User Interfaces: Latest Publications

Augmented Reality exhibits of constructive art: 8th annual 3DUI Contest
Rongkai Guo, Ryan P. McMahan, B. Weyers
Pub Date: 2017-04-05. DOI: 10.1109/3DUI.2017.7893367. Pages: 253.
Abstract: The 8th annual IEEE 3DUI Contest focuses on the development of a 3D User Interface (3DUI) for an Augmented Reality (AR) exhibit of constructive art. The 3DUI Contest is part of the 2017 IEEE Symposium on 3D User Interfaces held in Los Angeles, California. The contest was open to anyone interested in 3DUIs, from researchers to students, enthusiasts, and professionals. The purpose of the contest is to stimulate innovative and creative solutions to challenging 3DUI problems.
Citations: 1
Internet of abilities: Human augmentation, and beyond (keynote)
J. Rekimoto
Pub Date: 2017-03-18. DOI: 10.1109/3DUI.2017.7893310. Pages: 1.
Abstract: Traditionally, the field of Human Computer Interaction (HCI) was primarily concerned with designing and investigating interfaces between humans and machines. However, with recent technological advances, the concepts of "enhancing", "augmenting" or even "re-designing" humans themselves are becoming feasible and serious topics of scientific research as well as engineering development. "Augmented Human" is a term that I use to refer to this overall research direction. Augmented Human introduces a fundamental paradigm shift in HCI: from human-computer interaction to human-computer integration, and our abilities will be mutually connected through networks (what we call IoA, or Internet of Abilities, as the next step of IoT: Internet of Things). In this talk, I will discuss rich possibilities and distinct challenges in enhancing human abilities. I will introduce our recent projects including the design of flying cameras as our remote and external eyes, a home appliance that can increase your happiness, an organic physical wall/window that dynamically mediates the environment, and an immersive human-human connection concept called "JackIn."
Citations: 3
Keynote speaker: Getting real
Steven K. Feiner
Pub Date: 2016-03-19. DOI: 10.1109/3DUI.2016.7460022. Pages: xi.
Abstract: Wide-FOV, 6DOF-tracked, consumer head-worn displays are no longer just dev kits, bimanual 3D input devices can be found at Best Buy, and full-fledged graphics packages are built into web browsers. Meanwhile, augmented reality is poised to make the return trip from hand-held phones and tablets, back to the head-worn displays in which it was born. 3D is here for real. Yet, we all know how difficult it is to create effective 3D user interfaces. In this talk, I will discuss my thoughts about why this is so, and what we can do about it. I will present some of the research directions that my lab has been exploring, and suggest where I think our field may be headed next.
Citations: 0
3DUIdol - 6th annual 3DUI contest
Rongkai Guo, M. Marner, B. Weyers
Pub Date: 2015-03-23. DOI: 10.1109/3DUI.2015.7131768. Pages: 197.
Abstract: The 6th annual IEEE 3DUI contest focuses on Virtual Music Instruments (VMIs), and on 3D user interfaces for playing them. The contest is part of the IEEE 2015 3DUI Symposium held in Arles, France. The contest is open to anyone interested in 3D User Interfaces (3DUIs), from researchers to students, enthusiasts, and professionals. The purpose of the contest is to stimulate innovative and creative solutions to challenging 3DUI problems. Due to the recent explosion of affordable and portable 3D devices, this year's contest will be judged live at 3DUI. Judging will be done by selected 3DUI experts during on-site presentations at the conference. Therefore, contestants are required to bring their systems for live judging and for attendees to experience them.
Citations: 0
Slice-n-Swipe: A free-hand gesture user interface for 3D point cloud annotation
F. Bacim, Mahdi Nabiyouni, D. Bowman
Pub Date: 2014-03-29. DOI: 10.1109/3DUI.2014.6798882. Pages: 185-186.
Abstract: Three-dimensional point clouds are generated by devices such as laser scanners and depth cameras, but their output is a set of unstructured, unlabeled points. Many scenarios require users to identify parts of the point cloud through manual annotation. Inspired by the current generation of "natural user interface" technologies, we present Slice-n-Swipe, a technique for 3D point cloud annotation based on free-hand gesture input. The technique is based on a chef's knife metaphor, and uses progressive refinement to allow the user to specify the points of interest. We demonstrate the Slice-n-Swipe concept with a prototype using the Leap Motion Controller for free-hand gesture input and a 3D mouse for virtual camera control.
Citations: 45
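The progressive-refinement idea behind the chef's knife metaphor - repeatedly cutting the cloud with a virtual plane and keeping one side - can be illustrated with a minimal sketch. This NumPy implementation and its function names are ours for illustration, not the authors' code:

```python
import numpy as np

def slice_points(points, plane_point, plane_normal, keep_positive=True):
    """Cut a point cloud with a plane (the 'knife') and keep one side.

    Repeated cuts progressively refine the selection down to the
    points of interest, as in Slice-n-Swipe's refinement loop.
    """
    normal = np.asarray(plane_normal, dtype=float)
    normal /= np.linalg.norm(normal)
    # Signed distance of every point from the cutting plane.
    signed_dist = (points - np.asarray(plane_point, dtype=float)) @ normal
    mask = signed_dist >= 0 if keep_positive else signed_dist < 0
    return points[mask]

# Two successive cuts isolate one corner region of a random unit cube.
rng = np.random.default_rng(0)
cloud = rng.random((1000, 3))
step1 = slice_points(cloud, plane_point=[0.5, 0, 0], plane_normal=[1, 0, 0])
step2 = slice_points(step1, plane_point=[0, 0.5, 0], plane_normal=[0, 1, 0])
```

In the actual technique the cutting plane would come from the tracked hand pose rather than hard-coded coordinates.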
Touching the Cloud: Bimanual annotation of immersive point clouds
Paul Lubos, Rüdiger Beimler, Markus Lammers, Frank Steinicke
Pub Date: 2014-03-29. DOI: 10.1109/3DUI.2014.6798885. Pages: 191-192.
Abstract: In this paper we present "Touching the Cloud", a bi-manual user interface for the interaction, selection and annotation of immersive point cloud data. With minimal instrumentation, the setup allows a user in an immersive head-mounted display (HMD) environment to naturally interact with point clouds. By tracking the user's hands using an OpenNI sensor and displaying them in the virtual environment (VE), the user can touch the virtual 3D point cloud in midair and transform it with pinch gestures inspired by smartphone-based interaction. In addition, by triggering voice- or button-press-activated commands, the user can select, segment and annotate the immersive point cloud, thereby creating hierarchical exploded view models.
Citations: 17
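The smartphone-inspired two-handed pinch transform can be sketched as follows: scale comes from the change in hand separation, translation from the motion of the midpoint between the hands. This is an illustrative reading of the abstract, not the paper's implementation:

```python
import numpy as np

def bimanual_transform(points, left_old, right_old, left_new, right_new):
    """Scale and translate a point cloud from the motion of two pinch points,
    mimicking smartphone-style two-finger gestures performed in midair."""
    left_old = np.asarray(left_old, dtype=float)
    right_old = np.asarray(right_old, dtype=float)
    left_new = np.asarray(left_new, dtype=float)
    right_new = np.asarray(right_new, dtype=float)
    # Uniform scale: ratio of new to old hand separation.
    scale = np.linalg.norm(right_new - left_new) / np.linalg.norm(right_old - left_old)
    # Translation: the midpoint between the hands drags the cloud with it.
    mid_old = (left_old + right_old) / 2
    mid_new = (left_new + right_new) / 2
    return (points - mid_old) * scale + mid_new
```

Pulling the hands apart to twice their separation doubles the cloud about the grab midpoint, which matches the familiar two-finger zoom behavior.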
The point walker multi-label approach
Hernandi Krammes, M. M. Silva, Theodoro Mota, Matheus T. Tura, Anderson Maciel, L. Nedel
Pub Date: 2014-03-29. DOI: 10.1109/3DUI.2014.6798884. Pages: 189-190.
Abstract: This paper presents a 3D user interface to select and label point sets in a point cloud. A walk-in-place strategy based on a weight platform is used for navigation. Selection is made in two levels of precision. First, a pointing technique is used relying on a smartphone and built-in sensors. Then, an ellipsoidal selection volume is deformed by pinching on the smartphone touchscreen in different orientations. Labels are finally selected by pointing icons and a hierarchy of labels is automatically defined by multiple labeling. Voice is used to create new icons/labels. The paper describes the concepts in our approach and the system implementation.
Citations: 5
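The core of the second, fine-grained selection step is a deformable ellipsoidal volume; a membership test for an axis-aligned version is a few lines. The function below is our sketch of the idea (the paper's ellipsoid may be freely oriented):

```python
import numpy as np

def inside_ellipsoid(points, center, radii):
    """Membership test for an axis-aligned ellipsoidal selection volume.

    Stretching an entry of `radii` mimics deforming the volume by
    pinching on the touchscreen in that orientation.
    """
    normalized = (points - np.asarray(center, dtype=float)) / np.asarray(radii, dtype=float)
    # A point is inside when x²/a² + y²/b² + z²/c² <= 1.
    return np.sum(normalized ** 2, axis=1) <= 1.0
```

Selected points are then the subset of the cloud where the mask is true, ready to receive one or more labels.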
Bi-manual gesture interaction for 3D cloud point selection and annotation using COTS
M. Cabral, Andre Montes, Olavo Belloc, Rodrigo B. D. Ferraz, F. Ferreira, Fabio Doreto, R. Lopes, M. Zuffo
Pub Date: 2014-03-29. DOI: 10.1109/3DUI.2014.6798883. Pages: 187-188.
Abstract: This paper presents our solution to the 3DUI 2014 Contest, which is about selection and annotation of 3D point cloud data. This challenge is a classic problem that, if solved and implemented correctly, can be useful in a wide range of 3D virtual reality applications and environments. Our approach is a robust, simple and intuitive solution based on bi-manual interaction gestures. We provide a first-person navigation mode for data exploration, point selection and annotation, offering a straightforward and intuitive approach to navigation using one's hands. Using a bi-manual gesture interface, the user is able to perform simple but powerful gestures to navigate through the 3D point cloud data within a 3D modelling tool to explore, select and/or annotate points. The implementation is based on COTS (Commercial Off-The-Shelf systems). For modelling and annotation purposes we adopted a widely available open-source tool for 3D editing called Blender. For gesture recognition we adopted the low-cost Leap Motion desktop system. We also performed an informal user study that showed the intuitiveness of our solution: users were able to use our system fairly easily, with a fast learning curve.
Citations: 13
Redirected Touching: Training and Adaptation in Warped Virtual Spaces
Luv Kohli, Mary C. Whitton, Frederick P. Brooks
Pub Date: 2013-03-01. DOI: 10.1109/3DUI.2013.6550201. Pages: 79-86.
Abstract: Redirected Touching is a technique in which virtual space is warped to map many virtual objects onto one real object that serves as a passive haptic prop. Recent work suggests that this mapping can often be predictably unnoticeable and have little effect on task performance. We investigated training and adaptation on a rapid aiming task in a real environment, an unwarped virtual environment, and a warped virtual environment. Participants who experienced a warped virtual space reported an initial strange sensation, but adapted to the warped space after short repeated exposure. Our data indicate that all the virtual training was less effective than real-world training, but after adaptation, participants trained as well in a warped virtual space as in an unwarped one.
Citations: 51
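The space-warping idea can be sketched in its simplest form: as the real hand reaches toward the single physical prop, the rendered virtual hand is progressively offset so that touching the prop coincides with touching a virtual target located elsewhere. This linear blend is our illustration only; the actual technique uses smoother, carefully constructed warps:

```python
import numpy as np

def warped_hand_position(real_hand, real_start, real_target, virtual_target):
    """Map the tracked real hand into virtual space so that reaching the
    one real prop (real_target) coincides with reaching a virtual target.

    The warp blends in the offset between the two targets as the reach
    progresses; at the start there is no offset, at the prop the full
    offset is applied.
    """
    real_hand = np.asarray(real_hand, dtype=float)
    real_start = np.asarray(real_start, dtype=float)
    real_target = np.asarray(real_target, dtype=float)
    # Fraction of the reach completed, clamped to [0, 1].
    reach = np.linalg.norm(real_hand - real_start) / np.linalg.norm(real_target - real_start)
    alpha = np.clip(reach, 0.0, 1.0)
    offset = np.asarray(virtual_target, dtype=float) - real_target
    return real_hand + alpha * offset
```

Because the offset grows gradually with the reach, the discrepancy between proprioception and vision stays small at every instant, which is why such warps can go unnoticed.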
Hook: Heuristics for selecting 3D moving objects in dense target environments
Michael Ortega-Binderberger
Pub Date: 2013-01-01. DOI: 10.1109/3DUI.2013.6550208. Pages: 119-122.
Citations: 8
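No abstract is available for this entry, but the title suggests a score-based selection heuristic for moving targets. One generic way to realize that idea (our illustration from the title alone, not necessarily the paper's actual heuristic) is to let the target currently closest to the cursor accumulate score each frame while all others decay, so a target the user persistently tracks wins out even in a dense, moving scene:

```python
import numpy as np

def update_scores(scores, cursor, targets, gain=2.0, decay=1.0):
    """One frame of a proximity-scoring selection heuristic.

    The target closest to the cursor this frame gains score; every
    target decays, so stale candidates fade away over time.
    """
    dists = np.linalg.norm(np.asarray(targets, dtype=float) - np.asarray(cursor, dtype=float),
                           axis=1)
    closest = int(np.argmin(dists))
    scores = np.maximum(scores - decay, 0.0)  # all targets decay each frame
    scores[closest] += gain                   # the closest target gains
    return scores

# Tracking the middle target over three noisy frames makes it the candidate.
scores = np.zeros(3)
targets = [[0, 0, 0], [1, 0, 0], [2, 0, 0]]
for cursor in ([0.9, 0, 0], [1.0, 0, 0], [1.1, 0, 0]):
    scores = update_scores(scores, cursor, targets)
```

Selection then simply picks the highest-scoring target, which tolerates momentary tracking noise better than picking the instantaneously closest object.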