{"title":"Evaluation of a bimanual simultaneous 7DOF interaction technique in virtual environments","authors":"Isaac Cho, Z. Wartell","doi":"10.1109/3DUI.2015.7131738","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131738","url":null,"abstract":"This paper introduces our novel bimanual interaction technique, Spindle+Wheel, which provides simultaneous 7DOF control. Spindle+Wheel takes advantage of greater finger dexterity, the “bandwidth-of-the-fingers,” and passive haptics by using a pair of precision-grasp 6DOF isotonic input devices rather than either a tracked pair of pinch gloves or a pair of power-grasped 6DOF isotonic input devices. Two user studies show that our simultaneous 7DOF interaction technique outperforms a previous two-handed technique as well as a one-handed scene-in-hand technique for a 7DOF travel task.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124254795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Buyers satisfaction in a virtual fitting room scenario based on realism of avatar","authors":"Qi Sun, Seyedkoosha Mirhosseini, Ievgeniia Gutenko, Ji Hwan Park, C. Papadopoulos, B. Laha, A. Kaufman","doi":"10.1109/3DUI.2015.7131761","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131761","url":null,"abstract":"With the rapid development and widespread availability of hand-held, market-level 3D scanners, character modeling has recently gained attention in both academia and industry. Virtual shopping applications are widely used in e-business. We present our parameter-based human avatar generation system and ongoing work on extending virtual shopping to immersive virtual reality platforms employing natural user interfaces. We discuss ideas for evaluating buyer satisfaction using our system.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124609252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Utilization of variation in stereoscopic depth for encoding aspects of non-spatial data","authors":"Ragaad Altarawneh, S. Humayoun, A. Ebert","doi":"10.1109/3DUI.2015.7131741","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131741","url":null,"abstract":"In this paper, we present our experience in utilizing stereoscopic depth to highlight the structural relations in compound graphs. In this regard, we use stereoscopic depth to encode the different levels of detail in compound graphs, together with the interaction operations provided by the ExpanD technique for expanding or contracting nodes, in order to align graph nodes in 3D space with minimal occlusion. We conducted a controlled evaluation study in which we invited 30 participants to evaluate the approach using different configurations with different graph sizes and different visual cues. The aim of the study was to understand viewers' ability to detect variations in stereoscopic depth. The study results show that stereoscopic depth can be used to encode data aspects of graphs under certain circumstances.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114474359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A robust inside-out approach for 3D interaction with large displays","authors":"D. Scherfgen, R. Herpers, T. Saitov","doi":"10.1109/3DUI.2015.7131739","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131739","url":null,"abstract":"In this contribution, we present several improvements to previous “inside-out” techniques for pointing interaction with large display systems. Fiducial markers are virtually projected from an interaction device's built-in camera onto the displays and overlaid on the display content. We reconstruct the 6-DoF camera pose by tracking these markers in real time. For increased robustness, the marker pattern is dynamically adapted. We address display lag and high pixel response times by precisely timing image captures. Pointing locations are measured with sub-millimeter precision and camera positions with sub-centimeter precision. An update rate of 60 Hz and a latency of 24 ms were achieved. Our technique performed comparably to an OptiTrack system in 2D target selection tasks.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122482627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The digital Intonarumori","authors":"S. Serafin, Amalia de Götzen, Smilen Dimitrov, Steven Gelineck, Cumhur Erkut, N. C. Nilsson, F. Grani, R. Nordahl, S. Trento","doi":"10.1109/3DUI.2015.7131773","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131773","url":null,"abstract":"We propose a reconstruction of Russolo's original Intonarumori instruments that uses modern physical and virtual prototyping techniques such as laser cutting, sensor technologies, and sound synthesis algorithms. Compared to the original instruments by Russolo, the different sonorities are produced in software instead of by physically changing the material of the vibrating strings and wheels inside the instrument. This approach provides flexibility while maintaining the playing style, sonorities, and physicality of the original instruments proposed by Russolo.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116907416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bema: A multimodal interface for expert experiential analysis of political assemblies at the Pnyx in ancient Greece","authors":"Kyungyoon Kim, Bret Jackson, Ioannis Karamouzas, M. Adeagbo, S. Guy, Richard Graff, Daniel F. Keefe","doi":"10.1109/3DUI.2015.7131720","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131720","url":null,"abstract":"We present Bema, a multimodal user interface that enables scholars of Greek rhetoric and oratory to perform virtual reality studies of ancient political assemblies at the hill of the Pnyx. Named after the flat stone speakers' platform utilized at the Pnyx, the Bema interface supports the high-level task of gaining a nuanced understanding of what it would feel like to give or receive a speech together with as many as 14,000 Athenian citizens and, further, how this experience must have changed as a result of at least two massive renovations captured in the archaeological record. Bema integrates solutions for several low-level interaction tasks, including navigating in virtual space and time, adjusting data visualization parameters, interacting with virtual characters, and analyzing spatial audio and architecture. Navigation is accomplished through a World-in-Miniature technique, re-conceived to support multi-touch input within a 4-wall Cave environment. Six degree-of-freedom head tracking and a sound level meter are used to analyze speeches delivered by users. Comparative analysis of different historical phases and assembly sizes is facilitated by the use of crowd simulation to generate realistic spatial arrangements for the assemblymen and staged animated transitions that preserve context while comparing two or more scenarios. An evaluation with our team's scholar of ancient Greek rhetoric and oratory provides support for the most important design decisions and affirms the value of this user interface for experiential analysis.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125984226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design and evaluation of a visual acclimation aid for a semi-natural locomotion device","authors":"Mahdi Nabiyouni, S. Scerbo, Vincent DeVito, Stefan Smolen, Patrick Starrin, D. Bowman","doi":"10.1109/3DUI.2015.7131718","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131718","url":null,"abstract":"One of the limitations of most virtual reality (VR) systems is that users cannot physically walk through large virtual environments. Many solutions have been proposed to this problem, including locomotion devices such as the Virtusphere. Such devices allow the user to employ moderately natural walking motions without physically moving through space, but may actually be difficult to use at first due to a lack of interaction fidelity. We designed and evaluated a visual aid that shows a virtual representation of the sphere to the user during an acclimation phase, reasoning that this would help users understand the forces they were feeling, plan their movements, and better control their movements. In a user study, we evaluated participants' walking performance both during and after an acclimation phase. Half of the participants used the visual aid during acclimation, while the other half had no visual aid. After acclimation, all participants performed more complex walking assessment tasks without any visual aid. The results demonstrate that use of the visual aid during acclimation was effective for improving task performance and decreasing perceived difficulty in the assessment tasks.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127879071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Finger-based manipulation in immersive spaces and the real world","authors":"E. Chapoulie, Theophanis Tsandilas, L. Oehlberg, W. Mackay, G. Drettakis","doi":"10.1109/3DUI.2015.7131734","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131734","url":null,"abstract":"Immersive environments that approximate natural interaction with physical 3D objects are designed to increase the user's sense of presence and improve performance by allowing users to transfer existing skills and expertise from real to virtual environments. However, limitations of current Virtual Reality technologies, e.g., low-fidelity real-time physics simulations and tracking problems, make it difficult to ascertain the full potential of finger-based 3D manipulation techniques. This paper decomposes 3D object manipulation into the component movements, taking into account both physical constraints and mechanics. We fabricate five physical devices that simulate these movements in a measurable way under experimental conditions. We then implement the devices in an immersive environment and conduct an experiment to evaluate direct finger-based against ray-based object manipulation. The key contribution of this work is the careful design and creation of physical and virtual devices to study physics-based 3D object manipulation in a rigorous manner in both real and virtual setups.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131660065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A novel 3D user interface for the immersive design review","authors":"Andrea Martini, L. Colizzi, Francesco Chionna, F. Argese, M. Bellone, P. Cirillo, Vito Palmieri","doi":"10.1109/3DUI.2015.7131757","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131757","url":null,"abstract":"This work describes a novel hardware/software platform dedicated to simplifying the design review process using immersive reality technologies. The proposed platform allows designers to interface with CAD engines and visualize different data types simultaneously in an immersive and stereoscopic multi-view visualization system. This research focuses on the development of a novel immersive user interface, a smart 3D disk with a set of widgets. During the immersive sessions, the user can activate functionalities using cost-effective pointing devices and the 3D UIs projected into the virtual environment. In this way, it is possible to easily manipulate virtual objects, performing basic operations such as rotation and translation as well as more complex CAD functionalities such as surface shape modification. Each feature can be selected inside the virtual world using the smart 3D disk. User evaluations show that the use of a virtual environment may enhance the perception of designers' ideas during the design process and that the smart 3D interfaces simplify the interaction between the user and virtual objects.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128287922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spatially-multiplexed MIMO markers","authors":"Hideaki Uchiyama, S. Haruyama, Atsushi Shimada, H. Nagahara, R. Taniguchi","doi":"10.1109/3DUI.2015.7131765","DOIUrl":"https://doi.org/10.1109/3DUI.2015.7131765","url":null,"abstract":"We present spatially-multiplexed fiducial markers based on the framework of code division multiple access (CDMA), a technique from the field of communications. Since CDMA-based multiplexing is robust to signal noise and interference, multiplexed markers can be demultiplexed under various image noise and transformations. With this framework, we explore the multiple-input and multiple-output (MIMO) paradigm for fiducial markers so that the data capacity of markers can be improved and different users can receive different data from a multiplexed marker.","PeriodicalId":131267,"journal":{"name":"2015 IEEE Symposium on 3D User Interfaces (3DUI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131378456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}