Evaluation of the virtual mirror as a navigational aid for augmented reality driven minimally invasive procedures
Christoph Bichlmeier, E. Euler, T. Blum, Nassir Navab
2010 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). https://doi.org/10.1109/ISMAR.2010.5643555

Abstract: The paper presents a user study investigating perceptual effects of a virtual mirror used as a tangible interactive tool in a medical Augmented Reality scenario. In particular, we evaluated whether the additional perspective provided by the virtual mirror supports instrument guidance in terms of precision and straightness. Thirty-one participants were asked to navigate an endoscopic instrument in a simulated minimally invasive port setting. Results show significantly higher precision of instrument guidance when the virtual mirror provides additional views of the target region.

{"title":"Management of tracking for industrial AR setups","authors":"P. Keitler, Benjamin Becker, G. Klinker","doi":"10.1109/ISMAR.2010.5643553","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643553","url":null,"abstract":"The accuracy of a real time tracking system for industrial AR (IAR) applications often needs to comply with production tolerances. Such a system typically incorporates different off-/online devices so that the overall precision and accuracy cannot be trivially stated. Additionally, tracking needs to be flexible to not interfere with existing working processes and it needs to be operated and maintained free of error by on-site personnel who typically have a quality management (QM) background. For the final validation of such a complex tracking setup, empiric testing alone is either too expensive or lacks generality.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133824196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ARCrowd-a tangible interface for interactive crowd simulation","authors":"Feng Zheng, Hongsong Li","doi":"10.1145/1943403.1943482","DOIUrl":"https://doi.org/10.1145/1943403.1943482","url":null,"abstract":"Manipulating a large virtual crowd in an interactive virtual reality environment is a challenging task due to the limitations of the traditional user interface. To address this problem, a tangible interface based on augmented reality (AR) is introduced. Through the tangible AR interface, the user could manipulate the virtual characters directly, or control the crowd behaviors with markers. These markers are used to adjust the environment factors, the decision-making processes of virtual crowds, and their reactions. The AR interface provides more intuitive means of control for the users, promoting the efficiency of human-machine interface.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114198770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of a retroreflective screen on depth perception in a head-mounted projection display","authors":"Rui Zhang, H. Hua","doi":"10.1109/ISMAR.2010.5643561","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643561","url":null,"abstract":"The perceived depth accuracy is a fundamental performance metrics for an augmented reality display system. Many factors may affect the perceived depth of a head-mounted display (HMD), such as display resolution, interpupillary distance (IPD), conflicting depth cues, stereoacuity, and head tracking error. Besides these typical limiting factors to an HMD-type system, the perceived depth through a head-mounted projection display (HMPD) may be further affected by the usage of a retroreflective screen. In this paper, we will evaluate the perceived depth accuracy of an HMPD system using a perceptual depth matching method. The main factor to be investigated in our study is the position of a retroreflective screen relative to the projection image plane, with the projection image plane placed at different distances to the user. Overall, it is found that the ratio of the judged distance to real distance and the standard deviation of the judged distance increase linearly with the reference object distance. A transition effect from depth underestimation to overestimation has been observed at the reference object distance of around 1.4m. The position of a retroreflective screen only has a significant effect on the depth judgment error around this switching point. The paper also analyzes various effects brought by a retroreflective screen on the depth judgment. The depth cue and the image luminance reduction brought by a retroreflective screen could be the main factors that affect the depth judgment accuracy.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123650094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Floyd-Warshall all-pair shortest path for accurate multi-marker calibration
Lejing Wang, M. Springer, Tim Hauke Heibel, Nassir Navab
2010 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). https://doi.org/10.1109/ISMAR.2010.5643605

Abstract: We propose a novel method to compute the poses of randomly positioned square markers in one world coordinate frame from multiple camera views, taking into account the predicted accuracy of the camera pose estimation for each marker. The problem of computing the best closed-form solution for the world pose of each marker is modeled as an all-pairs shortest-path problem in graph theory. The computed world poses are further optimized by minimizing geometric distances in the images. Experimental results show that incorporating the predicted accuracy of the pose estimation for each marker yields consistently high-quality calibration results, independent of the order of the image sequence, compared to cases where this knowledge is not used.

{"title":"SnapAR: Storing snapshots for quick viewpoint switching in hand-held augmented reality","authors":"Mengu Sukan, Steven K. Feiner","doi":"10.1109/ISMAR.2010.5643603","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643603","url":null,"abstract":"Many tasks require a user to move between various locations within an environment to get different perspectives. This can take significant time and effort, especially when the user must switch among those viewpoints repeatedly. We explore augmented reality interaction techniques that involve taking still pictures of a physical scene using a tracked hand-held magic lens and seamlessly switching between augmenting either the live view or one of the still views, without needing to physically revisit the snapshot locations. We describe our optical-marker-tracking-based implementation and how we represent and switch among snapshots. To determine the effectiveness of our techniques, we developed a test application that lets its user view physical and virtual objects from different viewpoints.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121331439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Range-finding projectors: Visualizing range information without sensors","authors":"S. Kagami","doi":"10.1109/ISMAR.2010.5643586","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643586","url":null,"abstract":"We describe sensorless visualizing techniques of 3D range information of object surfaces, in which the range information is projected directly onto the real-world surface by projectors. The proposed system consists of two projectors that merely project still images. Two techniques, namely, contour map projection based on the moire method and pseudo-color map projection based on complementary colors, are shown to be used simultaneously.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129533009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PoP-EYE environment: Mixed Reality using 3D Photo Collections","authors":"F. Nagl, P. Grimm, Bastian Birnbach, D. F. Abawi","doi":"10.1109/ISMAR.2010.5643594","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643594","url":null,"abstract":"In many application areas such as urban planning or in the interior design, excellent planning tools do exist. These tools often offer 3D visualizations. For a more realistic perception and a better acceptance of these 3D visualizations, it is desirable, to integrate these into real environments. Nowadays, this is complex and time consuming, e.g. either good experience with Photoshop or a high effort to build a Mixed Reality application is necessary. In addition, it is even more expensive to create a MR application with such a quality, that the user can not distinguish real objects of his surrounding area and embedded virtual objects.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"15 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120846159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Advanced self-contained object removal for realizing real-time Diminished Reality in unconstrained environments","authors":"Jan Herling, W. Broll","doi":"10.1109/ISMAR.2010.5643572","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643572","url":null,"abstract":"While Augmented Reality has always been restricted to adding artificial content to the real environment, Diminished Reality allows for removing real world content. Existing approaches however, either require complex setups or are not applicable in real-time. In this paper we present our approach for removing real-world objects from a live video stream of the user's real environment. Our approach is based on a simple setup and neither requires any pre-processing nor any information on the structure and location of the objects to be removed or on their background. Our approach is based on the identification of the objects to be removed combined with an image completion and synthesis algorithm. The performance of our approach is one to two magnitudes better than that of previous work in the area of image completion, providing real-time object cancellation on standard laptop or tablet computers.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116785383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Augmented reality in large environments: Application to aided navigation in urban context
V. Gay-Bellile, P. Lothe, S. Bourgeois, E. Royer, S. Naudet-Collette
2010 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). https://doi.org/10.1109/ISMAR.2010.5643579

Abstract: This paper addresses the challenging issue of vision-based localization in an urban context. It briefly describes our contributions to large-environment modeling and accurate camera localization. The efficiency of the resulting system is illustrated through Augmented Reality results on a large trajectory of several hundred meters.