Daniel Wagner, Gerhard Reitmayr, Alessandro Mulloni, T. Drummond, D. Schmalstieg. "Pose tracking from natural features on mobile phones." 2008 7th IEEE/ACM International Symposium on Mixed and Augmented Reality (ISMAR). DOI: 10.1109/ISMAR.2008.4637338

Abstract: In this paper we present two techniques for natural feature tracking in real time on mobile phones. We achieve interactive frame rates of up to 20 Hz for natural feature tracking from textured planar targets on current-generation phones. We use an approach based on heavily modified state-of-the-art feature descriptors, namely SIFT and Ferns. While SIFT is known to be a strong but computationally expensive feature descriptor, Ferns classification is fast but requires large amounts of memory. This renders both original designs unsuitable for mobile phones. We give detailed descriptions of how we modified both approaches to make them suitable for mobile phones. We present evaluations of robustness and performance on various devices, and finally discuss their appropriateness for augmented reality applications.
Christoph Bichlmeier, B. Ockert, S. Heining, A. Ahmadi, Nassir Navab. "Stepping into the operating theater: ARAV — Augmented Reality Aided Vertebroplasty." 2008 7th IEEE/ACM International Symposium on Mixed and Augmented Reality (ISMAR). DOI: 10.1109/ISMAR.2008.4637348

Abstract: Augmented reality (AR) for preoperative diagnostics and planning, intraoperative navigation, and postoperative follow-up examination has been a topic of intensive research over the last two decades. However, clinical studies showing AR technology integrated into the real clinical environment and workflow are still rare. The incorporation of an AR system as a standard tool into the real clinical workflow has not been presented so far. This paper reports on the strategies and intermediate results of the ARAV (Augmented Reality Aided Vertebroplasty) project, which was initiated to make an AR system, based on a stereo video see-through head-mounted display, permanently available in the operating room (OR).
J. Quarles, S. Lampotang, I. Fischler, P. Fishwick, Benjamin C. Lok. "Collocated AAR: Augmenting After Action Review with Mixed Reality." 2008 7th IEEE/ACM International Symposium on Mixed and Augmented Reality (ISMAR). DOI: 10.1109/ISMAR.2008.4637335

Abstract: This paper proposes collocated after action review (AAR) of training experiences. Through mixed reality (MR), collocated AAR allows users to review past training experiences in situ with the user's current, real-world experience. MR enables a user-controlled egocentric viewpoint, a visual overlay of virtual information, and playback of recorded training experiences collocated with the user's current experience. Collocated AAR presents novel challenges for MR, such as collocating time, interactions, and visualizations of previous and current experiences. We created a collocated AAR system for anesthesia education, the augmented anesthesia machine visualization and interactive debriefing system (AAMVID). The system was evaluated in two studies by students (n=19) and educators (n=3). The results demonstrate how collocated AAR systems such as AAMVID can (1) effectively direct student attention and interaction during AAR and (2) provide novel visualizations of aggregate student performance and insight into student understanding for educators.
{"title":"Compositing for small cameras","authors":"Georg S. W. Klein, D. W. Murray","doi":"10.1109/ISMAR.2008.4637324","DOIUrl":"https://doi.org/10.1109/ISMAR.2008.4637324","url":null,"abstract":"To achieve a realistic integration of virtual and real imagery in video see-through augmented reality, the rendered images should have a similar appearance and quality to those captured by the video camera. This paper describes a compositing method which models the artefacts produced by a small low-cost camera, and adds these effects to an ideal pinhole image produced by conventional rendering methods. We attempt to model and simulate each step of the imaging process, including distortions, chromatic aberrations, blur, Bayer masking, noise and colour-space compression, all while requiring only an RGBA image and an estimate of camera velocity as inputs.","PeriodicalId":168134,"journal":{"name":"2008 7th IEEE/ACM International Symposium on Mixed and Augmented Reality","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124373873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmented reality in-situ 3D model menu for outdoors","authors":"Thuong N. Hoang, B. Thomas","doi":"10.1109/ISMAR.2008.4637358","DOIUrl":"https://doi.org/10.1109/ISMAR.2008.4637358","url":null,"abstract":"We present a design and implementation of an in-situation menu system for loading and visualising 3D models in a physical world context. The menu system uses 3D objects as menu items, and the whole menu is placed within the context of the augmented environment. The use of 3D objects supports the visualisation and placement of 3D models into the augmented world. The menu system employs techniques for the placement of 3D models in two relative coordinate systems: head relative and world relative.","PeriodicalId":168134,"journal":{"name":"2008 7th IEEE/ACM International Symposium on Mixed and Augmented Reality","volume":"66 11","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134127704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Vesp’R: design and evaluation of a handheld AR device","authors":"Eduardo Veas, E. Kruijff","doi":"10.1109/ISMAR.2008.4637322","DOIUrl":"https://doi.org/10.1109/ISMAR.2008.4637322","url":null,"abstract":"This paper focuses on the design of devices for handheld spatial interaction. In particular, it addresses the requirements and construction of a new platform for interactive AR, described from an ergonomics stance, prioritizing human factors of spatial interaction. The result is a multi-configurable platform for spatial interaction, evaluated in two AR application scenarios. The user tests validate the design with regards to grip, weight balance and control allocation, and provide new insights on the human factors involved in handheld spatial interaction.","PeriodicalId":168134,"journal":{"name":"2008 7th IEEE/ACM International Symposium on Mixed and Augmented Reality","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132960667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Supporting order picking with Augmented Reality","authors":"B. Schwerdtfeger, G. Klinker","doi":"10.1109/ISMAR.2008.4637331","DOIUrl":"https://doi.org/10.1109/ISMAR.2008.4637331","url":null,"abstract":"We report on recent progress in the iterative process of exploring, evaluating and refining Augmented Reality-based methods to support the order picking process. We present our findings from three user studies and from demonstrations at several exhibitions. The resulting setup is a combined visualization to precisely and efficiently guide the user, even if the augmentation is not always in the field of view of the HMD.","PeriodicalId":168134,"journal":{"name":"2008 7th IEEE/ACM International Symposium on Mixed and Augmented Reality","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134322607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"OutlinAR: an assisted interactive model building system with reduced computational effort","authors":"P. Bunnun, W. Mayol-Cuevas","doi":"10.1109/ISMAR.2008.4637325","DOIUrl":"https://doi.org/10.1109/ISMAR.2008.4637325","url":null,"abstract":"This paper presents a system that allows online building of 3D wireframe models through a combination of user interaction and automated methods from a handheld camera-mouse. Crucially, the model being built is used to concurrently compute camera pose, permitting extendable tracking while enabling the user to edit the model interactively. In contrast to other model building methods that are either off-line and/or automated but computationally intensive, the aim here is to have a system that has low computational requirements and that enables the user to define what is relevant (and what is not) at the time the model is being built. OutlinAR hardware is also developed which simply consists of the combination of a camera with a wide field of view lens and a wheeled computer mouse.","PeriodicalId":168134,"journal":{"name":"2008 7th IEEE/ACM International Symposium on Mixed and Augmented Reality","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123851598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using the marginalised particle filter for real-time visual-inertial sensor fusion","authors":"G. Bleser, D. Stricker","doi":"10.1109/ISMAR.2008.4637316","DOIUrl":"https://doi.org/10.1109/ISMAR.2008.4637316","url":null,"abstract":"The use of a particle filter (PF) for camera pose estimation is an ongoing topic in the robotics and computer vision community, especially since the FastSLAM algorithm has been utilised for simultaneous localisation and mapping (SLAM) applications with a single camera. The major problem in this context consists in the poor proposal distribution of the camera pose particles obtained from the weak motion model of a camera moved freely in 3D space. While the FastSLAM 2.0 extension is one possibility to improve the proposal distribution, this paper addresses the question of how to use measurements from low-cost inertial sensors (gyroscopes and accelerometers) to compensate for the missing control information. However, the integration of inertial data requires the additional estimation of sensor biases, velocities and potentially accelerations, resulting in a state dimension, which is not manageable by a standard PF. Therefore, the contribution of this paper consists in developing a real-time capable sensor fusion strategy based upon the marginalised particle filter (MPF) framework. The performance of the proposed strategy is evaluated in combination with a marker-based tracking system and results from a comparison with previous visual-inertial fusion strategies based upon the extended Kalman filter (EKF), the standard PF and the MPF are presented.","PeriodicalId":168134,"journal":{"name":"2008 7th IEEE/ACM International Symposium on Mixed and Augmented Reality","volume":"29 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125861657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An evaluation of graphical context when the graphics are outside of the task area","authors":"C. M. Robertson, B. MacIntyre, B. Walker","doi":"10.1109/ISMAR.2008.4637328","DOIUrl":"https://doi.org/10.1109/ISMAR.2008.4637328","url":null,"abstract":"An ongoing research problem in Augmented Reality (AR) is to improve tracking and display technology in order to minimize registration errors. However, perfect registration is not always necessary for users to understand the intent of an augmentation. This paper describes the results of an experiment to evaluate the effects of graphical context in a Lego block placement task when the graphics are located outside of the task area. Four conditions were compared: fully registered AR; non-registered AR; a heads-up display (HUD) with the graphics always visible in the field of view; and a HUD with the graphics not always visible in the field of view. The results of this experiment indicated that registered AR outperforms both non-registered AR and graphics displayed on a HUD. The results also indicated that non-registered AR does not offer any significant performance advantages over a HUD, but is rated as less intrusive and can keep non-registered graphics from cluttering the task space.","PeriodicalId":168134,"journal":{"name":"2008 7th IEEE/ACM International Symposium on Mixed and Augmented Reality","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129821751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}