2015 IEEE International Symposium on Mixed and Augmented Reality: Latest Publications

[POSTER] RGB-D/C-arm Calibration and Application in Medical Augmented Reality
2015 IEEE International Symposium on Mixed and Augmented Reality | Pub Date: 2015-09-29 | DOI: 10.1109/ISMAR.2015.31
X. Wang, S. Habert, Meng Ma, C. Huang, P. Fallavollita, N. Navab
{"title":"[POSTER] RGB-D/C-arm Calibration and Application in Medical Augmented Reality","authors":"X. Wang, S. Habert, Meng Ma, C. Huang, P. Fallavollita, N. Navab","doi":"10.1109/ISMAR.2015.31","DOIUrl":"https://doi.org/10.1109/ISMAR.2015.31","url":null,"abstract":"Calibration and registration are the first steps for augmented reality and mixed reality applications. In the medical field, the calibration between an RGB-D camera and a mobile C-arm fluoroscope is a new topic which introduces challenges. In this paper, we propose a precise 3D/2D calibration method to achieve a video augmented fluoroscope. With the design of a suitable calibration phantom for RGB-D/C-arm calibration, we calculate the projection matrix from the depth camera coordinates to the X-ray image. Through a comparison experiment by combining different steps leading to the calibration, we evaluate the effect of every step of our calibration process. Results demonstrated that we obtain a calibration RMS error of 0.54±1.40 mm which is promising for surgical applications. We conclude this paper by showcasing two clinical applications. One is a markerless registration application, the other is an RGB-D camera augmented mobile C-arm visualization.","PeriodicalId":240196,"journal":{"name":"2015 IEEE International Symposium on Mixed and Augmented Reality","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124708169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
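The 3D/2D calibration described in the entry above reduces to estimating a 3×4 projection matrix that maps phantom points in depth-camera coordinates to their detected X-ray pixels. The paper's phantom design and pipeline are not reproduced here; the sketch below only illustrates that core estimation step with a standard Direct Linear Transform (DLT), and the pts3d/pts2d arrays are hypothetical stand-ins for the phantom's fiducial correspondences.

```python
import numpy as np

def estimate_projection_dlt(pts3d, pts2d):
    """Estimate a 3x4 projection matrix P via the Direct Linear Transform.

    pts3d: (N, 3) points in depth-camera coordinates.
    pts2d: (N, 2) corresponding X-ray image pixels. Requires N >= 6.
    """
    assert len(pts3d) == len(pts2d) and len(pts3d) >= 6
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # P (flattened) is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    return Vt[-1].reshape(3, 4)

def reprojection_rms(P, pts3d, pts2d):
    """RMS reprojection error in pixels (the paper reports its RMS error in mm)."""
    homog = np.hstack([pts3d, np.ones((len(pts3d), 1))])
    proj = (P @ homog.T).T
    uv = proj[:, :2] / proj[:, 2:3]      # perspective divide
    return np.sqrt(np.mean(np.sum((uv - pts2d) ** 2, axis=1)))
```

With six or more non-degenerate correspondences the DLT is solvable; the RMS reprojection error plays the role of the calibration error the authors evaluate step by step.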
Augmented Reality Scout: Joint Unaided-Eye and Telescopic-Zoom System for Immersive Team Training
2015 IEEE International Symposium on Mixed and Augmented Reality | Pub Date: 2015-09-29 | DOI: 10.1109/ISMAR.2015.11
T. Oskiper, Mikhail Sizintsev, Vlad Branzoi, S. Samarasekera, Rakesh Kumar
{"title":"Augmented Reality Scout: Joint Unaided-Eye and Telescopic-Zoom System for Immersive Team Training","authors":"T. Oskiper, Mikhail Sizintsev, Vlad Branzoi, S. Samarasekera, Rakesh Kumar","doi":"10.1109/ISMAR.2015.11","DOIUrl":"https://doi.org/10.1109/ISMAR.2015.11","url":null,"abstract":"In this paper we present a dual, wide area, collaborative augmented reality (AR) system that consists of standard live view augmentation, e.g., from helmet, and zoomed-in view augmentation, e.g., from binoculars. The proposed advanced scouting capability allows long range high precision augmentation of live unaided and zoomed-in imagery with aerial and terrain based synthetic objects, vehicles, people and effects. The inserted objects must appear stable in the display and not jitter or drift as the user moves around and examines the scene. The AR insertions for the binocs must work instantly when they are picked up anywhere as the user moves around. The design of both AR modules is based on using two different cameras with wide and narrow field of view (FoV) lenses. The wide FoV gives context and enables the recovery of location and orientation of the prop in 6 degrees of freedom (DoF) much more robustly, whereas the narrow FoV is used for the actual augmentation and increased precision in tracking. Furthermore, narrow camera in unaided eye and wide camera on the binoculars are jointly used for global yaw (heading) correction. We present our navigation algorithms using monocular cameras in combination with IMU and GPS in an Extended Kalman Filter (EKF) framework to obtain robust and real-time pose estimation for precise augmentation and cooperative tracking.","PeriodicalId":240196,"journal":{"name":"2015 IEEE International Symposium on Mixed and Augmented Reality","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129902313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
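The navigation back end in the entry above is an Extended Kalman Filter fusing monocular vision, IMU, and GPS. The authors' actual 6-DoF state design is richer than what fits in this listing; the following is a minimal predict/update skeleton under a constant-velocity model with position-only (GPS-like) measurements, shown only to make the EKF structure concrete.

```python
import numpy as np

class SimpleEKF:
    """Minimal EKF skeleton: constant-velocity motion, position-only updates.

    A simplified stand-in for the paper's vision/IMU/GPS filter, which
    estimates full 6-DoF pose; here the state is just [x, y, z, vx, vy, vz].
    """

    def __init__(self, x0, P0, q, r):
        self.x = np.asarray(x0, float)    # state estimate (6,)
        self.P = np.asarray(P0, float)    # state covariance (6, 6)
        self.q = q                        # process noise scale
        self.R = r * np.eye(3)            # measurement noise, e.g., GPS position

    def predict(self, dt):
        F = np.eye(6)
        F[:3, 3:] = dt * np.eye(3)        # position += velocity * dt
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.q * dt * np.eye(6)

    def update(self, z):
        H = np.hstack([np.eye(3), np.zeros((3, 3))])   # observe position only
        y = z - H @ self.x                             # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)            # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ H) @ self.P
```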
[POSTER] Abecedary Tracking and Mapping: A Toolkit for Tracking Competitions
2015 IEEE International Symposium on Mixed and Augmented Reality | Pub Date: 2015-09-29 | DOI: 10.1109/ISMAR.2015.63
Hideaki Uchiyama, Takafumi Taketomi, Sei Ikeda, J. P. Lima
{"title":"[POSTER] Abecedary Tracking and Mapping: A Toolkit for Tracking Competitions","authors":"Hideaki Uchiyama, Takafumi Taketomi, Sei Ikeda, J. P. Lima","doi":"10.1109/ISMAR.2015.63","DOIUrl":"https://doi.org/10.1109/ISMAR.2015.63","url":null,"abstract":"This paper introduces a toolkit with camera calibration, monocular visual Simultaneous Localization and Mapping (vSLAM) and registration with a calibration marker. With the toolkit, users can perform the whole procedure of the ISMAR on-site tracking competition in 2015. Since the source code is designed to be well-structured and highly-readable, users can easily install and modify the toolkit. By providing the toolkit, we encourage beginners to learn tracking techniques and to participate in the competition.","PeriodicalId":240196,"journal":{"name":"2015 IEEE International Symposium on Mixed and Augmented Reality","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124429003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
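The toolkit bundles camera calibration alongside vSLAM and marker-based registration; its own source should be consulted for the real implementation. As a rough equivalent of the calibration stage only, a standard OpenCV checkerboard calibration looks like the sketch below (the board geometry and the calib/*.png image folder are assumptions):

```python
import glob
import cv2
import numpy as np

# Checkerboard geometry (inner corners); adjust to the printed target.
cols, rows, square = 9, 6, 0.025          # hypothetical 9x6 board, 25 mm squares

# 3D corner positions on the board plane (Z = 0).
objp = np.zeros((rows * cols, 3), np.float32)
objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib/*.png"):     # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, (cols, rows))
    if found:
        # Refine corner locations to sub-pixel accuracy.
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]           # (width, height)

rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print("RMS reprojection error:", rms)
print("Intrinsics K:\n", K)
```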
[POSTER] 2D-3D Co-segmentation for AR-based Remote Collaboration
2015 IEEE International Symposium on Mixed and Augmented Reality | Pub Date: 2015-09-29 | DOI: 10.1109/ISMAR.2015.56
Kuo-Chin Lien, B. Nuernberger, M. Turk, Tobias Höllerer
{"title":"[POSTER] 2D-3D Co-segmentation for AR-based Remote Collaboration","authors":"Kuo-Chin Lien, B. Nuernberger, M. Turk, Tobias Höllerer","doi":"10.1109/ISMAR.2015.56","DOIUrl":"https://doi.org/10.1109/ISMAR.2015.56","url":null,"abstract":"In Augmented Reality (AR) based remote collaboration, a remote user can draw a 2D annotation that emphasizes an object of interest to guide a local user accomplishing a task. This annotation is typically performed only once and then sticks to the selected object in the local user's view, independent of his or her camera movement. In this paper, we present an algorithm to segment the selected object, including its occluded surfaces, such that the 2D selection can be appropriately interpreted in 3D and rendered as a useful AR annotation even when the local user moves and significantly changes the viewpoint.","PeriodicalId":240196,"journal":{"name":"2015 IEEE International Symposium on Mixed and Augmented Reality","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115340313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
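The step of interpreting a 2D selection in 3D can be pictured as lifting the annotation onto the reconstructed point cloud. The paper's contribution is the harder part (co-segmenting the object, including its occluded surfaces); the naive lift below, with hypothetical intrinsics and a rectangular selection, grabs every point in the selection frustum, foreground and background alike, which is exactly the ambiguity the proposed co-segmentation resolves.

```python
import numpy as np

def select_points_in_annotation(points, K, R, t, rect):
    """Indices of 3D points whose projections fall inside a 2D annotation.

    points: (N, 3) world-space point cloud; K: 3x3 camera intrinsics;
    R, t: world-to-camera rotation and translation;
    rect: annotation rectangle (u_min, v_min, u_max, v_max) in pixels.
    """
    cam = (R @ points.T).T + t           # world -> camera coordinates
    in_front = cam[:, 2] > 0             # ignore points behind the camera
    proj = (K @ cam.T).T
    uv = proj[:, :2] / proj[:, 2:3]      # perspective divide
    u_min, v_min, u_max, v_max = rect
    inside = ((uv[:, 0] >= u_min) & (uv[:, 0] <= u_max) &
              (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max))
    return np.where(in_front & inside)[0]
```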
[POSTER] Rubix: Dynamic Spatial Augmented Reality by Extraction of Plane Regions with a RGB-D Camera
2015 IEEE International Symposium on Mixed and Augmented Reality | Pub Date: 2015-09-29 | DOI: 10.1109/ISMAR.2015.43
Masayuki Sano, Kazuki Matsumoto, B. Thomas, H. Saito
{"title":"[POSTER] Rubix: Dynamic Spatial Augmented Reality by Extraction of Plane Regions with a RGB-D Camera","authors":"Masayuki Sano, Kazuki Matsumoto, B. Thomas, H. Saito","doi":"10.1109/ISMAR.2015.43","DOIUrl":"https://doi.org/10.1109/ISMAR.2015.43","url":null,"abstract":"Dynamic spatial augmented reality requires accurate real-time 3D pose information of the physical objects that are to be projected onto. Previous depth-based methods for tracking objects required strong features to enable recognition; making it difficult to estimate an accurate 6DOF pose for physical objects with a small set of recognizable features (such as a non-textured cube). We propose a more accurate method with fewer limitations for the pose estimation of a tangible object that has known planar faces and using depth data from an RGB-D camera only. In this paper, the physical object's shape is limited to cubes of different sizes. We apply this new tracking method to achieve dynamic projections onto these cubes. In our method, 3D points from an RGB-D camera are divided into a cluster of planar regions, and the point cloud inside each face of the object is fitted to an already-known geometric model of a cube. With the 6DOF pose of the physical object, SAR generated imagery is then projected correctly onto the physical object. The 6DOF tracking is designed to support tangible interactions with the physical object. We implemented example interactive applications with one or multiple cubes to show the capability of our method.","PeriodicalId":240196,"journal":{"name":"2015 IEEE International Symposium on Mixed and Augmented Reality","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115710724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
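Dividing RGB-D points into "a cluster of planar regions" is commonly done with repeated RANSAC plane fitting; the sketch below shows a single fit under that assumption (it is not the paper's exact segmentation, and the inlier threshold is arbitrary). Removing each plane's inliers and re-running yields the set of planes that can then be matched against the known cube model.

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.01, rng=None):
    """Fit one dominant plane to a point cloud with RANSAC.

    points: (N, 3) array. Returns ((normal, d), inlier_indices) with the
    plane defined by normal . p + d = 0.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.array([], dtype=int)
    best_model = None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                   # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ p0
        dist = np.abs(points @ normal + d)
        inliers = np.where(dist < threshold)[0]
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers
```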
[POSTER] A Comprehensive Interaction Model for AR Systems
2015 IEEE International Symposium on Mixed and Augmented Reality | Pub Date: 2015-09-29 | DOI: 10.1109/ISMAR.2015.32
Mikel Salazar, Carlos Laorden, P. G. Bringas
{"title":"[POSTER] A Comprehensive Interaction Model for AR Systems","authors":"Mikel Salazar, Carlos Laorden, P. G. Bringas","doi":"10.1109/ISMAR.2015.32","DOIUrl":"https://doi.org/10.1109/ISMAR.2015.32","url":null,"abstract":"In this extended poster, we present a model that aims to provide developers with an extensive and extensible set of context-aware interaction techniques, greatly facilitating the creation of meaningful AR-based user experiences. To provide a complete view of the model, we detail the different aspects that form its theoretical foundations, while also discussing several considerations for its correct implementation.","PeriodicalId":240196,"journal":{"name":"2015 IEEE International Symposium on Mixed and Augmented Reality","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130721916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
[POSTER] Endoscopic Image Augmentation Reflecting Shape Changes in Cutting Procedures
2015 IEEE International Symposium on Mixed and Augmented Reality | Pub Date: 2015-09-29 | DOI: 10.1109/ISMAR.2015.52
M. Nakao, Shota Endo, Keiho Imanishi, T. Matsuda
{"title":"[POSTER] Endoscopic Image Augmentation Reflecting Shape Changes in Cutting Procedures","authors":"M. Nakao, Shota Endo, Keiho Imanishi, T. Matsuda","doi":"10.1109/ISMAR.2015.52","DOIUrl":"https://doi.org/10.1109/ISMAR.2015.52","url":null,"abstract":"This paper introduces a concept of endoscopic image augmentation that overlays shape changes to support cutting procedures. This framework handles the history of measured drill tip's location as a volume label, and visualizes the remains to be cut overlaid on the endoscopic image in real time. We performed a cutting experiment, and the efficacy of the cutting aid was evaluated among shape similarity, total moved distance of a cutting tool, and the required cutting time. The results of the experiments showed that cutting performance was significantly improved by the proposed framework.","PeriodicalId":240196,"journal":{"name":"2015 IEEE International Symposium on Mixed and Augmented Reality","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125382856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
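The "history of the measured drill tip's location as a volume label" can be modeled as a boolean voxel grid updated from tracked tip positions; the remains to be cut are then the planned region minus that history. A minimal sketch, in which the grid geometry, drill radius, and planned cutting mask are all assumptions:

```python
import numpy as np

class CutVolumeLabel:
    """Voxel label recording where the drill tip has been.

    Voxels inside the planned cutting region that the tip has not yet
    visited are the "remains to be cut" overlaid on the endoscopic image.
    """

    def __init__(self, shape, origin, voxel_size, planned_mask):
        self.visited = np.zeros(shape, bool)
        self.origin = np.asarray(origin, float)
        self.voxel = voxel_size
        self.planned = planned_mask            # bool array, same shape

    def record_tip(self, tip_xyz, radius):
        """Mark all voxels within `radius` of the measured tip position."""
        tip_xyz = np.asarray(tip_xyz, float)
        lo = np.floor((tip_xyz - radius - self.origin) / self.voxel).astype(int)
        hi = np.ceil((tip_xyz + radius - self.origin) / self.voxel).astype(int)
        lo = np.maximum(lo, 0)
        hi = np.minimum(hi, self.visited.shape)
        ii, jj, kk = np.mgrid[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
        centers = self.origin + (np.stack([ii, jj, kk], -1) + 0.5) * self.voxel
        hit = np.linalg.norm(centers - tip_xyz, axis=-1) <= radius
        self.visited[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] |= hit

    def remains(self):
        """Voxels still to be cut: planned region minus visited history."""
        return self.planned & ~self.visited
```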
Tiled Frustum Culling for Differential Rendering on Mobile Devices
2015 IEEE International Symposium on Mixed and Augmented Reality | Pub Date: 2015-09-29 | DOI: 10.1109/ISMAR.2015.13
K. Rohmer, Thorsten Grosch
{"title":"Tiled Frustum Culling for Differential Rendering on Mobile Devices","authors":"K. Rohmer, Thorsten Grosch","doi":"10.1109/ISMAR.2015.13","DOIUrl":"https://doi.org/10.1109/ISMAR.2015.13","url":null,"abstract":"Mobile devices are part of our everyday life and allow augmented reality (AR) with their integrated camera image. Recent research has shown that even photorealistic augmentations with consistent illumination are possible. A method, achieving this first, distributed lighting computations and the extraction of the important light sources. To reach real-time frame rates on a mobile device, the number of these extracted light sources must be low, limiting the scope of possible illumination scenarios and the quality of shadows. In this paper, we show how to reduce the computational cost per light using a combination of tile-based rendering and frustum culling techniques tailored for AR applications. Our approach runs entirely on the GPU and does not require any precomputation. Without reducing the displayed image quality, we achieve up to 2.2× speedup for typical AR scenarios.","PeriodicalId":240196,"journal":{"name":"2015 IEEE International Symposium on Mixed and Augmented Reality","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129824029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
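The paper performs the culling on the GPU against per-tile sub-frustums; the CPU sketch below shows the same idea reduced to screen space, conservatively testing each light's projected bounding circle against each tile rectangle (light positions and radii are assumed to be given):

```python
import numpy as np

def cull_lights_per_tile(lights_uv, lights_radius_px, width, height, tile=16):
    """Screen-space sketch of tiled light culling.

    lights_uv: (L, 2) projected light centers in pixels;
    lights_radius_px: (L,) conservative screen-space influence radii.
    Returns a dict mapping (tx, ty) -> list of light indices affecting
    that tile.
    """
    tiles_x = (width + tile - 1) // tile
    tiles_y = (height + tile - 1) // tile
    table = {}
    for ty in range(tiles_y):
        for tx in range(tiles_x):
            x0, y0 = tx * tile, ty * tile
            x1, y1 = min(x0 + tile, width), min(y0 + tile, height)
            hits = []
            for i, ((u, v), r) in enumerate(zip(lights_uv, lights_radius_px)):
                # Distance from the light center to the tile rectangle.
                dx = max(x0 - u, 0, u - x1)
                dy = max(y0 - v, 0, v - y1)
                if dx * dx + dy * dy <= r * r:
                    hits.append(i)
            table[(tx, ty)] = hits
    return table
```

Shading then loops only over table[(tx, ty)] for each pixel's tile, which is where the per-light cost reduction comes from.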
[POSTER] Photo Billboarding: A Simple Method to Provide Clues that Relate Camera Views and a 2D Map for Mobile Pedestrian Navigation
2015 IEEE International Symposium on Mixed and Augmented Reality | Pub Date: 2015-09-29 | DOI: 10.1109/ISMAR.2015.69
J. Watanabe, S. Kagami, K. Hashimoto
{"title":"[POSTER] Photo Billboarding: A Simple Method to Provide Clues that Relate Camera Views and a 2D Map for Mobile Pedestrian Navigation","authors":"J. Watanabe, S. Kagami, K. Hashimoto","doi":"10.1109/ISMAR.2015.69","DOIUrl":"https://doi.org/10.1109/ISMAR.2015.69","url":null,"abstract":"This paper describes a mobile pedestrian navigation system that provides users with clues that help understanding spatial relationship between mobile camera views and a 2D map. The proposed method draws on the map upright billboards that correspond to the basal planes of past and current viewing frustums of the camera. The user can take photographs of arbitrary landmarks on the way to build billboards with photographs corresponding to them on the map. Subjective evaluation by eight participants showed that the proposed method offers improved experiences over navigation using a standard 2D map.","PeriodicalId":240196,"journal":{"name":"2015 IEEE International Symposium on Mixed and Augmented Reality","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129858287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
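Each billboard corresponds to the basal plane of a viewing frustum, i.e., the footprint of the camera view on the ground. Assuming known intrinsics K and a camera-to-world pose (R, t), that footprint can be computed by intersecting the frustum's corner rays with the ground plane; this is a geometric illustration, not the paper's implementation:

```python
import numpy as np

def frustum_ground_footprint(K, R, t, width, height, z_ground=0.0):
    """Intersect the camera frustum's corner rays with the ground plane.

    K: 3x3 intrinsics; R: camera-to-world rotation; t: camera position.
    Returns up to four (x, y) map points: the basal quad that a
    photo-billboarding-style method could draw on the 2D map.
    """
    R, t = np.asarray(R, float), np.asarray(t, float)
    corners_px = np.array([[0, 0], [width, 0], [width, height], [0, height]], float)
    Kinv = np.linalg.inv(K)
    footprint = []
    for u, v in corners_px:
        ray_cam = Kinv @ np.array([u, v, 1.0])   # corner ray in camera frame
        ray_world = R @ ray_cam                  # rotate into world frame
        if abs(ray_world[2]) < 1e-9:
            continue                             # ray parallel to the ground
        s = (z_ground - t[2]) / ray_world[2]     # ray parameter at the plane
        if s > 0:                                # keep hits in front of camera
            p = t + s * ray_world
            footprint.append(p[:2])
    return np.array(footprint)
```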
[POSTER] Automatic Visual Feedback from Multiple Views for Motor Learning
2015 IEEE International Symposium on Mixed and Augmented Reality | Pub Date: 2015-09-29 | DOI: 10.1109/ISMAR.2015.70
Dan Mikami, Mariko Isogawa, Kosuke Takahashi, Akira Kojima
{"title":"[POSTER] Automatic Visual Feedback from Multiple Views for Motor Learning","authors":"Dan Mikami, Mariko Isogawa, Kosuke Takahashi, Akira Kojima","doi":"10.1109/ISMAR.2015.70","DOIUrl":"https://doi.org/10.1109/ISMAR.2015.70","url":null,"abstract":"A system providing visual feedback of a trainee's motions for effectively enhancing motor learning is presented. It provides feedback in synchronization with a reference motion from multiple view angles automatically with only a few seconds delay. Because the feedback is provided automatically, a trainee can obtain it without performing any operations while the memory of the motion is still clear. By employing features with low computational cost, the system achieves synchronized video feedback with four cameras connected to a consumer tablet PC.","PeriodicalId":240196,"journal":{"name":"2015 IEEE International Symposium on Mixed and Augmented Reality","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130128622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
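Synchronizing the trainee's video with the reference "by employing features with low computational cost" suggests aligning cheap per-frame signals; one plausible realization (an assumption, not necessarily the authors' method) is normalized cross-correlation of such signals:

```python
import numpy as np

def estimate_lag(reference, trainee):
    """Estimate the frame offset that best aligns two per-frame signals.

    reference, trainee: 1D arrays of a cheap per-frame feature (e.g.,
    mean absolute frame difference). Returns the lag in frames to apply
    to the trainee stream for synchronized playback; 0 means aligned.
    """
    a = (reference - reference.mean()) / (reference.std() + 1e-9)
    b = (trainee - trainee.mean()) / (trainee.std() + 1e-9)
    corr = np.correlate(a, b, mode="full")   # scores for all relative shifts
    return corr.argmax() - (len(b) - 1)
```

The estimated lag would then be applied when playing the trainee stream next to the reference views.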