{"title":"MTMR: A conceptual interior design framework integrating Mixed Reality with the Multi-Touch tabletop interface","authors":"Dong Wei, S. Zhou, Du Xie","doi":"10.1109/ISMAR.2010.5643606","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643606","url":null,"abstract":"This paper introduces a conceptual interior design framework - Multi-Touch Mixed Reality (MTMR), which integrates mixed reality with the multi-touch tabletop interface, to provide an intuitive and efficient interface for collaborative design and an augmented 3D view to users at the same time. Under this framework, multiple designers can carry out design work simultaneously on the top view displayed on the tabletop, while live video of the ongoing design work is captured and augmented by overlaying virtual 3D furniture models to their 2D virtual counterparts, and shown on a vertical screen in front of the tabletop. Meanwhile, the remote client's camera view of the physical room is augmented with the interior design layout in real time, that is, as the designers place, move, and modify the virtual furniture models on the tabletop, the client sees the corresponding life-size 3D virtual furniture models residing, moving, and changing in the physical room through the camera view on his/her screen. By adopting MTMR, which we argue may also apply to other kinds of collaborative work, the designers can expect a good working experience in terms of naturalness and intuitiveness, while the client can be involved in the design process and view the design result without moving around heavy furniture. By presenting MTMR, we hope to provide reliable and precise freehand interactions to mixed reality systems, with multi-touch inputs on tabletop interfaces.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114954617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Various tangible devices suitable for mixed reality interactions","authors":"Taichi Yoshida, M. Tsukadaira, Asako Kimura, F. Shibata, H. Tamura","doi":"10.1109/ISMAR.2010.5643608","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643608","url":null,"abstract":"In this paper, we present various novel tangible devices suitable for interactions in a mixed reality (MR) environment. They are aimed at making the best use of the features of MR, which allows users to touch or handle both virtual and physical objects. Furthermore, we consider usability and intuitiveness as important characteristics of the interface, and thus designed our devices to imitate traditional tools and help users understand their use.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"120 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115554469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Camera motion tracking in a dynamic scene","authors":"Jung-Jae Yu, Jae-Hean Kim","doi":"10.1109/ISMAR.2010.5643609","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643609","url":null,"abstract":"To insert a virtual object into a real image, the position of the object must appear seamlessly as the camera moves. This requires camera tracking with estimations of all internal and external parameters in each frame with an adequate degree of stability to ensure negligible visible drift between the real and virtual elements. In the post production of film, matchmoving software based on SfM is typically used in the camera tracking process. However, most of this type of software fails when attempting to track the camera in a dynamic scene in which a moving foreground object such as a real actor occupies a large part of the background. Therefore, this study proposes a camera tracking system that uses an auxiliary camera to estimate the motion of the main shooting camera and 3D position of background features in a dynamic scene. A novel reconstruction and connection method was developed for feature tracks that are occluded by a foreground object. Experimentation with a 2K sequence demonstrated the feasibility of the proposed method.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128463643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Smart Glasses: An open environment for AR apps","authors":"Martin Kurze, Axel Roselius","doi":"10.1109/ISMAR.2010.5643622","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643622","url":null,"abstract":"We present an architecture [fig. 1] and runtime environment for mobile Augmented Reality applications. The architecture is based on a plugin-concept on the device, a set of basic functionalities available for all apps and a cloud-oriented processing approach. As a first running sample app, we show a face recognition service running on amobile phone, conventional wearable displays and upcoming see-through - goggles. We invite interested 3rd parties to try out the environment, face recognition app and platform.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124006803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Digital Diorama system for museum exhibition","authors":"O. Hayashi, Kazuhiro Kasada, Takuji Narumi, T. Tanikawa, M. Hirose","doi":"10.1109/ISMAR.2010.5643582","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643582","url":null,"abstract":"In this paper, we proposed the Digital Diorama system to convey background information vividly. The system superimposes computer generated diorama scene reconstructed from related image/video materials on real exhibits. In order to switch and superimpose real exhibits and past photos seamlessly, we implement a matching system for estimating the camera position where photos are taken. By applying this subsystem to 26 past photos about the steam locomotive exhibit, we succeeded in estimating their camera position. Thus, we implement and install a prototype system at estimated position to superimposing virtual scene and real exhibit in the Railway Museum.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126468848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Positioning, tracking and mapping for outdoor augmentation","authors":"J. Karlekar, S. Zhou, W. Lu, Loh Zhi Chang, Y. Nakayama, Daniel Hii","doi":"10.1109/ISMAR.2010.5643567","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643567","url":null,"abstract":"This paper presents a novel approach for user positioning, robust tracking and online 3D mapping for outdoor augmented reality applications. As coarse user pose obtained from GPS and orientation sensors is not sufficient for augmented reality applications, sub-meter accurate user pose is then estimated by a one-step silhouette matching approach. Silhouette matching of the rendered 3D model and camera data is carried out with shape context descriptors as they are invariant to translation, scale and rotational errors, giving rise to a non-iterative registration approach. Once the user is correctly positioned, further tracking is carried out with camera data alone. Drifts associated with vision based approaches are minimized by combining different feature modalities. Robust visual tracking is maintained by fusing frame-to-frame and model-to-frame feature matches. Frame-to-frame tracking is accomplished with corner matching while edges are used for model-to-frame registration. Results from individual feature tracker are fused using a pose estimate obtained from an extended Kalman filter (EKF) and a weighted M-estimator. In scenarios where dense 3D models of the environment are not available, online 3D incremental mapping and tracking is proposed to track the user in unprepared environments. Incremental mapping prepares the 3D point cloud of the outdoor environment for tracking.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121795360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A practical multi-viewer tabletop autostereoscopic display","authors":"Gu Ye, A. State, H. Fuchs","doi":"10.1109/ISMAR.2010.5643563","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643563","url":null,"abstract":"This paper introduces a multi-user autostereoscopic tabletop display and its associated real-time rendering methods. Tabletop displays that support both multiple viewers and autostereoscopy have been extremely difficult to construct. Our new system is inspired by the “Random Hole Display” design [11] that modified the pattern of openings in a barrier mounted in front of a flat panel display from thin slits to a dense pattern of tiny, pseudo-randomly placed holes. This allows viewers anywhere in front of the display to see a different subset of the display's native pixels through the random-hole screen. However, a fraction of the visible pixels will be observable by more than a single viewer. Thus the main challenge is handling these “conflicting” pixels, which ideally must show different colors to each viewer. We introduce several solutions to this problem and describe in detail the current method of choice, a combination of color blending and approximate error diffusion, performing in real time in our GPU-based implementation. The easily reproducible design uses a pattern film barrier affixed to the display by means of a transparent polycarbonate layer spacer. We use a commercial optical tracker for viewers' locations and synthesize the appropriate image (or a stereoscopic image pair) for each viewer. The system supports graceful degradation with increasing number of simultaneous views, and graceful improvement as the number of views decreases.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127869626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Task support system by displaying instructional video onto AR workspace","authors":"Michihiko Goto, Yuko Uematsu, H. Saito, S. Senda, A. Iketani","doi":"10.1109/ISMAR.2010.5643554","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643554","url":null,"abstract":"This paper presents an instructional support system based on augmented reality (AR). This system helps a user to work intuitively by overlaying visual information in the same way of a navigation system. In usual AR systems, the contents to be overlaid onto real space are created with 3D Computer Graphics. In most cases, such contents are newly created according to applications. However, there are many 2D videos that show how to take apart or build electric appliances and PCs, how to cook, etc. Therefore, our system employs such existing 2D videos as instructional videos. By transforming an instructional video to display, according to the user's view, and by overlaying the video onto the user's view space, the proposed system intuitively provides the user with visual guidance. In order to avoid the problem that the display of the instructional video and the user's view may be visually confused, we add various visual effects to the instructional video, such as transparency and enhancement of contours. By dividing the instructional video into sections according to the operations to be carried out in order to complete a certain task, we ensure that the user can interactively move to the next step in the instructional video after a certain operation is completed. Therefore, the user can carry on with the task at his/her own pace. In the usability test, users evaluated the use of the instructional video in our system through two tasks: a task involving building blocks and an origami task. As a result, we found that a user's visibility improves when the instructional video is transformed to display according to his/her view. Further, for the evaluation of visual effects, we can classify these effects according to the task and obtain the guideline for the use of our system as an instructional support system for performing various other tasks.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134508390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A precise controllable projection system for projected virtual characters and its calibration","authors":"Jochen Ehnes","doi":"10.1109/ISMAR.2010.5643577","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643577","url":null,"abstract":"In this paper we describe a system to project virtual characters that shall live with us in the same environment. In order to project the characters' visual representations onto room surfaces we use a controllable projector.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117047955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Differential Instant Radiosity for mixed reality","authors":"Martin Knecht, C. Traxler, O. Mattausch, W. Purgathofer, M. Wimmer","doi":"10.1109/ISMAR.2010.5643556","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643556","url":null,"abstract":"In this paper we present a novel plausible realistic rendering method for mixed reality systems, which is useful for many real life application scenarios, like architecture, product visualization or edutainment. To allow virtual objects to seamlessly blend into the real environment, the real lighting conditions and the mutual illumination effects between real and virtual objects must be considered, while maintaining interactive frame rates (20–30fps). The most important such effects are indirect illumination and shadows cast between real and virtual objects. Our approach combines Instant Radiosity and Differential Rendering. In contrast to some previous solutions, we only need to render the scene once in order to find the mutual effects of virtual and real scenes. The dynamic real illumination is derived from the image stream of a fish-eye lens camera. We describe a new method to assign virtual point lights to multiple primary light sources, which can be real or virtual. We use imperfect shadow maps for calculating illumination from virtual point lights and have significantly improved their accuracy by taking the surface normal of a shadow caster into account. Temporal coherence is exploited to reduce flickering artifacts. Our results show that the presented method highly improves the illusion in mixed reality applications and significantly diminishes the artificial look of virtual objects superimposed onto real scenes.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129746381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}