{"title":"A 2D-3D integrated interface for mobile robot control using omnidirectional images and 3D geometric models","authors":"K. Saitoh, Takashi Machida, K. Kiyokawa, H. Takemura","doi":"10.1109/ISMAR.2006.297810","DOIUrl":"https://doi.org/10.1109/ISMAR.2006.297810","url":null,"abstract":"This paper proposes a novel visualization and interaction technique for remote surveillance using both 2D and 3D scene data acquired by a mobile robot equipped with an omnidirectional camera and an omnidirectional laser range sensor. In a normal situation, telepresence with an egocentric-view is provided using high resolution omnidirectional live video on a hemispherical screen. As depth information of the remote environment is acquired, additional 3D information can be overlaid onto the 2D video image such as passable area and roughness of the terrain in a manner of video see-through augmented reality. A few functions to interact with the 3D environment through the 2D live video are provided, such as path-drawing and path-preview. Path-drawing function allows to plan a robot's path by simply specifying 3D points on the path on screen. Path- preview function provides a realistic image sequence seen from the planned path using a texture-mapped 3D geometric model in a manner of virtualized reality. In addition, a miniaturized 3D model is overlaid on the screen providing an exocentric view, which is a common technique in virtual reality. In this way, our technique allows an operator to recognize the remote place and navigate the robot intuitively by seamlessly using a variety of mixed reality techniques on a spectrum of Milgram's real-virtual continuum.","PeriodicalId":332844,"journal":{"name":"2006 IEEE/ACM International Symposium on Mixed and Augmented Reality","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121116165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effective control of a car driver's attention for visual and acoustic guidance towards the direction of imminent dangers","authors":"M. Tönnis, G. Klinker","doi":"10.1109/ISMAR.2006.297789","DOIUrl":"https://doi.org/10.1109/ISMAR.2006.297789","url":null,"abstract":"In cars, augmented reality is becoming an interesting means to enhance active safety in the driving task. Guiding a driver's attention to an imminent danger somewhere around the car is a potential application. In a research project with the automotive industry, we are exploring different approaches towards alerting drivers to such dangers. First results were presented last year. We have extended two of these approaches. One uses AR to visualize the source of danger in the driver's frame of reference while the other one presents information in a bird's eye schematic map. Our extensions were the incorporation of a real head-up display, improved visual perception and acoustic support. Both schemes were evaluated both with and without 3D encoded sound. This paper reports on a user test in which 24 participants provided objective and subjective measurements. The results indicate that the AR-based three-dimensional presentation scheme with and without sound support systematically outperforms the bird's eye schematic map.","PeriodicalId":332844,"journal":{"name":"2006 IEEE/ACM International Symposium on Mixed and Augmented Reality","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130595667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Notational-based prototyping of mixed interactions","authors":"Wafaa Abou Moussa, J. Jessel, E. Dubois","doi":"10.1109/ISMAR.2006.297818","DOIUrl":"https://doi.org/10.1109/ISMAR.2006.297818","url":null,"abstract":"Development of mixed reality systems is almost always following an ad-hoc process. The development cycle often turns out to be highly expensive and time consuming. This paper presents a new prototyping approach: a combination of model-based design and simulation based prototyping.","PeriodicalId":332844,"journal":{"name":"2006 IEEE/ACM International Symposium on Mixed and Augmented Reality","volume":"30 19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114527401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmented reality based on estimation of defocusing and motion blurring from captured images","authors":"B. Okumura, M. Kanbara, N. Yokoya","doi":"10.1109/ISMAR.2006.297817","DOIUrl":"https://doi.org/10.1109/ISMAR.2006.297817","url":null,"abstract":"Photometric registration is as important as geometric registration to generate a seamless augmented reality scene. Especially the difference in image quality between a real image and virtual objects caused by defocusing and motion blurring in capturing a real scene image easily exhibits the seam between real and virtual worlds. To avoid this problem in video see-through augmented reality, it is necessary to simulate the optical system of camera when virtual objects are rendered. This paper proposes an image composition method for video see-through augmented reality, which is based on defocusing and motion blurring estimation from the captured real image and rendering of virtual objects with blur effects. In experiments, the effectiveness of the proposed method is confirmed by comparing a real image with virtual objects rendered by the proposed method.","PeriodicalId":332844,"journal":{"name":"2006 IEEE/ACM International Symposium on Mixed and Augmented Reality","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115375493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Viewpoint stabilization for live collaborative video augmentations","authors":"Taehee Lee, Tobias Höllerer","doi":"10.1109/ISMAR.2006.297824","DOIUrl":"https://doi.org/10.1109/ISMAR.2006.297824","url":null,"abstract":"We present a method for stabilizing live video from a moving camera for the purpose of a tele-meeting, in which a participant with an AR view onto a shared canvas collaborates with a remote user. The AR view is established without markers and using no other tracking equipment than a head-worn camera. The remote user is allowed to directly annotate the local user's view in real time on a desktop or tablet PC. The planar homographies between the reference frame and the other following frames are maintained. In effect, both the local and remote participants can annotate the physical meeting space, the local AR user through physical interaction, the remote user through our stabilized video. When tracking is lost, the remote user can still continue annotating on a frozen video frame. We tested several small demo applications with this new form of transient AR collaboration that can be established easily, on a per need basis, and without complicated equipment or calibration requirements.","PeriodicalId":332844,"journal":{"name":"2006 IEEE/ACM International Symposium on Mixed and Augmented Reality","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134423136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A mobile markerless AR system for maintenance and repair","authors":"Juri Platonov, Tim Hauke Heibel, Peter Meier, Bert Grollmann","doi":"10.1109/ISMAR.2006.297800","DOIUrl":"https://doi.org/10.1109/ISMAR.2006.297800","url":null,"abstract":"We present a solution for AR based repair guidance. This solution covers software as well as hardware related issues. In particular we developed a markerless CAD based tracking system which can deal with different illumination conditions during the tracking stage, partial occlusions and rapid motion. The system is also able to automatically recover from occasional tracking failures. On the hardware side the system is based on an off the shelf notebook, a wireless mobile setup consisting of a wide-angle video camera and an analog video transmission system. This setup has been tested with a monocular full-color video-see-through HMD and additionally with a monochrome optical-see-through HMD. Our system underwent several extensive test series under real industrial conditions and proved to be useful for different maintenance and repair scenarios.","PeriodicalId":332844,"journal":{"name":"2006 IEEE/ACM International Symposium on Mixed and Augmented Reality","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124176640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive context-driven visualization tools for augmented reality","authors":"Erick Méndez, Denis Kalkofen, D. Schmalstieg","doi":"10.1109/ISMAR.2006.297816","DOIUrl":"https://doi.org/10.1109/ISMAR.2006.297816","url":null,"abstract":"In this article we present an interaction tool, based on the Magic Lenses technique, that allows a 3D scene to be affected dynamically given contextual information, for example, to support information filtering. We show how elements of a scene graph are grouped by context in addition to hierarchically, and, how this enables us to locally modify their rendering styles. This research has two major contributions, the use of context sensitivity with 3D Magic Lenses in a scene graph and the implementation of multiple volumetric 3D Magic Lenses for Augmented Reality setups. We have developed our tool for the Studierstube framework which allows us doing rapid prototyping of Virtual and Augmented Reality applications. Some application directions are shown throughout the paper. We compare our work with other methods, highlight strengths and weaknesses and finally discuss research directions for our work.","PeriodicalId":332844,"journal":{"name":"2006 IEEE/ACM International Symposium on Mixed and Augmented Reality","volume":"122 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123215797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance analysis of an outdoor augmented reality tracking system that relies upon a few mobile beacons","authors":"Ronald T. Azuma, H. Neely, M. Daily, John R. Leonard","doi":"10.1109/ISMAR.2006.297798","DOIUrl":"https://doi.org/10.1109/ISMAR.2006.297798","url":null,"abstract":"We describe and evaluate a new tracking concept for outdoor Augmented Reality. A few mobile beacons added to the environment correct errors in head-worn inertial and GPS sensors. We evaluate the accuracy through detailed simulation of many error sources. The most important parameters are the errors in measuring the beacon and user's head positions, and the geometric configuration of the beacons around the point to augment. Using Monte Carlo simulations, we identify combinations of beacon configurations and error parameters that meet a specified goal of 1 m net error at 100 m range.","PeriodicalId":332844,"journal":{"name":"2006 IEEE/ACM International Symposium on Mixed and Augmented Reality","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122355869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LightSense: enabling spatially aware handheld interaction devices","authors":"A. Olwal","doi":"10.1109/ISMAR.2006.297802","DOIUrl":"https://doi.org/10.1109/ISMAR.2006.297802","url":null,"abstract":"The vision of spatially aware handheld interaction devices has been hard to realize. The difficulties in solving the general tracking problem for small devices have been addressed by several research groups and examples of issues are performance, hardware availability and platform independency. We present LightSense, an approach that employs commercially available components to achieve robust tracking of cell phone LEDs, without any modifications to the device. Cell phones can thus be promoted to interaction and display devices in ubiquitous installations of systems such as the ones we present here. This could enable a new generation of spatially aware handheld interaction devices that would unobtrusively empower and assist us in our everyday tasks.","PeriodicalId":332844,"journal":{"name":"2006 IEEE/ACM International Symposium on Mixed and Augmented Reality","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117118667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tag detection algorithm for improving the instability problem of an augmented reality","authors":"Seok-Won Lee, Dong-Chul Kim, Do-Yoon Kim, T. Han","doi":"10.1109/ISMAR.2006.297828","DOIUrl":"https://doi.org/10.1109/ISMAR.2006.297828","url":null,"abstract":"Detection technology is a requirement for an Augmented Reality system. One of the problems with detection technology is the instability problem, which occurs when an obstacle occludes a tag while detecting the tag, and the augmented object suddenly disappears. We have proposed a corner detection algorithm to solve this instability problem. The key feature is that if the tag can recognize its position using its four corner cells despite the obstacle being present, then it can maintain its augmented object. We defined the corner case for all types of cases where the instability problem occurs in ARToolkit or ARTag. We have adapted our proposed algorithm to the corner case in ARToolkit, ARTag and ColorCode vision systems and have compared their false detection rates.","PeriodicalId":332844,"journal":{"name":"2006 IEEE/ACM International Symposium on Mixed and Augmented Reality","volume":"205 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117204727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}