{"title":"Fiducial-less 3-D object tracking in AR systems based on the integration of top-down and bottom-up approaches and automatic database addition","authors":"T. Okuma, T. Kurata, K. Sakaue","doi":"10.1109/ISMAR.2003.1240710","DOIUrl":"https://doi.org/10.1109/ISMAR.2003.1240710","url":null,"abstract":"We propose a novel fiducial-less 3-D object tracking method. Our method consists of three components: 1) bottom-up approach (BUA), 2) top-down approach (TDA), and 3) automatic database addition (ADA). An experimental result shows an accuracy and robustness of our method.","PeriodicalId":296266,"journal":{"name":"The Second IEEE and ACM International Symposium on Mixed and Augmented Reality, 2003. Proceedings.","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122873800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D reconstruction of stereo images for interaction between real and virtual worlds","authors":"Hansung Kim, Seung-Jun Yang, K. Sohn","doi":"10.1109/ISMAR.2003.1240700","DOIUrl":"https://doi.org/10.1109/ISMAR.2003.1240700","url":null,"abstract":"Mixed reality is different from the virtual reality in that users can feel immersed in a space which is composed of not only virtual but also real objects. Thus, it is essential to realize seamless integration and interaction of the virtual and real worlds. We need depth information of the real scene to synthesize the real and virtual objects. We propose a two-stage algorithm to find smooth and precise disparity vector fields with sharp object boundaries in a stereo image pair for depth estimation. Hierarchical region-dividing disparity estimation increases the efficiency and the reliability of the estimation process, and a shape-adaptive window provides high reliability of the fields around the object boundary region. At the second stage, the vector fields are regularized with a energy model which produces smooth fields while preserving their discontinuities resulting from the object boundaries. The vector fields are used to reconstruct 3D surface of the real scene. Simulation results show that the proposed algorithm provides accurate and spatially correlated disparity vector fields in various kinds of images, and synthesized 3D models produce natural space where the virtual objects interact with the real world as if they are in the same world.","PeriodicalId":296266,"journal":{"name":"The Second IEEE and ACM International Symposium on Mixed and Augmented Reality, 2003. Proceedings.","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133600992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Live mixed-reality 3D video in soccer stadium","authors":"T. Koyama, I. Kitahara, Y. Ohta","doi":"10.1109/ISMAR.2003.1240701","DOIUrl":"https://doi.org/10.1109/ISMAR.2003.1240701","url":null,"abstract":"This paper proposes a method to realize a 3D video display system that can capture video from multiple cameras, reconstruct 3D models and transmit 3D video data in real time. We represent a target object with a simplified 3D model consisting of a single plane and a 2D texture extracted from multiple cameras. This 3D model is simple enough to be transmitted via a network. We have developed a prototype system that can capture multiple videos, reconstruct 3D models, transmit the models via a network, and display 3D video in real time. A 3D video of a typical soccer scene that includes a dozen players was processed at 26 frames per second.","PeriodicalId":296266,"journal":{"name":"The Second IEEE and ACM International Symposium on Mixed and Augmented Reality, 2003. Proceedings.","volume":"118 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133735539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image overlay on optical see-through displays for vehicle navigation","authors":"Tomoyasu Nakatsuru, Y. Yokokohji, D. Eto, T. Yoshikawa","doi":"10.1109/ISMAR.2003.1240723","DOIUrl":"https://doi.org/10.1109/ISMAR.2003.1240723","url":null,"abstract":"In this paper, we propose a method for image overlay on the front glass of a vehicle to navigate a driver to a desired destination. By overlaying the navigation information on the front glass, the driver need not gaze at the console panel. Therefore, accidents caused by gazing at console panel could be reduced. To overlay the image accurately on the target object through the front glass, both the vehicle's position/orientation and the driver's position/information are estimated by vision-based tracking and measuring angular velocities of the vehicle's wheels. Experimental results show the validity of the proposed method.","PeriodicalId":296266,"journal":{"name":"The Second IEEE and ACM International Symposium on Mixed and Augmented Reality, 2003. Proceedings.","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115384669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmented reality for programming industrial robots","authors":"T. Pettersen, J. Pretlove, C. Skourup, Torbjorn Engedal, T. Løkstad","doi":"10.1109/ISMAR.2003.1240739","DOIUrl":"https://doi.org/10.1109/ISMAR.2003.1240739","url":null,"abstract":"Existing practice for programming robots involves teaching it a sequence of waypoints in addition to process-related events, which defines the complete robot path. The programming process is time consuming, error prone and, in most cases, requires several iterations before the program quality is acceptable. By introducing augmented reality technologies in this programming process, the operator gets instant real-time, visual feedback of a simulated process in relation to the real object, resulting in reduced programming time and increased quality of the resulting robot program. This paper presents a demonstrator of a standalone augmented reality pilot system allowing an operator to program robot waypoints and process specific events related to paint applications. During the programming sequence, the system presents visual feedback of the paint result for the operator, allowing him to inspect the process result before the robot has performed the actual task.","PeriodicalId":296266,"journal":{"name":"The Second IEEE and ACM International Symposium on Mixed and Augmented Reality, 2003. Proceedings.","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125817080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An augmented virtuality approach to 3D videoconferencing","authors":"H. Regenbrecht, Claudia Ott, M. Wagner, T. Lum, P. Kohler, W. Wilke, Erich Mueller","doi":"10.1109/ISMAR.2003.1240725","DOIUrl":"https://doi.org/10.1109/ISMAR.2003.1240725","url":null,"abstract":"This paper describes the concept, prototypical implementation, and usability evaluation of an augmented virtuality (AV) based videoconferencing (VC) system: \"cAR/PE!\". We present a first solution which allows three participants at different locations to communicate over a network in an environment simulating a traditional face-to-face meeting. Integrated into the AV environment are live video streams of the participants spatially arranged around a virtual table, a large virtual presentation screen for 2D display and application sharing, and 3D geometry (models) within the room and on top of the table.","PeriodicalId":296266,"journal":{"name":"The Second IEEE and ACM International Symposium on Mixed and Augmented Reality, 2003. Proceedings.","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129230669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"User interaction in mixed reality interactive storytelling","authors":"M. Cavazza, Olivier Martin, Fred Charles, Xavier Marichal, Steven J. Mead","doi":"10.1109/ISMAR.2003.1240732","DOIUrl":"https://doi.org/10.1109/ISMAR.2003.1240732","url":null,"abstract":"In this paper, we describe a mixed reality system based on a \"magic mirror\" model, in which the user's image is captured in real time by a video camera, extracted from his/her background and mixed with a 3D graphic model of a virtual image including the synthetic characters taking part in the story. The resulting image is projected on a large screen facing the user, who sees his/her own image embedded in the virtual stage with the synthetic actors. The graphic component of the mixed reality world is based on a game engine, Unreal Tournament 2003. This engine not only performs graphic rendering and character animation but incorporates a new version of our previously described storytelling engine. A single 2D camera facing the user analyses the image in real-time by segmenting the user's contours.","PeriodicalId":296266,"journal":{"name":"The Second IEEE and ACM International Symposium on Mixed and Augmented Reality, 2003. Proceedings.","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122008468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Jellyfish party: blowing soap bubbles in mixed reality space","authors":"Yasuhiro Okuno, Hiroyuki Kakuta, Tomohiko Takayama, Kazuhiro Asai","doi":"10.1109/ISMAR.2003.1240759","DOIUrl":"https://doi.org/10.1109/ISMAR.2003.1240759","url":null,"abstract":"This paper describes a mixed reality installation named Jellyfish Party, for enjoying playing with soap bubbles. A special feature of this installation is the use of a spirometer sensor to measure the amount and speed of expelled air used to blow virtual soap bubbles.","PeriodicalId":296266,"journal":{"name":"The Second IEEE and ACM International Symposium on Mixed and Augmented Reality, 2003. Proceedings.","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121199375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stereo depth assessment experiment for microscope-based surgery","authors":"R. Lapeer, A. Tan, A. Linney, G. Alusi","doi":"10.1109/ISMAR.2003.1240716","DOIUrl":"https://doi.org/10.1109/ISMAR.2003.1240716","url":null,"abstract":"We present experimental data on the use of autostereoscopic displays as complementary visualization aids to the surgical stereo microscope for augmented reality surgery. Five experts in the use of the microscope, and five non-experts, performed a depth experiment to assess stereo cues as provided by two autostereoscopic displays (DTI 2015XLS Virtual Window and SHARP micro-optic twin), the surgical microscope and the \"naked\" eye.","PeriodicalId":296266,"journal":{"name":"The Second IEEE and ACM International Symposium on Mixed and Augmented Reality, 2003. Proceedings.","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127097296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The great buddha project: modeling cultural heritage for VR systems through observation","authors":"K. Ikeuchi, A. Nakazawa, K. Hasegawa, Takeshi Oishi","doi":"10.1109/ISMAR.2003.1240683","DOIUrl":"https://doi.org/10.1109/ISMAR.2003.1240683","url":null,"abstract":"This paper overviews our research on digitalpreservation of cultural assets and digital restoration oftheir original appearance. Geometric models are digitallyachieved through a pipeline consisting of scanning,registering and merging multiple range images. We havedeveloped a robust simultaneous registration method andan efficient and robust voxel-based integration method. Onthe geometric models created, we have to align textureimages acquired from a color camera. We have developedtwo texture mapping methods. In an attempt to restore theoriginal appearance of historical heritage objects, we havesynthesized several buildings and statues using scanneddata and literature survey with advice from experts.","PeriodicalId":296266,"journal":{"name":"The Second IEEE and ACM International Symposium on Mixed and Augmented Reality, 2003. Proceedings.","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127228413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}