{"title":"Herding sheep: live system for distributed augmented reality","authors":"A. MacWilliams, C. Sandor, M. Wagner, M. Bauer, G. Klinker, B. Bruegge","doi":"10.1109/ISMAR.2003.1240695","DOIUrl":"https://doi.org/10.1109/ISMAR.2003.1240695","url":null,"abstract":"In the past, architectures of augmented reality systems have been widely different and tailored to specific tasks. In this paper, we use the example of the SHEEP game to show how the structural flexibility of DWARF, our component-based distributed wearable augmented reality framework, facilitates a rapid prototyping and online development process for building, debugging and altering a complex, distributed, highly interactive AR system. The SHEEP system was designed to test and demonstrate the potential of tangible user interfaces which dynamically visualize, manipulate and control complex operations of many inter-dependent processes. SHEEP allows the users more freedom of action and forms of interaction and collaboration, following the tool metaphor that bundles software with hardware in units that are easily understandable to the user. We describe how we developed SHEEP, showing the combined evolution of framework and application, as well as the progress from rapid prototype to final demonstration system. The dynamic aspects of DWARF facilitated testing and allowed us to rapidly evaluate new technologies. SHEEP has been shown successfully at various occasions. We describe our experiences with these demos.","PeriodicalId":296266,"journal":{"name":"The Second IEEE and ACM International Symposium on Mixed and Augmented Reality, 2003. Proceedings.","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133192391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time workspace localisation and mapping for wearable robot","authors":"A. Davison, W. Mayol-Cuevas, D. W. Murray","doi":"10.1109/ISMAR.2003.1240737","DOIUrl":"https://doi.org/10.1109/ISMAR.2003.1240737","url":null,"abstract":"This demo showcases breakthrough results in the general field real-time simultaneous localization and mapping (SLAM) using vision and in particular its vital role in enabling a wearable robot to assists its user. In our approach, a wearable active vision system (\"wearable robot\") is mounted at the shoulder. As the wearer moves around his environment, typically browsing a workspace in which a task must be completed, the robot acquires images continuously and generates a map of natural visual features on-the-fly while estimating its ego-motion.","PeriodicalId":296266,"journal":{"name":"The Second IEEE and ACM International Symposium on Mixed and Augmented Reality, 2003. Proceedings.","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122517002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"WireAR - legacy applications in augmented reality","authors":"Gerhard Reitmayr, M. Billinghurst, D. Schmalstieg","doi":"10.1109/ISMAR.2003.1240745","DOIUrl":"https://doi.org/10.1109/ISMAR.2003.1240745","url":null,"abstract":"Current augmented reality (AR) applications require that the application software be written to support a specific AR interface set up. WireAR was developed to enable output from any OpenGL application to be viewed in an AR fashion. This enables the output from any legacy graphical or scientific visualization applications to be viewed in a collaborative AR setting. This demonstration shows how the output of standard desktop visualization programs can be embedded into an augmented reality experience.","PeriodicalId":296266,"journal":{"name":"The Second IEEE and ACM International Symposium on Mixed and Augmented Reality, 2003. Proceedings.","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128464142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Industrial augmented reality (IAR): challenges in design and commercialization of killer apps","authors":"Nassir Navab","doi":"10.1109/ISMAR.2003.1240682","DOIUrl":"https://doi.org/10.1109/ISMAR.2003.1240682","url":null,"abstract":"Design of industrial AR applications has been oneof our main goals at Siemens Corporate Research since1997.As an industrial R&D laboratory, we pay particular attention to future commercialization of such applications. This paper is an extended abstract for aninvited talk at ISMAR 2003. In this talk, I take theexamples of AR applications we have been working onat Siemens Corporate Research (SCR) to discuss thechallenges the industrial AR research community needsto face in order to succeed.","PeriodicalId":296266,"journal":{"name":"The Second IEEE and ACM International Symposium on Mixed and Augmented Reality, 2003. Proceedings.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130590745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Resolving multiple occluded layers in augmented reality","authors":"M. Livingston, J. Swan, Joseph L. Gabbard, Tobias Höllerer, D. Hix, S. Julier, Y. Baillot, Dennis G. Brown","doi":"10.1109/ISMAR.2003.1240688","DOIUrl":"https://doi.org/10.1109/ISMAR.2003.1240688","url":null,"abstract":"A useful function of augmented reality (AR) systems is their ability to visualize occluded infrastructure directly in a user's view of the environment. This is especially important for our application context, which utilizes mobile AR for navigation and other operations in an urban environment. A key problem in the AR field is how to best depict occluded objects in such a way that the viewer can correctly infer the depth relationships between different physical and virtual objects. Showing a single occluded object with no depth context presents an ambiguous picture to the user. But showing all occluded objects in the environments leads to the \"Superman's X-ray vision\" problem, in which the user sees too much information to make sense of the depth relationships of objects. Our efforts differ qualitatively from previous work in AR occlusion, because our application domain involves far-field occluded objects, which are tens of meters distant from the user. Previous work has focused on near-field occluded objects, which are within or just beyond arm's reach, and which use different perceptual cues. We designed and evaluated a number of sets of display attributes. We then conducted a user study to determine which representations best express occlusion relationships among far-field objects. We identify a drawing style and opacity settings that enable the user to accurately interpret three layers of occluded objects, even in the absence of perspective constraints.","PeriodicalId":296266,"journal":{"name":"The Second IEEE and ACM International Symposium on Mixed and Augmented Reality, 2003. Proceedings.","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121312897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A step forward in manual welding: demonstration of augmented reality helmet","authors":"D. Aiteanu, B. Hillers, A. Gräser","doi":"10.1109/ISMAR.2003.1240734","DOIUrl":"https://doi.org/10.1109/ISMAR.2003.1240734","url":null,"abstract":"A new welding helmet for the manual welding process has been developed. The welders working conditions are improved by augmenting the visual information before and during welding. The image is improved by providing a better view of the working area. An online quality assistant is available during welding, suggesting the correction of the guns position or pointing out welding errors, by analyzing the electrical welding parameters. An assembly advisor will suggest the assembly sequence, by displaying the type and the position of the following piece into the actual ensemble. In addition, an available online documentation of the welding process gives an opportunity to reduce the effort of post process quality assurance which often uses expensive X-ray investigations.","PeriodicalId":296266,"journal":{"name":"The Second IEEE and ACM International Symposium on Mixed and Augmented Reality, 2003. Proceedings.","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121395926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optical camouflage using retro-reflective projection technology","authors":"M. Inami, N. Kawakami, S. Tachi","doi":"10.1109/ISMAR.2003.1240754","DOIUrl":"https://doi.org/10.1109/ISMAR.2003.1240754","url":null,"abstract":"This paper describes a kind of active camouflage system named optical camouflage. Optical camouflage uses the retro-reflective projection technology, a projection-based augmented-reality system composed of a projector with a small iris and a retro-reflective screen. The object that needs to be made transparent is painted or covered with retro-reflective material. Then a projector projects the background image on it making the masking object virtually transparent.","PeriodicalId":296266,"journal":{"name":"The Second IEEE and ACM International Symposium on Mixed and Augmented Reality, 2003. Proceedings.","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133293489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time augmented face","authors":"V. Lepetit, L. Vacchetti, D. Thalmann, P. Fua","doi":"10.1109/ISMAR.2003.1240753","DOIUrl":"https://doi.org/10.1109/ISMAR.2003.1240753","url":null,"abstract":"This real-time augmented reality demonstration relies on our tracking algorithm described in V. Lepetit et al (2003). This algorithm considers natural feature points, and then does not require engineering of the environment. It merges the information from preceding frames in traditional recursive tracking fashion with that provided by a very limited number of reference frames. This combination results in a system that does not suffer from jitter and drift, and can deal with drastic changes. The tracker recovers the full 3D pose of the tracked object, allowing insertion of 3D virtual objects for augmented reality applications.","PeriodicalId":296266,"journal":{"name":"The Second IEEE and ACM International Symposium on Mixed and Augmented Reality, 2003. Proceedings.","volume":"47 10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116302539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust visual tracking for non-instrumental augmented reality","authors":"Georg S. W. Klein, T. Drummond","doi":"10.1109/ISMAR.2003.1240694","DOIUrl":"https://doi.org/10.1109/ISMAR.2003.1240694","url":null,"abstract":"This paper presents a robust and flexible framework for augmented reality which does not require instrumenting either the environment or the workpiece. A model-based visual tracking system is combined with rate gyroscopes to produce a system which can track the rapid camera rotations generated by a head-mounted camera, even if images are substantially degraded by motion blur. This tracking yields estimates of head position at video field rate (50Hz) which are used to align computer-generated graphics on an optical see-through display. Nonlinear optimisation is used for the calibration of display parameters which include a model of optical distortion. Rendered visuals are pre-distorted to correct the optical distortion of the display.","PeriodicalId":296266,"journal":{"name":"The Second IEEE and ACM International Symposium on Mixed and Augmented Reality, 2003. Proceedings.","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122070919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ID CAM: a smart camera for scene capturing and ID recognition","authors":"Nobuyuki Matsushita, D. Hihara, Teruyuki Ushiro, Shinichi Yoshimura, J. Rekimoto, Yoshikazu Yamamoto","doi":"10.1109/ISMAR.2003.1240706","DOIUrl":"https://doi.org/10.1109/ISMAR.2003.1240706","url":null,"abstract":"An ID recognition system is described that uses optical beacons and a high-speed image sensor. The ID sensor captures a scene like an ordinary camera and recognizes the ID of a beacon emitted over a long distance. The ID recognition system has three features. The system is robust to changes in the optical environment, e.g. complete darkness, spotlights, and sunlight. It can recognize up to 255 multiple optical beacons simultaneously. Furthermore, it can recognize beacons even over a long distance, e.g. 40 m indoors and 20 m outdoors. Implementation and evaluation of this ID recognition system showed that a mobile augmented reality system can be achieved by combining this ID recognition system with a PDA and a wireless network.","PeriodicalId":296266,"journal":{"name":"The Second IEEE and ACM International Symposium on Mixed and Augmented Reality, 2003. Proceedings.","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130433905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}