{"title":"Time-Efficient Data Congregation Protocols on Wireless Sensor Network","authors":"Islam A. K. M. Muzahidul, K. Wada, Wei Chen","doi":"10.1109/ISUVR.2011.13","DOIUrl":"https://doi.org/10.1109/ISUVR.2011.13","url":null,"abstract":"This paper focuses on time-efficient data congregation protocols on a Dynamic Cluster-based Wireless Sensor Network (CBWSN). The CBWSN is self-configurable and re-configurable, thus capable of performing two dynamic operations: node-move-in and node-move-out. In this paper, we propose two efficient congregation techniques for Dynamic CBWSN. In order to facilitate the efficient congregation protocols we propose an improved cluster-based structure. In this structure, we first construct a communication highway, and then improve the cluster-based structure to facilitate efficient congregation protocols such that the nodes of the network can perform inter and intra cluster communications efficiently. We also study the time complexity of the protocols.","PeriodicalId":339967,"journal":{"name":"2011 International Symposium on Ubiquitous Virtual Reality","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128017622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"mARGraphy: Mobile AR-based Dynamic Information Visualization","authors":"Ahyoung Choi, Youngmin Park, Youngkyoon Jang, Changgu Kang, Woontack Woo","doi":"10.1109/ISUVR.2011.21","DOIUrl":"https://doi.org/10.1109/ISUVR.2011.21","url":null,"abstract":"We propose mARGraphy, which visualizes information using augmented reality (AR) technology. It provides an intuitive and interactive way for users to understand dynamic 3D information in situ, with high relevance to the target. To show the effectiveness of our work, we introduce a traditional map viewer application. It recognizes a region of a traditional map using an object recognition and tracking method on a mobile platform. Then, it aggregates dynamic information obtained from a database, such as geographical features with temporal changes and situational contexts. To verify this work, we observed through a preliminary user study how our system improves users' understanding of information with mARGraphy.","PeriodicalId":339967,"journal":{"name":"2011 International Symposium on Ubiquitous Virtual Reality","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128306648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Computer Vision for 3DTV and Augmented Reality","authors":"H. Saito","doi":"10.1109/ISUVR.2011.25","DOIUrl":"https://doi.org/10.1109/ISUVR.2011.25","url":null,"abstract":"Recent computer vision technology has made innovative progress in the 3D visual media industry. In this paper, I introduce our approaches to making use of computer vision technology in order to achieve innovative application systems for 3DTV and augmented reality. First, I demonstrate the effectiveness of multiple-viewpoint videos and depth videos in 3DTV applications, in which 3D shape reconstruction and view synthesis are used as computer vision technologies. Augmented reality is a method for presenting digital information over the real world with a see-through display. For such AR applications, real-time camera tracking is one significant technology, which is also based on the state of the art in computer vision.","PeriodicalId":339967,"journal":{"name":"2011 International Symposium on Ubiquitous Virtual Reality","volume":"04 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131097118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Collaboration between Tabletop and Mobile Device","authors":"Jooyoung Lee, Ralf Doerner, Johannes Luderschmidt, Hyungseok Kim, Jee-In Kim","doi":"10.1109/ISUVR.2011.18","DOIUrl":"https://doi.org/10.1109/ISUVR.2011.18","url":null,"abstract":"A tabletop is a collaborative work space providing a large touch screen capable of multi-touch interaction. One possible way to extend the collaborative work space without restriction of time and place is to adopt mobile devices. In this paper, we propose a way to monitor and control the tabletop using mobile devices. To monitor the large-screen tabletop with a mobile device, it is necessary to convert the image into a low resolution suitable for the device. We adopt a \"Focus & context\" image generation method for tabletop control, to convert relatively large screen images into smaller ones for the mobile display. With this method, users are able to have their own point of view for tabletop-based collaboration and also to gain extended work spaces through remote control. We validate the approach by conducting several experiments.","PeriodicalId":339967,"journal":{"name":"2011 International Symposium on Ubiquitous Virtual Reality","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130972661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PhoneGuide: Adaptive Image Classification for Mobile Museum Guidance","authors":"O. Bimber, Erich Bruns","doi":"10.1109/ISUVR.2011.12","DOIUrl":"https://doi.org/10.1109/ISUVR.2011.12","url":null,"abstract":"This paper summarizes the various components of our mobile museum guidance system PhoneGuide. It explains how practically viable object recognition rates can be achieved under realistic conditions using adaptive image classification.","PeriodicalId":339967,"journal":{"name":"2011 International Symposium on Ubiquitous Virtual Reality","volume":"926 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114355377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effect of Active and Passive Haptic Sensory Information on Memory for 2D Sequential Selection Task","authors":"Hojin Lee, Gabjong Han, In Lee, Sunghoon Yim, Kyungpyo Hong, Seungmoon Choi","doi":"10.1109/ISUVR.2011.24","DOIUrl":"https://doi.org/10.1109/ISUVR.2011.24","url":null,"abstract":"This paper introduces an education system for typical assembly procedures that provides various haptic sensory information including active and passive haptic feedbacks. Using the system, we implemented four kinds of training methods and experimentally evaluated their performances in terms of short-term and long-term memory over the task. In results, active haptic guidance showed beneficial effects on the short-term memory. In contrast, passive guidance showed the worst performance and even degraded the efficiency of short-term memory. No training methods resulted in noticeable improvements for the long-term memory performance.","PeriodicalId":339967,"journal":{"name":"2011 International Symposium on Ubiquitous Virtual Reality","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131516964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimation of Illuminants for Plausible Lighting in Augmented Reality","authors":"Seokjun Lee, Soon Ki Jung","doi":"10.1109/ISUVR.2011.17","DOIUrl":"https://doi.org/10.1109/ISUVR.2011.17","url":null,"abstract":"This paper presents a practical method to estimate the positions of light sources in a real environment, using a mirror sphere placed on a known natural marker. For stable results under static lighting, we take multiple images around the sphere and estimate the principal light directions of the vector clusters for each light source at run time. We also estimate the moving illuminant under changes of the scene illumination, and augment the virtual objects onto the real image with proper shading and shadows. Experimental results show that the proposed method produces plausible AR visualization in real time.","PeriodicalId":339967,"journal":{"name":"2011 International Symposium on Ubiquitous Virtual Reality","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132761179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Graphical Menus Using a Mobile Phone for Wearable AR Systems","authors":"Hyeongmook Lee, Dongchul Kim, Woontack Woo","doi":"10.1109/ISUVR.2011.23","DOIUrl":"https://doi.org/10.1109/ISUVR.2011.23","url":null,"abstract":"In this paper, we explore the design of various types of graphical menus via a mobile phone for use in a wearable augmented reality system. For efficient system control, locating menus is vital. Based on previous relevant work, we determine display-, manipulator- and target-referenced menu placement according to focusable elements within a wearable augmented reality system. Moreover, we implement and discuss three menu techniques using a mobile phone with a stereo head-mounted display.","PeriodicalId":339967,"journal":{"name":"2011 International Symposium on Ubiquitous Virtual Reality","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124473718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mirror Worlds: Experimenting with Heterogeneous AR","authors":"A. Hill, Evan Barba, B. MacIntyre, Maribeth Gandy Coleman, Brian Davidson","doi":"10.1109/ISUVR.2011.28","DOIUrl":"https://doi.org/10.1109/ISUVR.2011.28","url":null,"abstract":"Until recently, most content on the Internet has not been explicitly tied to specific people, places or things. However, content is increasingly being geo-coded and semantically labeled, making explicit connections between the physical world around us and the virtual world in cyberspace. Most augmented reality systems simulate a portion of the physical world, for the purposes of rendering a hybrid scene around the user. We have been experimenting with approaches to terra-scale, heterogeneous augmented reality mirror worlds, to unify these two worlds. Our focus has been on the authoring and user experience, for example allowing ad-hoc transition between augmented and virtual reality interactions for multiple co-present users. This form of ubiquitous virtual reality raises several research questions involving the functional requirements, user affordances and relevant system architectures for these mirror worlds. In this paper, we describe our experiments with two mirror world systems and some lessons learned about the limitations of deploying these systems using massively multiplayer and dedicated game engine technologies.","PeriodicalId":339967,"journal":{"name":"2011 International Symposium on Ubiquitous Virtual Reality","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115913856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Barcode-Assisted Planar Object Tracking Method for Mobile Augmented Reality","authors":"Nohyoung Park, Wonwoo Lee, Woontack Woo","doi":"10.1109/ISUVR.2011.20","DOIUrl":"https://doi.org/10.1109/ISUVR.2011.20","url":null,"abstract":"In this paper, we propose a planar target tracking method that exploits a barcode containing information about a target. Our method combines both barcode detection and natural feature tracking methods to track a planar object efficiently on mobile devices. A planar target is detected by recognizing the barcode located near the target, and the target's keypoints are tracked in video sequences. We embed the information related to a planar object into the barcode, and the information is used to limit image regions to perform keypoint matching between consecutive frames. We show how to detect a barcode robustly and what information is embedded for efficient tracking. Our detection method runs at 30 fps on modern mobile devices, and it can be used for mobile augmented reality applications using planar targets.","PeriodicalId":339967,"journal":{"name":"2011 International Symposium on Ubiquitous Virtual Reality","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130040014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}