{"title":"Dynamic Cloth Simulation: A Comparative Study of Explicit and Implicit Numerical Integration","authors":"Laise Lima De Carvalho, C. Vidal, J. B. C. Neto, Suzana Matos França de Oliveira","doi":"10.1109/SVR.2012.11","DOIUrl":"https://doi.org/10.1109/SVR.2012.11","url":null,"abstract":"Physically based cloth animation has gained much attention from researchers in the last two decades, due to the challenges of realism posed by the film and game industries, as well as by applications in virtual reality and e-commerce. Despite the overwhelming achievements in this area, a deeper understanding of the numerical techniques involved in the simulations is still needed. This paper analyzes the behavior of some useful integration techniques and tests them in three typical cloth animation simulations.","PeriodicalId":319713,"journal":{"name":"2012 14th Symposium on Virtual and Augmented Reality","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133275926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
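The trade-off this abstract studies can be illustrated on the simplest stiff system, a single undamped spring: explicit (forward) Euler diverges at step sizes where implicit (backward) Euler stays stable. A minimal sketch, not the paper's code; the spring constant and step size are illustrative:

```python
def simulate(k, dt, steps, integrator):
    """Integrate x'' = -k*x (an undamped spring) from x=1, v=0."""
    x, v = 1.0, 0.0
    for _ in range(steps):
        if integrator == "explicit":
            # forward Euler: both updates read the old state
            x, v = x + dt * v, v - dt * k * x
        else:
            # backward Euler, solved in closed form for this linear system:
            # v' = (v - dt*k*x) / (1 + dt^2*k), then x' = x + dt*v'
            v = (v - dt * k * x) / (1.0 + dt * dt * k)
            x = x + dt * v
    return x

# At the same step size, the stiff spring makes explicit Euler blow up
# while implicit Euler damps the motion and stays bounded.
exp_x = simulate(k=1000.0, dt=0.1, steps=50, integrator="explicit")
imp_x = simulate(k=1000.0, dt=0.1, steps=50, integrator="implicit")
```

The numerical damping that keeps backward Euler stable is also why implicit cloth solvers can look overly "syrupy" at large steps, which is part of the behavior such comparisons examine.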
{"title":"TrueSight A Pedestrian Navigation System Based in Automatic Landmark Detection and Extraction on Android Smartphone","authors":"Alessandro Luiz Stamatto Ferreira, S. R. D. Santos, Leonardo Cunha de Miranda","doi":"10.1109/SVR.2012.14","DOIUrl":"https://doi.org/10.1109/SVR.2012.14","url":null,"abstract":"From time to time someone gets lost and asks \"How do I get there?\" With the advent of GPS this question can be answered. However, difficulties such as lack of precision, possibly inaccurate maps, network dependency, and cost motivate the pursuit of an alternative solution. To locate themselves, users can employ a different method: their position is recognized visually through the smartphone camera, based on environment references, and an arrow pointing in the right direction is then shown on a map in the display. This method was implemented on the Android application framework, using OpenCV and its implementation of the SURF algorithm. The final application is named TrueSight, and we study its viability and limitations. The authors conclude that a vision-based navigation system is viable, but improvements to the landmark database and its presentation could make it better.","PeriodicalId":319713,"journal":{"name":"2012 14th Symposium on Virtual and Augmented Reality","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116464598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
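The recognition step this abstract describes — matching descriptors extracted from the camera image against a database of known landmarks — can be sketched in pure Python. The real system uses SURF descriptors via OpenCV; here the descriptors are plain tuples and the landmark names are hypothetical, so this only illustrates the nearest-neighbour voting with a Lowe-style ratio test:

```python
import math

def match_landmark(query_descs, database, ratio=0.75):
    """Vote for the landmark whose stored descriptors best match the
    query image's descriptors.  Each query descriptor casts a vote only
    when its best match is clearly closer than the second-best (ratio
    test), which filters ambiguous matches."""
    votes = {}
    for q in query_descs:
        # rank every stored descriptor by distance to q
        scored = sorted(
            (math.dist(q, d), name)
            for name, descs in database.items()
            for d in descs
        )
        best, second = scored[0], scored[1]
        if best[0] < ratio * second[0]:        # unambiguous match only
            votes[best[1]] = votes.get(best[1], 0) + 1
    return max(votes, key=votes.get) if votes else None
```

With a toy database of two landmarks, a query whose descriptors cluster near one of them is recognized by majority vote; a real SURF descriptor would be a 64- or 128-dimensional vector rather than a 2-tuple.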
{"title":"Dance2Rehab3D: A 3D Virtual Rehabilitation Game","authors":"Alessandro Diogo Brückheimer, M. Hounsell, A. V. Soares","doi":"10.1109/SVR.2012.30","DOIUrl":"https://doi.org/10.1109/SVR.2012.30","url":null,"abstract":"Keeping patients in long-term therapy seems to be as beneficial as the therapy itself. Computers have been sought as a medium to achieve engagement and motivation, providing not only entertainment but real therapeutic benefits. The use of some interaction devices (such as the mouse), however, is a limiting factor for patients with motor disabilities, and existing camera-based games do not cover the whole spectrum of movements required by therapy. The recent development and popularization of depth cameras have made it possible to build interfaces that explore users' 3D movements with no device to hold. This paper presents a game-like virtual environment in which controllable situations are generated and users' limitations are taken into account, in order to foster movement through an interesting and relaxed set of activities. The development has shown that close collaboration between physiotherapists and computer scientists is essential to achieving a useful application.","PeriodicalId":319713,"journal":{"name":"2012 14th Symposium on Virtual and Augmented Reality","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129702062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual Table -- Teleporter: Image Processing and Rendering for Horizontal Stereoscopic Display","authors":"B. Madeira, L. Velho","doi":"10.1109/SVR.2012.31","DOIUrl":"https://doi.org/10.1109/SVR.2012.31","url":null,"abstract":"We describe a new architecture, composed of software and hardware, for displaying stereoscopic images over a horizontal surface. It works as a \"Virtual Table and Teleporter\", in the sense that virtual objects depicted over a table have the appearance of real objects. The system can be used for visualization and interaction. We propose two basic configurations: the Virtual Table, consisting of a single display surface, and the Virtual Teleporter, consisting of a pair of tables for image capture and display. The Virtual Table displays either 3D computer-generated images or previously captured stereoscopic video, and can be used for interactive applications. The Virtual Teleporter captures and transmits stereoscopic video from one table to the other, and can be used for telepresence applications. In both configurations the images are properly deformed and displayed for horizontal 3D stereo. In the Virtual Teleporter, two cameras are pointed at the first table, capturing a stereoscopic image pair. These images are shown on the second table, which is in fact a stereoscopic display positioned horizontally. Many applications can benefit from this technology, such as virtual reality, games, teleconferencing, and distance learning. We present some interactive applications developed using this architecture.","PeriodicalId":319713,"journal":{"name":"2012 14th Symposium on Virtual and Augmented Reality","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121046418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Case Study on the Implementation of the 3C Collaboration Model in Virtual Environments","authors":"Daniel Medeiros, E. R. Silva, Peter Dam, Rodrigo Pinheiro, Thiago Motta, Manuel E. Loaiza, A. Raposo","doi":"10.1109/SVR.2012.28","DOIUrl":"https://doi.org/10.1109/SVR.2012.28","url":null,"abstract":"Throughout the years, many studies have explored the potential of Virtual Reality (VR) technologies to support collaborative work. However, few studies have looked into CSCW (Computer-Supported Cooperative Work) collaboration models that could help VR systems improve their support for collaborative tasks. This paper analyzes the applicability of the 3C collaboration model as a methodology to model and define collaborative tools in the development of a collaborative virtual reality application. A case study is presented to illustrate the selection and evaluation of different tools that aim to support the actions of communication, cooperation, and coordination between users interacting in a virtual environment. The main objective of this research is to show that the criteria defined by the 3C model can be used as parameters for classifying the interactive tools employed in the development of collaborative virtual environments.","PeriodicalId":319713,"journal":{"name":"2012 14th Symposium on Virtual and Augmented Reality","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123132754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AR-based Video-Mediated Communication: A Social Presence Enhancing Experience","authors":"I. Almeida, Marina Atsumi Oikawa, Jordi Polo Carres, Jun Miyazaki, H. Kato, M. Billinghurst","doi":"10.1109/SVR.2012.4","DOIUrl":"https://doi.org/10.1109/SVR.2012.4","url":null,"abstract":"Video-mediated communication systems attempt to provide users with a channel that could bring out the \"feeling\" of face-to-face communication. Among the many qualities these systems aim for, a high level of social presence is unquestionably a desirable one; however, little effort has been made to improve the user's perception of \"presence\". We propose an AR approach to enhance social presence in video-mediated systems by allowing one user to be present in the other user's video image. We conducted a preliminary pilot study with 10 participants grouped in 5 pairs to evaluate our system and compare it with the traditional video-chat setup. Results indicated that our system provides a higher degree of social presence than traditional video-chat systems. This conclusion was supported by positive feedback from the subjects.","PeriodicalId":319713,"journal":{"name":"2012 14th Symposium on Virtual and Augmented Reality","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133539433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Peer-to-Peer Multicast Architecture for Supporting Collaborative Virtual Environments (CVEs) in Medicine","authors":"P. V. F. Paiva, L. Machado, J. Oliveira","doi":"10.1109/SVR.2012.7","DOIUrl":"https://doi.org/10.1109/SVR.2012.7","url":null,"abstract":"Collaborative Virtual Environments (CVEs) can improve the way remote users interact with one another while learning or training skills on a given task. One application of CVEs is the simulation of medical procedures in which a group of remote users can train and interact simultaneously. Studying networking issues and evaluating the performance of CVEs allows us to understand how such systems can work over the Internet, as well as the requirements for multisensory and real-time data. Thus, this paper discusses implementation issues of a peer-to-peer multicast network architecture in the collaborative module of the CyberMed VR framework. The multicast protocol is known to provide better scalability and to decrease bandwidth usage in CVEs, allowing a better Quality of Experience (QoE). Finally, the paper presents the results of a performance evaluation experiment.","PeriodicalId":319713,"journal":{"name":"2012 14th Symposium on Virtual and Augmented Reality","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123544305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
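The scalability argument in this abstract rests on multicast's fan-out semantics: one logical send reaches every group member, instead of N-1 separate unicast transmissions. The in-memory sketch below illustrates only those semantics; a real deployment such as the one described would use IP multicast sockets, and the class and method names here are illustrative:

```python
class MulticastGroup:
    """In-memory sketch of multicast fan-out: one publish reaches every
    subscribed peer except the sender.  This is the property that lets a
    CVE's event traffic scale better than per-peer unicast; real IP
    multicast delegates the fan-out to the network layer."""

    def __init__(self):
        self.peers = {}              # peer id -> inbox (list of events)

    def join(self, peer_id):
        self.peers[peer_id] = []     # subscribe a peer to the group

    def publish(self, sender, event):
        # one logical send, N-1 deliveries
        for pid, inbox in self.peers.items():
            if pid != sender:
                inbox.append(event)

group = MulticastGroup()
for peer in ("a", "b", "c"):
    group.join(peer)
group.publish("a", "instrument-moved")
```

With unicast, peer "a" would have to transmit the event once per recipient; here it hands the group a single event and every other peer's inbox receives it.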
{"title":"Integration Framework of Augmented Reality and Tangible Interfaces for Enhancing the User Interaction","authors":"Fábio Rodrigues, F. Sato, L. C. Botega, Allan Oliveira","doi":"10.1109/SVR.2012.13","DOIUrl":"https://doi.org/10.1109/SVR.2012.13","url":null,"abstract":"The integration of post-WIMP computer interfaces arises as an alternative to overcome the individual limitations of each modality, considering both interaction components and the feedback given to users. Tangible interfaces can suffer from physical-space restrictions on tabletop architectures, which limit the manipulation of objects and degrade the interactive process. Hence, this paper proposes the integration of mobile Augmented Reality techniques with a tabletop tangible architecture, blending real and virtual components on its surface, with the aim of making the interactive process richer, more seamless, and more complete.","PeriodicalId":319713,"journal":{"name":"2012 14th Symposium on Virtual and Augmented Reality","volume":"141 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127390326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real Time Ray Tracing for Augmented Reality","authors":"A. Santos, Diego Lemos, Jorge Eduardo Falcao Lindoso, V. Teichrieb","doi":"10.1109/SVR.2012.8","DOIUrl":"https://doi.org/10.1109/SVR.2012.8","url":null,"abstract":"This paper introduces a novel graphics rendering pipeline for augmented reality, based on a real-time ray tracing paradigm. Ray tracing techniques process pixels independently from each other, allowing easy integration with image-based tracking techniques, contrary to traditional projection-based rasterization graphics systems, e.g. OpenGL. Therefore, by associating our highly optimized ray tracer with an augmented reality framework, the proposed pipeline is capable of providing high-quality rendering with real-time interaction between virtual and real objects, such as occlusions, soft shadows, custom shaders, reflections, and self-reflections, some of these features being available only in our rendering pipeline. As a proof of concept, we present a case study with the ARToolKitPlus library and the Microsoft Kinect hardware, both integrated into our pipeline. Furthermore, we show the performance and high-definition visual results of the novel pipeline on modern graphics cards, presenting occlusion and recursive reflection effects between virtual and real objects, without the latter needing to be modeled beforehand when using Kinect. Finally, an adaptive soft-shadow sampling algorithm for ray tracing is presented, generating high-quality shadows in real time for most scenes.","PeriodicalId":319713,"journal":{"name":"2012 14th Symposium on Virtual and Augmented Reality","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121790406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
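The occlusion effect this abstract highlights follows directly from per-pixel rays: each pixel's ray is intersected with the virtual geometry, and the hit distance is compared against the depth the sensor reports for that pixel, so real objects need no prior model. A minimal single-pixel sketch with one virtual sphere; the function and parameter names are illustrative, not the paper's API:

```python
import math

def trace_pixel(ray_origin, ray_dir, sphere_c, sphere_r, real_depth):
    """Decide whether a pixel shows the virtual sphere or the real scene:
    intersect the pixel's ray with the sphere and compare the hit distance
    against the sensed depth for that pixel (a Kinect depth value in the
    paper's setup).  Assumes ray_dir is unit length."""
    # ray-sphere intersection: |o + t*d - c|^2 = r^2
    oc = [o - c for o, c in zip(ray_origin, sphere_c)]
    b = 2.0 * sum(d * e for d, e in zip(ray_dir, oc))
    c = sum(e * e for e in oc) - sphere_r * sphere_r
    disc = b * b - 4.0 * c
    if disc < 0:
        return "real"        # ray misses the virtual object entirely
    t = (-b - math.sqrt(disc)) / 2.0
    if t < 0 or t > real_depth:
        return "real"        # virtual hit lies behind the real surface
    return "virtual"
```

Because every pixel makes this decision independently, the test parallelizes trivially on the GPU and composes naturally with image-based tracking, which is the integration advantage the abstract claims over rasterization.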
{"title":"FleXLIBRAS: Description and Animation of Signs in Brazilian Sign Language","authors":"D. A. N. S. Silva, T. Araújo, L. Dantas, Yúrika Sato Nóbrega, H. R. G. Lima, Guido Lemos de Souza Filho","doi":"10.1109/SVR.2012.25","DOIUrl":"https://doi.org/10.1109/SVR.2012.25","url":null,"abstract":"Deaf people communicate naturally through gestural-visual languages called sign languages. These are natural languages, composed of lexical items called signs, with their own vocabulary and grammar. In this paper, we propose the definition of a formal, expressive, and consistent language to describe signs in Brazilian Sign Language (LIBRAS). This language allows the definition of all parameters of a sign and, consequently, the generation of an animation for it. In addition, the proposed language is flexible in the sense that new parameters (or phonemes) can be defined “on the fly”. To provide a case study for the proposed language, a system for the collaborative construction of a LIBRAS vocabulary based on 3D humanoid avatars was also developed. Tests with Brazilian deaf users were performed to evaluate the proposal.","PeriodicalId":319713,"journal":{"name":"2012 14th Symposium on Virtual and Augmented Reality","volume":"215 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114849433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
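A sign-description language of the kind this abstract proposes pairs a fixed set of phonological parameters (handshape, location, movement) with an open slot for parameters defined "on the fly". A hypothetical schema in that spirit; the field names are illustrative, not FleXLIBRAS syntax:

```python
from dataclasses import dataclass, field

@dataclass
class Sign:
    """Hypothetical description of a LIBRAS sign: fixed phonological
    parameters plus an open `extras` mapping for parameters added
    on the fly, mirroring the flexibility the language claims."""
    gloss: str                                     # lexical label, e.g. "CASA"
    handshape: str                                 # hand configuration
    location: str                                  # place of articulation
    movement: list = field(default_factory=list)   # ordered movement segments
    extras: dict = field(default_factory=dict)     # parameters defined on the fly

def parameters(sign):
    """Flatten a sign into the parameter mapping an animation engine
    would consume, merging fixed and on-the-fly parameters."""
    params = {"handshape": sign.handshape,
              "location": sign.location,
              "movement": sign.movement}
    params.update(sign.extras)
    return params
```

An animation backend can then iterate over `parameters(sign)` uniformly, so a newly introduced parameter (say, a facial expression) needs no schema change to reach the avatar.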