{"title":"HeroMirror interactive: a gesture controlled augmented reality gaming experience","authors":"Tamás Matuszka, Ferenc Czuczor, Zoltán Sóstai","doi":"10.1145/3306214.3338554","DOIUrl":"https://doi.org/10.1145/3306214.3338554","url":null,"abstract":"Appropriately chosen user interfaces are essential parts of immersive augmented reality experiences. Regular user interfaces cannot be used efficiently for interactive, real-time augmented reality applications. In this study, a gesture-controlled educational gaming experience is described in which gesture recognition relies on deep learning methods. Our implementation is able to replace a depth-camera-based gesture recognition system with a conventional camera while ensuring the same level of recognition accuracy.","PeriodicalId":216038,"journal":{"name":"ACM SIGGRAPH 2019 Posters","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123923880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Arque: artificial biomimicry-inspired tail for extending innate body functions","authors":"Junichi Nabeshima, M. Y. Saraiji, K. Minamizawa","doi":"10.1145/3306214.3338573","DOIUrl":"https://doi.org/10.1145/3306214.3338573","url":null,"abstract":"For most mammals and vertebrate animals, the tail plays an important role, providing varied functions that expand mobility or serving as a limb that allows manipulation and gripping. In this work we propose Arque, an artificial biomimicry-inspired anthropomorphic tail that allows us to alter our body momentum for assistive and haptic feedback applications. The proposed tail consists of adjacent joints with a spring-based structure to handle shearing and tangential forces and to allow managing the length and weight of the target tail. The internal structure of the tail is driven by four pneumatic artificial muscles providing the actuation mechanism for the tail tip. Here we highlight potential applications for using such a prosthetic tail as an extension of the human body to provide active momentum alteration in balancing situations, or as a device to alter body momentum for full-body haptic feedback scenarios.","PeriodicalId":216038,"journal":{"name":"ACM SIGGRAPH 2019 Posters","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127055386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Voxel printing using procedural art-directable technologies","authors":"T. Robinson, W. Furneaux","doi":"10.1145/3306214.3338555","DOIUrl":"https://doi.org/10.1145/3306214.3338555","url":null,"abstract":"A procedural art-directable workflow is developed for voxel 3D printing using existing digital effects technologies. Customised for the Stratasys J750's unique materials, the system produces large-scale prosthetic eyes as a case study for film and display work.","PeriodicalId":216038,"journal":{"name":"ACM SIGGRAPH 2019 Posters","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125146087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"InNervate immersion: case study of dynamic simulations in AR/VR environments for learning muscular innervation","authors":"Margaret E. Cook, Amber Ackley, Karla I. Chang Gonzalez, A. Payne, J. Seo, Caleb Kicklighter, M. Pine, Timothy McLaughlin","doi":"10.1145/3306214.3338580","DOIUrl":"https://doi.org/10.1145/3306214.3338580","url":null,"abstract":"We present a collaborative immersive technology effort, InNervate AR and InNervate VR. These applications expand on existing anatomy education platforms by implementing a more dynamic and interactive user interface, which allows exploration of the complex relationship between motor nerve deficits and their effects upon the canine anatomy's ability to produce movement. Preliminary AR user studies provided positive feedback on the quality of learning, showing that dynamic touch interactions in AR benefit students' critical reasoning and spatial visualization when learning motor nerve and muscle relationships. However, users sought a more immersive VR-based learning environment, without the distractions that an AR experience may introduce. Based on this feedback, a VR version of the learning experience was created. Preliminary responses show that users are satisfied with this VR environment, which allows them to manipulate and control the anatomical content with full-body interactions.","PeriodicalId":216038,"journal":{"name":"ACM SIGGRAPH 2019 Posters","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127098975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Puppeteered rain: interactive illusion of levitating water drops by position-dependent strobe projection","authors":"S. Kagami, Kotone Higuchi, K. Hashimoto","doi":"10.1145/3306214.3338603","DOIUrl":"https://doi.org/10.1145/3306214.3338603","url":null,"abstract":"Light projection onto falling water produces a distinct and impressive experience that is suitable for entertainment and advertising installations in public spaces [Barnum et al. 2010; Eitoku et al. 2006]. One popular and classical technique for illuminating water for such purposes is strobe lighting, which presents an optical illusion of levitating --- or slowly falling or rising --- water drops depending on the relation between the water-dropping and strobe-lighting frequencies (e.g. [Pevnick 1981; Rosenthal 1984]).","PeriodicalId":216038,"journal":{"name":"ACM SIGGRAPH 2019 Posters","volume":"119 1-2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132062607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-view facial capture using binary spherical gradient illumination","authors":"Alexandros Lattas, Mingqian Wang, S. Zafeiriou, A. Ghosh","doi":"10.1145/3306214.3338611","DOIUrl":"https://doi.org/10.1145/3306214.3338611","url":null,"abstract":"High resolution facial capture has received significant attention in computer graphics due to its application in the creation of photorealistic digital humans for various applications ranging from film and VFX to games and VR. Here, the state of the art method for high quality acquisition of facial geometry and reflectance employs polarized spherical gradient illumination [Ghosh et al. 2011; Ma et al. 2007]. The technique has had a significant impact in facial capture for film VFX, recently receiving a Technical Achievement award from the Academy of Motion Picture Arts and Sciences [Aca 2019]. However, the method imposes a few constraints due to the employment of polarized illumination, and requires the camera viewpoints to be located close to the equator of the LED sphere for appropriate diffuse-specular separation for multiview capture [Ghosh et al. 2011]. The employment of polarization for reflectance separation also reduces the amount of light available for exposures and requires double the number of photographs (in cross and parallel polarization states), increasing the capture time and the number of photographs required for each face scan.","PeriodicalId":216038,"journal":{"name":"ACM SIGGRAPH 2019 Posters","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132984163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep-ChildAR bot: educational activities and safety care augmented reality system with deep-learning for preschool","authors":"Yoonjung Park, Hyocheol Ro, T. Han","doi":"10.1145/3306214.3338589","DOIUrl":"https://doi.org/10.1145/3306214.3338589","url":null,"abstract":"We propose a projection-based augmented reality (AR) robot system that provides pervasive support for the education and safety of preschoolers via a deep learning framework. This system can utilize real-world objects as metaphors for educational tools by performing object detection based on deep learning in real-time, and it can help recognize the dangers of real-world objects that may pose risks to children. We designed the system in a simple and intuitive way to provide user-friendly interfaces and interactions for children. Children's experiences through the proposed system can improve their physical, cognitive, emotional, and thinking abilities.","PeriodicalId":216038,"journal":{"name":"ACM SIGGRAPH 2019 Posters","volume":"342 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133039939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Photon","authors":"Tzu-Chieh Chang, M. Ouhyoung","doi":"10.1145/3306214.3338586","DOIUrl":"https://doi.org/10.1145/3306214.3338586","url":null,"abstract":"To develop a graphics project with ease and confidence, the reliability and extensibility of the underlying framework are essential. While there are existing options, e.g., pbrt-v3 [Pharr et al. 2016] and Mitsuba [Jakob 2010], they either focus on education or have not been updated for a long time. We would like to present an alternative solution named Photon.","PeriodicalId":216038,"journal":{"name":"ACM SIGGRAPH 2019 Posters","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114371350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Compositing light field video using multiplane images","authors":"Matthew DuVall, John Flynn, M. Broxton, P. Debevec","doi":"10.1145/3306214.3338614","DOIUrl":"https://doi.org/10.1145/3306214.3338614","url":null,"abstract":"We present a variety of new compositing techniques using multiplane images (MPIs) [Zhou et al. 2018] derived from footage shot with an inexpensive and portable light field video camera array. The effects include camera stabilization, foreground object removal, synthetic depth of field, and deep compositing. Traditional compositing is based around layering RGBA images to visually integrate elements into the same scene, and often requires manual 2D and/or 3D artist intervention to achieve realism in the presence of volumetric effects such as smoke or splashing water. We leverage the newly introduced DeepView solver [Flynn et al. 2019] and a light field camera array to generate MPIs stored in the DeepEXR format for compositing with realistic spatial integration and a simple workflow that offers new creative capabilities. We demonstrate the technique by combining footage that would otherwise be very challenging and time-intensive to achieve with traditional techniques, with minimal artist intervention.","PeriodicalId":216038,"journal":{"name":"ACM SIGGRAPH 2019 Posters","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134588972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}