{"title":"PlayGAMI","authors":"Uttam Grandhi, I. Y. Chang","doi":"10.1145/3305365.3329729","DOIUrl":"https://doi.org/10.1145/3305365.3329729","url":null,"abstract":"PlayGAMI is an augmented reality origami creativity platform. It has the fun of designing, folding origami and the magic of AR all in a single experience! Our platform lets a user draw on real origami paper and turn their creation into a virtual origami action figure/game character! Further, we use GANs that interpret certain drawn symbols to interactive game elements. The final customized design can be posted to an online 3D Gallery for viewing and sharing on social media.","PeriodicalId":367194,"journal":{"name":"ACM SIGGRAPH 2019 Appy Hour","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125204628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"sur.faced.io: augmented reality content creation for your face and beyond by drawing on paper","authors":"Yosun Chang","doi":"10.1145/3305365.3329730","DOIUrl":"https://doi.org/10.1145/3305365.3329730","url":null,"abstract":"We summarize several methods we have used to create software and processes for automated methods for content creation for augmented reality, virtual reality, and other 3D medium uses and beyond. We utilize processes involving, machine learning semantic segmentation, computer vision geometry recognition for automated texture mapping, photogrammetry 3D reconstruction from 2D images and videogrammetry video content, and more. A practical use in industry is an emphasis for each software example, and many are associated with awards.","PeriodicalId":367194,"journal":{"name":"ACM SIGGRAPH 2019 Appy Hour","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134554454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"REALITY","authors":"Akihiko Shirai","doi":"10.1145/3305365.3329727","DOIUrl":"https://doi.org/10.1145/3305365.3329727","url":null,"abstract":"\"REALITY\" is a live entertainment creation service for virtual avatars aka VTubers. It was created to enable a live entertainment ecosystem staffed with real time CG characters. The platform not only allows interaction with audience and VTubers using a professional motion capture studio, but also formatting virtual beings encouraging community engagements with a culture that is moving towards an avatar society using today's smartphone app that has real time facial capture.","PeriodicalId":367194,"journal":{"name":"ACM SIGGRAPH 2019 Appy Hour","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117296133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ViVid","authors":"Amir Semmo, M. Reimann, Mandy Klingbeil, Sumit Shekhar, Matthias Trapp, J. Döllner","doi":"10.1145/3305365.3329726","DOIUrl":"https://doi.org/10.1145/3305365.3329726","url":null,"abstract":"We present ViVid, a mobile app for iOS that empowers users to express dynamics in stylized Live Photos. This app uses state-of-the-art computer-vision techniques based on convolutional neural networks to estimate motion in the video footage that is captured together with a photo. Based on this analysis and best practices of contemporary art, photos can be stylized as a pencil drawing or cartoon look that includes design elements to visually suggest motion, such as ghosts, motion lines and halos. Its interactive parameterizations enable users to filter and art-direct composition variables, such as color, size and opacity. ViVid is based on Apple's CoreML, Metal and PhotoKit APIs for optimized on-device processing. Thus, the motion estimation is scheduled to utilize the dedicated neural engine, while shading-based image stylization is able to process the video footage in real-time on the GPU. This way, the app provides a unique tool for creating lively photo stylizations with ease.","PeriodicalId":367194,"journal":{"name":"ACM SIGGRAPH 2019 Appy Hour","volume":"38 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114088871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ARCalVR","authors":"Menghe Zhang, Karen Lucknavalai, Weichen Liu, Kamran Alipour, J. Schulze","doi":"10.1145/3305365.3329732","DOIUrl":"https://doi.org/10.1145/3305365.3329732","url":null,"abstract":"With the development of ARKit and ARCore, mobile Augmented Reality (AR) applications have become popular. Our ARCalVR is a lightweight, open-source software environment to develop AR applications on Android devices, and it gives the programmer full control over the phone's resources. With ARCalVR, one can do 60fps marker-less AR on Android devices, including functionalities of more complex environment understanding, physical simulation, virtual object interaction and interaction between virtual objects and real environment.","PeriodicalId":367194,"journal":{"name":"ACM SIGGRAPH 2019 Appy Hour","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126763163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Nira","authors":"Arash Keissami, Andrew E. Johnson, Dario Manesku","doi":"10.1145/3305365.3329731","DOIUrl":"https://doi.org/10.1145/3305365.3329731","url":null,"abstract":"Nira is an asset review and collaboration platform capable of rendering massive 3D production files in real time for interactive web-based viewing on any device, including lower-powered mobile smartphones and tablets. Nira achieves this by employing a custom server-side asset ingestion pipeline, a custom server-side real time renderer, a collection of intuitive markup and review tools for artists/designers, and existing hardware video encode/decode capabilities of both server-side and client-side devices.","PeriodicalId":367194,"journal":{"name":"ACM SIGGRAPH 2019 Appy Hour","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116938084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Aire: visualize air quality","authors":"N. Torres, Paulina Escalante Campbell","doi":"10.1145/3305365.3329869","DOIUrl":"https://doi.org/10.1145/3305365.3329869","url":null,"abstract":"An interactive and immersive AR experience, Aire enables anyone to learn about air pollution and the contaminants present in their environment. We leverage the use of information and technologies already available and provide a way to visualize complex scientific concepts concerning air pollution. Learning is the first step to making the world a better place.","PeriodicalId":367194,"journal":{"name":"ACM SIGGRAPH 2019 Appy Hour","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130198433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VR tsunami!","authors":"Derek Jacoby, Yvonne Coady, Eric Dahl, Andy Wynden, Matt Richardson","doi":"10.1145/3305365.3329728","DOIUrl":"https://doi.org/10.1145/3305365.3329728","url":null,"abstract":"The Mod Squad lab at the University of Victoria is focused on research that combines geospatial analytics, cloud computing, and Virtual/Augmented Reality. VR Tsunami is an example of Serious Gaming that uses the interaction styles of video games to engage students in learning outcomes. The experience is based on real data from a real tsunami event, provided on a range of devices to teach middle school students about emergency preparedness in areas of British Columbia that are prone to tsunami activity.","PeriodicalId":367194,"journal":{"name":"ACM SIGGRAPH 2019 Appy Hour","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129405567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"UBeBot","authors":"A. Shapiro, A. Leuski, Stacy Marsella","doi":"10.1145/3305365.3329734","DOIUrl":"https://doi.org/10.1145/3305365.3329734","url":null,"abstract":"UBeBot allows a mobile user to create a 3D avatar of themselves using a photo, as well as dress and style the avatar. Users then record their voice, allowing the avatar to act our the content of the utterance, including lip sync, facial expressions, gestures and other body language. This animated performance is generated automatically by analyzing the recorded voice signal, and does not require any camera tracking. The 3D avatar can then be placed in augmented reality (A/R) and saved to a video for sharing on social media. A mobile user is thus able to create their own personalized, animated and voiced 3D A/R content. Performances can be saved and triggered to create the appearance of interactive conversations.","PeriodicalId":367194,"journal":{"name":"ACM SIGGRAPH 2019 Appy Hour","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134235622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}