{"title":"Collaborative animation production from students' perspective: creating short 3D CG films through international team-work","authors":"B. Barbieri, Naomi Hutchens, Kayleigh Harrison","doi":"10.1145/3230744.3230756","DOIUrl":"https://doi.org/10.1145/3230744.3230756","url":null,"abstract":"Massive Collaborative Animation Projects (MCAP) was founded in 2016 by Dr. William Joel (Western Connecticut State University) to test students' collaborative abilities and provide experience that will allow them to grow professionally and academically. The MCAP 1 production is a children's ghost story designed to test the massive collaborative structure. The goal of MCAP 2 is to create an animation for use in planetariums worldwide. Currently, there are nearly one hundred student contributors from universities in Alaska, California, Colorado, Connecticut, Japan, Michigan, South Korea, and Taiwan.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122309829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Retinal resolution display technology brings impact to VR industry","authors":"Yen-Chih Chiang, Shih-Song Cheng, Huei-Siou Chen, Le-Jean Wei, Li-Min Huang, David K. T. Chu","doi":"10.1145/3230744.3230781","DOIUrl":"https://doi.org/10.1145/3230744.3230781","url":null,"abstract":"Currently1, Visual Reality Head-mounted Display has several problems that need to be overcome, such as insufficient resolution of the display, latency, Vergence-accommodation Conflict, etc., while the resolution is not high enough, causing the virtual image of the display to have graininess or Screen-door Effect. These problems have brought VR users an imperfect image quality experience and are unable to achieve a good sense of immersion. Therefore, it is necessary to solve the problem of insufficient display resolution. INT TECH Co., is working towards this goal and has made very good progress.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133707660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Volume: 3D reconstruction of history for immersive platforms","authors":"Or Fleisher, Shirin Anlen","doi":"10.1145/3230744.3230791","DOIUrl":"https://doi.org/10.1145/3230744.3230791","url":null,"abstract":"This paper presents Volume, a software toolkit that enables users to experiment with expressive reconstructions of archival and/or historical materials as volumetric renderings. Making use of contemporary deep learning methods, Volume re-imagines 2D images as volumetric 3D assets. These assets can then be incorporated into virtual, augmented and mixed reality experiences.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"195 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115445373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Realistic post-processing of rendered 3D scenes","authors":"A. Feygina, D. Ignatov, Ilya Makarov","doi":"10.1145/3230744.3230764","DOIUrl":"https://doi.org/10.1145/3230744.3230764","url":null,"abstract":"In this talk, we show a realistic post-processing rendering based on generative adversarial network CycleWGAN. We propose to use CycleGAN architecture and Wasserstein loss function with additional identity component in order to transfer graphics from Grand Theft Auto V to the older version of GTA video-game, Grand Theft Auto: San Andreas. We aim to present the application of modern art style transfer and unpaired image-to-image translations methods for graphics improvement using deep neural networks with adversarial loss.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117251062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"I am afraid: voice as sonic sculpture","authors":"M. Lantin, S. Overstall, Hongzhu Zhao","doi":"10.1145/3230744.3230766","DOIUrl":"https://doi.org/10.1145/3230744.3230766","url":null,"abstract":"We present a multi-user networked VR application, I Am Afraid, which uses voice as an interface to create sonic objects in a virtual environment. Words are spoken and added to the environment as three-dimensional textual objects. Other vocalizations are rendered as abstract shapes. The sculptural elements embed the sound of the voice that initiated their creation, and can be played as instruments via user-controlled interactions such as scrubbing, shaking, or looping. Multiple users can simultaneously be in the environment, mixing their voices in an evolving, dynamic, sound sculpture. I Am Afraid has been used for fun, performance, and therapeutic purposes.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124830482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"BOLCOF","authors":"Kenta Yamamoto, Riku Iwasaki, Tatsuya Minagawa, Ryota Kawamura, Bektur Ryskeldiev, Yoichi Ochiai","doi":"10.1145/3230744.3230768","DOIUrl":"https://doi.org/10.1145/3230744.3230768","url":null,"abstract":"3D printing failures can occur without completion of printing process due to shaking, errors in printer settings, and shape of the support material and 3D model. In such case it could be difficult to restart printing process from the last printed layer in conventional 3D printers, as the printing parts to which the nozzles are supposed to be attached are lost. In order to restart printing from the middle layer, Wu et al.[Wu et al. 2017] proposed a method of printing while rotating the base of a 3D printer. However, such approach required time for two objects to bond after segmentation, with limited availability of methods for adhesion between parts. Wu et al.[Wu et al. 2016] have also proposed a method to print 3D models at any angle through 5-axis rotation of the base of a 3D printer, but the manufacturing cost of such approach was relatively high. Therefore, we propose a system that prints 3D models on existing object by utilizing an infrared depth camera. Our method makes it possible to attach a 3D-printed object into a free-formed object in the middle of printing by recognizing its shape with a depth camera.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129538134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A seamless texture color adjustment method for large-scale terrain reconstruction","authors":"Hye-Sun Kim, Yun-Ji Ban, Changjoon Park","doi":"10.1145/3230744.3230788","DOIUrl":"https://doi.org/10.1145/3230744.3230788","url":null,"abstract":"We present a technique to generate realistic high quality texture with no seams suitable to reconstruct large-scale 3D terrains. We focused on adjusting color difference caused by camera variations and illumination transition for texture reconstruction pipelines. Seams between separated processing areas should also be considered important in large terrain models. The proposed technique corrects these problems by normalizing texture colors and interpolating texture adjustment colors.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"134 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116542293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient multispectral facial capture with monochrome cameras","authors":"Chloe LeGendre, Kalle Bladin, B. Kishore, Xinglei Ren, Xueming Yu, P. Debevec","doi":"10.1145/3230744.3230778","DOIUrl":"https://doi.org/10.1145/3230744.3230778","url":null,"abstract":"We propose a variant to polarized gradient illumination facial scanning which uses monochrome instead of color cameras to achieve more efficient and higher-resolution results. In typical polarized gradient facial scanning, sub-millimeter geometric detail is acquired by photographing the subject in eight or more polarized spherical gradient lighting conditions made with white LEDs, and RGB cameras are used to acquire color texture maps of the subject's appearance. In our approach, we replace the color cameras and white LEDs with monochrome cameras and multispectral, colored LEDs, leveraging that color images can be formed from successive monochrome images recorded under different illumination colors. While a naive extension of the scanning process to this setup would require multiplying the number of images by number of color channels, we show that the surface detail maps can be estimated directly from monochrome imagery, so that only an additional n photographs are required, where n is the number of added spectral channels. We also introduce a new multispectral optical flow approach to align images across spectral channels in the presence of slight subject motion. Lastly, for the case where a capture system's white light sources are polarized and its multispectral colored LEDs are not, we introduce the technique of multispectral polarization promotion, where we estimate the cross- and parallel-polarized monochrome images for each spectral channel from their corresponding images under a full sphere of even, unpolarized illumination. We demonstrate that this technique allows us to efficiently acquire a full color (or even multispectral) facial scan using monochrome cameras, unpolarized multispectral colored LEDs, and polarized white LEDs.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123802476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Beckett in VR: exploring narrative using free viewpoint video","authors":"Néill O’dwyer, Nicholas Johnson, R. Pagés, Jan Ondřej, Konstantinos Amplianitis, Enda Bates, David S. Monaghan, A. Smolic","doi":"10.1145/3230744.3230774","DOIUrl":"https://doi.org/10.1145/3230744.3230774","url":null,"abstract":"This poster describes a reinterpretation of Samuel Beckett's theatrical text Play for virtual reality (VR). It is an aesthetic reflection on practice that follows up an a technical project description submitted to ISMAR 2017 [O'Dwyer et al. 2017]. Actors are captured in a green screen environment using free-viewpoint video (FVV) techniques, and the scene is built in a game engine, complete with binaural spatial audio and six degrees of freedom of movement. The project explores how ludic qualities in the original text help elicit the conversational and interactive specificities of the digital medium. The work affirms the potential for interactive narrative in VR, opens new experiences of the text, and highlights the reorganisation of the author-audience dynamic.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134018188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive teaching aids design for essentials of anatomy and physiology: using bones and muscles as example","authors":"Hui-Ju Chen, Zi-Xin You, Yun-Ho Yu, Jen-Ming Chen, Chia-Chun Chang, C. Chou","doi":"10.1145/3230744.3230808","DOIUrl":"https://doi.org/10.1145/3230744.3230808","url":null,"abstract":"Learning essentials of anatomy and physiology[R. Richardson et al. 2018] can make students knowing more about the connection between bones and muscles of human bodies. In the past, we can only use books, pictures, videos or fixed bone model to teach. This kind of teaching may suit for student over 15. But, for student under 15, it's hard to increase their interest or studying time for learning. If there are some models that can be assembled during the class, as Alison James[A. James et al. 2014] said, building LEGO helps us to think more about the 3D shape of the object. Can also increase student's interest of learning.","PeriodicalId":226759,"journal":{"name":"ACM SIGGRAPH 2018 Posters","volume":"376 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132131897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}