Efficient 3D Reconstruction of NeRF using Camera Pose Interpolation and Photometric Bundle Adjustment
Tsukasa Takeda, Shugo Yamaguchi, Kazuhito Sato, Kosuke Fukazawa, S. Morishima. ACM SIGGRAPH 2023 Posters. https://doi.org/10.1145/3588028.3603691

Abstract (reproducing the paper's Figure 1 caption): Comparison of NeRF trained with full camera poses versus interpolated poses. The Full-Pose result uses camera poses estimated by COLMAP from all input images. The Interpolated-Pose result uses COLMAP poses estimated from only a few images, with the poses between them interpolated as initial poses: Catmull-Rom splines for translations and SLERP for rotations. A visualization of the camera poses on synthetic data accompanies the comparison. Interpolated-Pose produces images of almost the same quality as Full-Pose.
Content-Preserving Motion Stylization using Variational Autoencoder
Chen-Chieh Liao, Jong-Hwan Kim, H. Koike, D. Hwang. ACM SIGGRAPH 2023 Posters. https://doi.org/10.1145/3588028.3603679

Abstract: This work proposes a motion style transfer network that transfers motion style between different motion categories using variational autoencoders. The network transfers style effectively across a variety of motion categories and can create stylized motions unseen in the dataset. It contains a content-conditioned module that preserves the characteristics of the content motion, which is important for real applications. Because the network is built on variational autoencoders, the intensity of a style can be controlled and different styles can be mixed to enrich motion diversity.
{"title":"Point Anywhere: Directed Object Estimation from Omnidirectional Images","authors":"Nanami Kotani, Asako Kanezaki","doi":"10.1145/3588028.3603650","DOIUrl":"https://doi.org/10.1145/3588028.3603650","url":null,"abstract":"One of the intuitive instruction methods in robot navigation is a pointing gesture. In this study, we propose a method using an omnidirectional camera to eliminate the user/object position constraint and the left/right constraint of the pointing arm. Although the accuracy of skeleton and object detection is low due to the high distortion of equirectangular images, the proposed method enables highly accurate estimation by repeatedly extracting regions of interest from the equirectangular image and projecting them onto perspective images. Furthermore, we found that training the likelihood of the target object in machine learning further improves the estimation accuracy.","PeriodicalId":113397,"journal":{"name":"ACM SIGGRAPH 2023 Posters","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127769295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tidd: Augmented Tabletop Interaction Supports Children with Autism to Train Daily Living Skills","authors":"Qin Wu, Wenlu Wang, Qianru Liu, Jiashuo Cao, Duo Xu, Suranga Nanayakkara","doi":"10.1145/3588028.3603674","DOIUrl":"https://doi.org/10.1145/3588028.3603674","url":null,"abstract":"Children with autism may have difficulties in learning daily living skills due to repetitive behavior, which poses a challenge to their independent living training. Previous studies have shown the potential of using interactive technology to help children with autism train daily living skills. In this poster, we present Tidd, an interactive device based on desktop augmented reality projection, designed to support children with autism in daily living skills training. The system combines storytelling with Applied Behavior Analysis (ABA) therapy to scaffold the training process. A pilot study was conducted on 13 children with autism in two autism rehabilitation centers. The results showed that Tidd helped children with autism learn bed-making and dressing skills while engaging in the training process.","PeriodicalId":113397,"journal":{"name":"ACM SIGGRAPH 2023 Posters","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124360417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Crossed half-silvered Mirror Array: Fabrication and Evaluation of a See-Through Capable DIY Crossed Mirror Array
Kensuke Katori, Kenta Yamamoto, Ippei Suzuki, Tatsuki Fushimi, Y. Ochiai. ACM SIGGRAPH 2023 Posters. https://doi.org/10.1145/3588028.3603644

Abstract: Crossed mirror arrays (CMAs) have recently been employed in simple retinal-projection augmented reality (AR) devices owing to their wide field of view and nonfocal nature. However, they remain inadequate for everyday AR devices because they offer limited visibility of the physical environment. This study aims to enhance the transmittance of the CMA by fabricating it from half-silvered acrylic mirrors. We then evaluated the transmittance and the quality of the retinal display. The proposed CMA achieved sufficient retinal projection with higher see-through capability, making it more suitable for AR devices than conventional CMAs.
Photo-Realistic Streamable Free-Viewpoint Video
Shaohui Jiao, Yuzhong Chen, Zhaoliang Liu, Danying Wang, Wen-Hui Zhou, Li Zhang, Yue Wang. ACM SIGGRAPH 2023 Posters. https://doi.org/10.1145/3588028.3603666

Abstract: We present a novel free-viewpoint video (FVV) framework for capturing, processing, and compressing volumetric content for immersive VR/AR experiences. In contrast to previous FVV capture systems, we propose an easy-to-use multi-camera array consisting of time-synchronized mobile phones. To generate photo-realistic FVV results from sparse multi-camera input, we improve novel view synthesis with a visual-hull-guided neural representation, called VH-NeRF, which combines the advantages of explicit models from traditional 3D reconstruction with the implicit representation of Neural Radiance Fields. Each dynamic entity's VH-NeRF is learned under supervision from visual-hull reconstructions and can be further edited for complex, large-scale dynamic scenes. Moreover, our FVV solution supports effective compression and transmission of multi-perspective videos as well as real-time rendering on consumer-grade hardware. To the best of our knowledge, ours is the first solution for photo-realistic FVV captured by a sparse multi-camera array that allows real-time live streaming of large-scale dynamic scenes for immersive VR and AR applications on mobile devices.
Smart Scaling: A Hybrid Deep-Learning Approach to Content-Aware Image Retargeting
Elliot Dickman, Paul Diefenbach, Matthew Burlick, Mark Stockton. ACM SIGGRAPH 2023 Posters. https://doi.org/10.1145/3588028.3603671
{"title":"Image Printing on Stones, Wood, and More","authors":"Abdalla G. M. Ahmed","doi":"10.1145/3588028.3603686","DOIUrl":"https://doi.org/10.1145/3588028.3603686","url":null,"abstract":"","PeriodicalId":113397,"journal":{"name":"ACM SIGGRAPH 2023 Posters","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131285495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}