ACM SIGGRAPH 2023 Posters: Latest Publications

Efficient 3D Reconstruction of NeRF using Camera Pose Interpolation and Photometric Bundle Adjustment
ACM SIGGRAPH 2023 Posters Pub Date : 2023-07-23 DOI: 10.1145/3588028.3603691
Tsukasa Takeda, Shugo Yamaguchi, Kazuhito Sato, Kosuke Fukazawa, S. Morishima
{"title":"Efficient 3D Reconstruction of NeRF using Camera Pose Interpolation and Photometric Bundle Adjustment","authors":"Tsukasa Takeda, Shugo Yamaguchi, Kazuhito Sato, Kosuke Fukazawa, S. Morishima","doi":"10.1145/3588028.3603691","DOIUrl":"https://doi.org/10.1145/3588028.3603691","url":null,"abstract":"Figure 1: Comparison of the results of learning NeRF with the full camera poses and the interpolated-poses. The Full-Pose result uses the camera poses obtained by COLMAP with all the input images. The Interpolated-Pose result uses the poses obtained by COLMAP with several images and the interpolated poses between them as the initial poses. We apply Catmull-Rom Spline interpolation for translations and SLERP interpolation for rotations. The figure on the right shows the visualization of the camera poses using synthetic data. Interpolated-Pose generates images with almost the same quality as Full Pose.","PeriodicalId":113397,"journal":{"name":"ACM SIGGRAPH 2023 Posters","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132110557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
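The interpolation step named in the abstract (Catmull-Rom splines for camera translations, SLERP for rotations) can be sketched as follows. This is a minimal illustration, not the authors' code; it assumes the keyframe poses are given as (translation, unit quaternion) pairs, e.g. from running COLMAP on a sparse subset of the input images.

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Uniform Catmull-Rom spline point between p1 and p2 at t in [0, 1]."""
    t2, t3 = t * t, t * t * t
    return 0.5 * ((2 * p1) +
                  (-p0 + p2) * t +
                  (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2 +
                  (-p0 + 3 * p1 - 3 * p2 + p3) * t3)

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1."""
    q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:            # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:         # nearly parallel: fall back to linear interpolation
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def interpolate_pose(keyframes, i, t):
    """Interpolate a camera pose between keyframes[i] and keyframes[i + 1].

    keyframes: list of (translation: (3,) array, rotation quaternion wxyz: (4,) array).
    """
    p0 = keyframes[max(i - 1, 0)][0]
    p1, q1 = keyframes[i]
    p2, q2 = keyframes[i + 1]
    p3 = keyframes[min(i + 2, len(keyframes) - 1)][0]
    return catmull_rom(p0, p1, p2, p3, t), slerp(q1, q2, t)
```

The interpolated poses would then serve only as initial values; per the title, photometric bundle adjustment refines them during NeRF training.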
Content-Preserving Motion Stylization using Variational Autoencoder
ACM SIGGRAPH 2023 Posters Pub Date : 2023-07-23 DOI: 10.1145/3588028.3603679
Chen-Chieh Liao, Jong-Hwan Kim, H. Koike, D. Hwang
{"title":"Content-Preserving Motion Stylization using Variational Autoencoder","authors":"Chen-Chieh Liao, Jong-Hwan Kim, H. Koike, D. Hwang","doi":"10.1145/3588028.3603679","DOIUrl":"https://doi.org/10.1145/3588028.3603679","url":null,"abstract":"This work proposes a motion style transfer network that transfers motion style between different motion categories using variational autoencoders. The proposed network effectively transfers style among various motion categories and can create stylized motion unseen in the dataset. The network contains a content-conditioned module to preserve the characteristic of the content motion, which is important for real applications. We implement the network with variational autoencoders, which enable us to control the intensity of the style and mix different styles to enrich the motion diversity.","PeriodicalId":113397,"journal":{"name":"ACM SIGGRAPH 2023 Posters","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134478227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
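The style-intensity control and style mixing mentioned in the abstract are typically realized by blending style latent codes before decoding them together with the content motion. The sketch below is a toy illustration of that idea on an untrained network; the layer sizes, the per-frame pose representation, and the content-conditioning scheme are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class StyleVAE(nn.Module):
    """Toy content-conditioned motion VAE (all dimensions are illustrative)."""
    def __init__(self, motion_dim=63, style_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(motion_dim, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, style_dim)
        self.to_logvar = nn.Linear(128, style_dim)
        # The decoder is conditioned on the content motion plus a style latent.
        self.decoder = nn.Sequential(
            nn.Linear(motion_dim + style_dim, 128), nn.ReLU(),
            nn.Linear(128, motion_dim))

    def encode(self, motion):
        h = self.encoder(motion)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize

    def decode(self, content_motion, style_z):
        return self.decoder(torch.cat([content_motion, style_z], dim=-1))

model = StyleVAE()
content = torch.randn(1, 63)                 # per-frame pose of the content motion
style_a = model.encode(torch.randn(1, 63))   # style latent of clip A
style_b = model.encode(torch.randn(1, 63))   # style latent of clip B

alpha, intensity = 0.5, 0.8                  # mixing weight and style strength
mixed = intensity * (alpha * style_a + (1 - alpha) * style_b)
stylized = model.decode(content, mixed)      # content preserved, style blended
```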
Point Anywhere: Directed Object Estimation from Omnidirectional Images
ACM SIGGRAPH 2023 Posters Pub Date : 2023-07-23 DOI: 10.1145/3588028.3603650
Nanami Kotani, Asako Kanezaki
{"title":"Point Anywhere: Directed Object Estimation from Omnidirectional Images","authors":"Nanami Kotani, Asako Kanezaki","doi":"10.1145/3588028.3603650","DOIUrl":"https://doi.org/10.1145/3588028.3603650","url":null,"abstract":"One of the intuitive instruction methods in robot navigation is a pointing gesture. In this study, we propose a method using an omnidirectional camera to eliminate the user/object position constraint and the left/right constraint of the pointing arm. Although the accuracy of skeleton and object detection is low due to the high distortion of equirectangular images, the proposed method enables highly accurate estimation by repeatedly extracting regions of interest from the equirectangular image and projecting them onto perspective images. Furthermore, we found that training the likelihood of the target object in machine learning further improves the estimation accuracy.","PeriodicalId":113397,"journal":{"name":"ACM SIGGRAPH 2023 Posters","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127769295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
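The core re-projection step described in the abstract, extracting a region of interest from the equirectangular image and rendering it as a low-distortion perspective view before running skeleton and object detectors, can be sketched as below. The pinhole model, nearest-neighbor sampling, and the default field of view are illustrative assumptions rather than the authors' settings.

```python
import numpy as np

def equirect_to_perspective(equi, yaw, pitch, fov_deg=90.0, out_size=512):
    """Sample a pinhole view (looking along yaw/pitch, in radians) from an
    equirectangular image of shape (H, W, C)."""
    H, W = equi.shape[:2]
    f = 0.5 * out_size / np.tan(0.5 * np.radians(fov_deg))   # focal length in pixels

    # Pixel grid of the output perspective image, centered on the principal point.
    u, v = np.meshgrid(np.arange(out_size) - out_size / 2,
                       np.arange(out_size) - out_size / 2)
    dirs = np.stack([u, v, np.full_like(u, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    # Rotate the viewing rays by pitch (around x), then yaw (around y).
    cp, sp, cy, sy = np.cos(pitch), np.sin(pitch), np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    dirs = dirs @ (Ry @ Rx).T

    # Convert ray directions to spherical coordinates, then to equirect pixels.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])          # [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))     # [-pi/2, pi/2]
    x = ((lon / np.pi + 1.0) * 0.5 * (W - 1)).astype(int)
    y = ((lat / (0.5 * np.pi) + 1.0) * 0.5 * (H - 1)).astype(int)
    return equi[y, x]
```

For example, `equirect_to_perspective(equi_img, yaw=np.radians(30.0), pitch=0.0)` extracts a 90-degree perspective view centered 30 degrees away from the image center, which a standard detector can then process with far less distortion.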
Tidd: Augmented Tabletop Interaction Supports Children with Autism to Train Daily Living Skills
ACM SIGGRAPH 2023 Posters Pub Date : 2023-07-23 DOI: 10.1145/3588028.3603674
Qin Wu, Wenlu Wang, Qianru Liu, Jiashuo Cao, Duo Xu, Suranga Nanayakkara
{"title":"Tidd: Augmented Tabletop Interaction Supports Children with Autism to Train Daily Living Skills","authors":"Qin Wu, Wenlu Wang, Qianru Liu, Jiashuo Cao, Duo Xu, Suranga Nanayakkara","doi":"10.1145/3588028.3603674","DOIUrl":"https://doi.org/10.1145/3588028.3603674","url":null,"abstract":"Children with autism may have difficulties in learning daily living skills due to repetitive behavior, which poses a challenge to their independent living training. Previous studies have shown the potential of using interactive technology to help children with autism train daily living skills. In this poster, we present Tidd, an interactive device based on desktop augmented reality projection, designed to support children with autism in daily living skills training. The system combines storytelling with Applied Behavior Analysis (ABA) therapy to scaffold the training process. A pilot study was conducted on 13 children with autism in two autism rehabilitation centers. The results showed that Tidd helped children with autism learn bed-making and dressing skills while engaging in the training process.","PeriodicalId":113397,"journal":{"name":"ACM SIGGRAPH 2023 Posters","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124360417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Crossed half-silvered Mirror Array: Fabrication and Evaluation of a See-Through Capable DIY Crossed Mirror Array
ACM SIGGRAPH 2023 Posters Pub Date : 2023-07-23 DOI: 10.1145/3588028.3603644
Kensuke Katori, Kenta Yamamoto, Ippei Suzuki, Tatsuki Fushimi, Y. Ochiai
{"title":"Crossed half-silvered Mirror Array: Fabrication and Evaluation of a See-Through Capable DIY Crossed Mirror Array","authors":"Kensuke Katori, Kenta Yamamoto, Ippei Suzuki, Tatsuki Fushimi, Y. Ochiai","doi":"10.1145/3588028.3603644","DOIUrl":"https://doi.org/10.1145/3588028.3603644","url":null,"abstract":"Crossed mirror arrays (CMAs) have recently been employed in simple retinal projection augmented reality (AR) devices owing to their wide field of view and nonfocal nature. However, they remain inadequate for AR devices for everyday use owing to the limited visibility of the physical environment. This study aims to enhance the transmittance of the CMA by fabricating it with half-silvered acrylic mirrors. Further, we evaluated the transmittance and quality of the retinal display. The proposed CMA successfully achieved sufficient retinal projection and higher see-through capability, making it more suitable for use in AR devices than conventional CMAs.","PeriodicalId":113397,"journal":{"name":"ACM SIGGRAPH 2023 Posters","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128983080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Photo-Realistic Streamable Free-Viewpoint Video
ACM SIGGRAPH 2023 Posters Pub Date : 2023-07-23 DOI: 10.1145/3588028.3603666
Shaohui Jiao, Yuzhong Chen, Zhaoliang Liu, Danying Wang, Wen-Hui Zhou, Li Zhang, Yue Wang
{"title":"Photo-Realistic Streamable Free-Viewpoint Video","authors":"Shaohui Jiao, Yuzhong Chen, Zhaoliang Liu, Danying Wang, Wen-Hui Zhou, Li Zhang, Yue Wang","doi":"10.1145/3588028.3603666","DOIUrl":"https://doi.org/10.1145/3588028.3603666","url":null,"abstract":"We present a novel free-viewpoint video(FVV) framework for capturing, processing and compressing the volumetric content for immersive VR/AR experience. Compared to previous FVV capture systems, we propose an easy-to-use multi-camera array consisting of mobile phones with time synchronization. In order to generate photo-realistic FVV results with sparse multi-camera input, we improve the novel view synthesis method by introducing visual hull guided neural representation, called VH-NeRF. Our VH-NeRF combines the advantages of both explicit models by traditional 3D reconstruction and the notable implicit representation of Neural Radiance Field. Each dynamic entity’s VH-NeRF is learned and supervised by the visual hull reconstructed data, and can be further edited for complex and large-scale dynamic scenes. Moreover, our FVV solution can do both effective compression and transmission on multi-perspective videos, as well as real-time rendering on consumer-grade hardware. To the best of our knowledge, our work is the first solution for photo-realistic FVV captured by sparse multi-camera array, and allow real-time live streaming of large-scale dynamic scenes for immersive VR and AR applications on mobile devices.","PeriodicalId":113397,"journal":{"name":"ACM SIGGRAPH 2023 Posters","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124879836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
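The "visual hull guided" supervision in the abstract implies that the radiance field is constrained to the volume consistent with the multi-view silhouettes. A minimal sketch of the underlying visual-hull membership test is below; the pinhole projection matrices and the way the test is used are assumptions, not the paper's pipeline.

```python
import numpy as np

def inside_visual_hull(points, masks, proj_mats):
    """Per 3D point, return True if it projects into the foreground silhouette
    of every view, i.e. it lies inside the visual hull.

    points:    (N, 3) world-space sample positions.
    masks:     list of (H, W) boolean foreground silhouettes.
    proj_mats: list of (3, 4) camera projection matrices (K @ [R | t]).
    """
    pts_h = np.concatenate([points, np.ones((len(points), 1))], axis=1)  # (N, 4)
    inside = np.ones(len(points), dtype=bool)
    for mask, P in zip(masks, proj_mats):
        uvw = pts_h @ P.T                                   # homogeneous image coords
        z = np.where(np.abs(uvw[:, 2]) < 1e-9, 1e-9, uvw[:, 2])
        u = np.round(uvw[:, 0] / z).astype(int)
        v = np.round(uvw[:, 1] / z).astype(int)
        H, W = mask.shape
        valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
        hit = np.zeros(len(points), dtype=bool)
        hit[valid] = mask[v[valid], u[valid]]               # silhouette lookup
        inside &= hit
    return inside
```

Ray samples that fail this test could, for instance, be skipped or assigned zero density, which is one plausible way the visual-hull reconstruction can guide and supervise each dynamic entity's VH-NeRF.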
Palette-Based Colorization for Vector Icons
ACM SIGGRAPH 2023 Posters Pub Date : 2023-07-23 DOI: 10.1145/3588028.3603668
Miao Lin, I-Chao Shen, Hsiao-Yuan Chin, Ruo-Xi Chen, Bing-Yu Chen
{"title":"Palette-Based Colorization for Vector Icons","authors":"Miao Lin, I-Chao Shen, Hsiao-Yuan Chin, Ruo-Xi Chen, Bing-Yu Chen","doi":"10.1145/3588028.3603668","DOIUrl":"https://doi.org/10.1145/3588028.3603668","url":null,"abstract":"","PeriodicalId":113397,"journal":{"name":"ACM SIGGRAPH 2023 Posters","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116435653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Smart Scaling: A Hybrid Deep-Learning Approach to Content-Aware Image Retargeting
ACM SIGGRAPH 2023 Posters Pub Date : 2023-07-23 DOI: 10.1145/3588028.3603671
Elliot Dickman, P. Diefenbach, Matthew Burlick, M. Stockton
{"title":"Smart Scaling: A Hybrid Deep-Learning Approach to Content-Aware Image Retargeting","authors":"Elliot Dickman, P. Diefenbach, Matthew Burlick, M. Stockton","doi":"10.1145/3588028.3603671","DOIUrl":"https://doi.org/10.1145/3588028.3603671","url":null,"abstract":"ACM Reference Format: Elliot Dickman, Paul Diefenbach, Matthew Burlick, and Mark Stockton. 2023. Smart Scaling: A Hybrid Deep-Learning Approach to Content-Aware Image Retargeting. In Special Interest Group on Computer Graphics and Interactive Techniques Conference Posters (SIGGRAPH ’23 Posters), August 06–10, 2023, Los Angeles, CA, USA. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3588028.3603671","PeriodicalId":113397,"journal":{"name":"ACM SIGGRAPH 2023 Posters","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133990898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Image Printing on Stones, Wood, and More
ACM SIGGRAPH 2023 Posters Pub Date : 2023-07-23 DOI: 10.1145/3588028.3603686
Abdalla G. M. Ahmed
{"title":"Image Printing on Stones, Wood, and More","authors":"Abdalla G. M. Ahmed","doi":"10.1145/3588028.3603686","DOIUrl":"https://doi.org/10.1145/3588028.3603686","url":null,"abstract":"","PeriodicalId":113397,"journal":{"name":"ACM SIGGRAPH 2023 Posters","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131285495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Computational Design of Nebuta-like Paper-on-Wire Artworks
ACM SIGGRAPH 2023 Posters Pub Date : 2023-07-23 DOI: 10.1145/3588028.3603655
Naoki Agata, Anran Qi, Yuta Noma, I-Chao Shen, T. Igarashi
{"title":"Computational Design of Nebuta-like Paper-on-Wire Artworks","authors":"Naoki Agata, Anran Qi, Yuta Noma, I-Chao Shen, T. Igarashi","doi":"10.1145/3588028.3603655","DOIUrl":"https://doi.org/10.1145/3588028.3603655","url":null,"abstract":"Figure 1: Our method takes a 3D model as the input (a), extracts the 3D wireframe (b) and computes the corresponding 2D pattern. (c) is the 3D printed plastic wireframe; (d) is the approximated developable patches; (e) is the fabricated model from two viewpoints using washi paper; (f) shows a Nebuta exhibited in the Aomori Nebuta Festival © Aomori Prefecture, Aomori Prefectural Organization for Tourism and Globalization (https://aomori-tourism.com/photos/detail_41.html). ACM Reference Format: Naoki Agata, Anran Qi, Yuta Noma, I-Chao Shen, and Takeo Igarashi. 2023. Computational Design of Nebuta-like Paper-on-Wire Artworks. In Special Interest Group on Computer Graphics and Interactive Techniques Conference Posters (SIGGRAPH ’23 Posters), August 06–10, 2023, Los Angeles, CA, USA. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3588028.3603655","PeriodicalId":113397,"journal":{"name":"ACM SIGGRAPH 2023 Posters","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116618810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
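The caption's step from approximated developable patches (d) to a flat paper pattern can be illustrated by isometrically unrolling a triangle strip into the plane. The strip representation and the unrolling below are an illustrative assumption, not the authors' pattern-generation algorithm.

```python
import numpy as np

def unroll_strip(vertices):
    """Unfold a 3D triangle strip (triangle i = vertices[i : i + 3]) into the
    plane while preserving every edge length (exact when the strip is developable).

    vertices: (N, 3) array with N >= 3. Returns (N, 2) planar positions.
    """
    V = np.asarray(vertices, dtype=float)
    P = np.zeros((len(V), 2))
    P[1] = [np.linalg.norm(V[1] - V[0]), 0.0]
    for i in range(2, len(V)):
        a, b = P[i - 2], P[i - 1]                  # shared edge, already placed in 2D
        ra = np.linalg.norm(V[i] - V[i - 2])       # 3D distances to preserve
        rb = np.linalg.norm(V[i] - V[i - 1])
        d = np.linalg.norm(b - a)
        e = (b - a) / d                            # along the shared edge
        n = np.array([-e[1], e[0]])                # perpendicular to it
        x = (d * d + ra * ra - rb * rb) / (2 * d)  # circle-circle intersection
        y = np.sqrt(max(ra * ra - x * x, 0.0))
        if i >= 3:                                 # keep the strip from folding back:
            if np.dot(P[i - 3] - a, n) > 0:        # place the new vertex opposite the
                y = -y                             # previously unfolded vertex
        P[i] = a + x * e + y * n
    return P

# Example: unroll a bent four-vertex strip into a flat pattern.
strip3d = np.array([[0, 0, 0], [1, 0, 0], [0.5, 1, 0.3], [1.5, 1, 0.6]])
print(unroll_strip(strip3d))
```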