Title: Redirected Walking in Overlapping Rooms
Authors: Mathieu Lutfallah, Christian Hirt, V. Gorobets, Manuel Gregor, A. Kunz
Venue: ACM SIGGRAPH 2023 Posters (2023-07-23)
DOI: https://doi.org/10.1145/3588028.3603672
Abstract: Walking through a virtual environment that is larger than the available physical space can lead to collisions with physical boundaries. Locomotion techniques such as Redirected Walking (RDW) and Overlapping Architecture (OA) aim to overcome this limitation, but their combination had yet to be investigated in large physical spaces with resets. In this work, a hybrid locomotion method combining RDW and OA was implemented, and a user study was conducted in which participants collected items in a virtual environment with multiple rooms. The study showed that the distance walked between resets increased substantially, demonstrating a clear advantage of combining OA and RDW.
Title: Utilizing LiDAR Data for 3D Sound Source Localization
Authors: Masahiko Goto, Yasuhiro Oikawa, Atsuto Inoue, Wataru Teraoka, Takahiro Sato, Y. Iwane, Masahito Kobayashi
Venue: ACM SIGGRAPH 2023 Posters (2023-07-23)
DOI: https://doi.org/10.1145/3588028.3603682
Abstract: This paper introduces a system for visualizing 3D sound pressure distribution that combines a minimum variance distortionless response (MVDR) beamformer with Light Detection and Ranging (LiDAR) to localize sound sources. Using LiDAR to capture 3D data, the system computes the time-averaged output power of the MVDR beamformer at a virtual source position for each point in the point cloud. The results are then superimposed onto the 3D data to estimate source locations, providing a more visually comprehensible display of the sound pressure distribution in 3D.
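The per-point evaluation described in the abstract — time-averaged MVDR output power at a virtual source position — can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: `steering_vector` and `mvdr_power` are hypothetical names, a free-field spherical-wave propagation model is assumed, and diagonal loading is added to the sample covariance for numerical stability.

```python
import numpy as np

def steering_vector(mic_pos, point, freq, c=343.0):
    """Free-field spherical-wave steering vector from a candidate source
    position to each microphone (assumed propagation model).

    mic_pos: (M, 3) microphone positions
    point:   (3,)   candidate source position (e.g. one LiDAR point)
    """
    d = np.linalg.norm(mic_pos - point, axis=1)      # (M,) distances
    return np.exp(-2j * np.pi * freq * d / c) / d    # phase delay + 1/r decay

def mvdr_power(snapshots, steering):
    """Time-averaged MVDR beamformer output power 1 / (a^H R^-1 a).

    snapshots: (M, T) complex microphone signals over T time frames
    steering:  (M,)   complex steering vector for the candidate position
    """
    M, T = snapshots.shape
    # Sample covariance with diagonal loading for numerical stability.
    R = snapshots @ snapshots.conj().T / T + 1e-6 * np.eye(M)
    a = steering
    return float(np.real(1.0 / (a.conj() @ np.linalg.solve(R, a))))
```

Evaluating `mvdr_power` at every point of the LiDAR point cloud and coloring the points by the result would yield the kind of superimposed 3D sound-pressure display the abstract describes.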
Title: AI-Assisted Avatar Fashion Show: Word-to-Clothing Texture Exploration and Motion Synthesis for Metaverse UGC
Authors: Kai Kohyama, Alexandre Berthault, Takuma Kato, Akihiko Shirai
Venue: ACM SIGGRAPH 2023 Posters (2023-07-23)
DOI: https://doi.org/10.1145/3588028.3603660
Title: Guided Training of NeRFs for Medical Volume Rendering
Authors: Kaloian Petkov
Venue: ACM SIGGRAPH 2023 Posters (2023-07-23)
DOI: https://doi.org/10.1145/3588028.3603657
Abstract: Neural Radiance Fields (NeRFs) trained on pre-rendered photorealistic images represent complex medical data at a fraction of the size, while interactive applications synthesize novel views directly from the neural networks. We demonstrate a practical implementation of NeRFs for high-resolution CT volume data, using differentiable rendering for training-view selection.
Title: Robust Color Correction for Preserving Spatial Variations within Photographs
Authors: D. S. Dhillon, Parisha Joshi, Jessica R. Baron, E. Patterson
Venue: ACM SIGGRAPH 2023 Posters (2023-07-23)
DOI: https://doi.org/10.1145/3588028.3603681
Abstract (Figure 1 caption): A reference color chart (left image) is commonly used for color correction, which is an ill-posed problem. The state-of-the-art root-polynomial regression method reduces CIE XYZ or linear-RGB color differences for the transformed reference blocks in the mean sense, and it improves significantly with increasing regression order, as CIE ΔE is seen to drop accordingly. However, it does not account for spatial variations and produces serious artifacts, as demonstrated here (center image). The proposed method improves color correction while preserving spatial variations, white-balancing appropriately, and not over-damping the luminance, as reported by Varghese et al. for their CIE ΔE-minimizing method.
Title: SegAnimeChara: Segmenting Anime Characters Generated by AI
Authors: A. Tseng, Wen-Fan Wang, Bing-Yu Chen
Venue: ACM SIGGRAPH 2023 Posters (2023-07-23)
DOI: https://doi.org/10.1145/3588028.3603685
Abstract: This work introduces SegAnimeChara, a novel system for transforming AI-generated anime images into game characters while retaining their unique features. Using volume-based body-pose segmentation, SegAnimeChara can efficiently segment body parts from generated images in a zero-shot manner, based on the OpenPose human skeleton. Furthermore, the system integrates a semantic segmentation pipeline driven by the text prompts of the existing Text2Image workflow, conserving the game character's unique outfit and reducing redundant duplicate text prompts for semantic segmentation.
Title: A Proposal of Acquiring and Analyzing Method for Distributed Litter on the Street using Smartphone Users as Passive Mobility Sensors
Authors: Hikaru Hagura, R. Yamaguchi, T. Yoshihisa, Shinji Shimojo, Yukiko Kawai
Venue: ACM SIGGRAPH 2023 Posters (2023-07-23)
DOI: https://doi.org/10.1145/3588028.3603684
Abstract: With the rise of environmental protection activities, smartphone-enabled cleaning activities to deter street littering are gaining attention. We propose a method to analyze images of litter on roads captured by a smartphone camera mounted on a bicycle, requiring no conscious effort from the user (Fig. 1). First, the user mounts the smartphone on a bicycle and starts the developed application, which captures video and extracts still images. The still images are then categorized using machine learning, and the type of trash is annotated in the images. Finally, to predict the distribution of trash, the influence of nearby environmental features, such as convenience stores and bars, is calculated using a machine learning model. This paper discusses the efficacy of our developed system for acquiring and analyzing litter on roads. As a first effort, we verify the accuracy of tagging PET bottles, cans, food trays, and masks using a learning model generated by Detectron2.
Title: DAncing body, Speaking Hands (DASH): Sign Dance Generation System with Deep Learning
Authors: Eunhee Kim, Taehwa Park, J. Moon, Wonsang You, Taegwan Ha, Kyung-Joong Kim
Venue: ACM SIGGRAPH 2023 Posters (2023-07-23)
DOI: https://doi.org/10.1145/3588028.3603687
Title: Deformable Neural Radiance Fields for Object Motion Blur Removal
Authors: Kazuhito Sato, Shugo Yamaguchi, Tsukasa Takeda, S. Morishima
Venue: ACM SIGGRAPH 2023 Posters (2023-07-23)
DOI: https://doi.org/10.1145/3588028.3603692
Abstract: In this paper, we present a novel approach to removing object motion blur in 3D scene renderings using deformable neural radiance fields. Our technique adapts the hyperspace representation to accommodate shape changes induced by object motion blur. Experiments on Blender-generated datasets demonstrate the effectiveness of our method in producing higher-quality images with reduced object motion blur artifacts.
Title: Reverse Projection: Real-Time Local Space Texture Mapping
Authors: Adrian Xuan Wei Lim, L. Ng, Conor Griffin, Nicholas Kryer, Faraz Baghernezhad
Venue: ACM SIGGRAPH 2023 Posters (2023-07-23)
DOI: https://doi.org/10.1145/3588028.3603653
Abstract: We present Reverse Projection, a novel projective texture mapping technique for painting a decal directly into the texture of a 3D object. Designed for games, the technique runs in real time. Because the projection is computed in local-space textures and is outward-looking, users on anything from low-end Android devices to high-end gaming desktops can enjoy personalizing their assets. We believe the proposed pipeline is a step toward improving the speed and versatility of model painting.
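The core idea of writing a decal into a texture via local-space positions can be sketched on the CPU in NumPy. This is an illustrative sketch under stated assumptions, not the poster's real-time GPU pipeline: `paint_decal` is a hypothetical helper, each texel's local-space surface position is assumed precomputed, and a simple planar projector with nearest-neighbor sampling stands in for the outward-looking projection the authors describe.

```python
import numpy as np

def paint_decal(texel_positions, texture, decal, origin, right, up, size):
    """Project a decal into a texture using local-space texel positions.

    texel_positions: (H, W, 3) local-space surface position of each texel
    texture:         (H, W, 3) float RGB texture, modified in place
    decal:           (h, w, 3) float RGB decal image
    origin, right, up: decal plane frame in the object's local space
    size:            decal extent along the `right` and `up` axes
    """
    rel = texel_positions - origin
    u = (rel @ right) / size + 0.5        # project onto the decal axes
    v = (rel @ up) / size + 0.5
    inside = (u >= 0) & (u < 1) & (v >= 0) & (v < 1)
    h, w = decal.shape[:2]
    ui = np.clip((u * w).astype(int), 0, w - 1)   # nearest-neighbor sample
    vi = np.clip((v * h).astype(int), 0, h - 1)
    texture[inside] = decal[vi[inside], ui[inside]]
    return texture
```

Because the mapping runs over texels rather than screen pixels, the painted result persists in the object's texture and needs no per-frame reprojection, which is one way to read the abstract's real-time claim.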