VR-Wizard: Towards an Emotion-Adaptive Experience in VR
Kunal Gupta, Yuewei Zhang, Yun Suen Pai, M. Billinghurst
SIGGRAPH Asia 2021 Posters (2021-12-14). DOI: 10.1145/3476124.3488657
Abstract: In this research, we investigate the impact of a real-time, biofeedback-based, emotion-adaptive Virtual Reality (VR) environment on immersiveness, game engagement, and flow state, using physiological signals such as Electroencephalogram (EEG), Electrodermal Activity (EDA), and Heart Rate Variability (HRV). To this end, we designed VR-Wizard, a personalized emotion-adaptive VR game akin to a Harry Potter experience, in which the objective is to collect items in a forbidden forest. Users first train the system through a calibration process. They then explore the forest while environmental factors adapt according to a "MagicMeter" indicating the user's real-time emotional state. The overall goal is to provide more personalized, immersive, and engaging emotional virtual experiences.

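The adaptation loop described above can be sketched as follows. This is a hypothetical stand-in using EDA and HRV only; the weights, thresholds, and preset names are illustrative assumptions, since VR-Wizard's calibration and mapping are not specified here.

```python
# Hypothetical biofeedback adaptation loop; all constants are assumptions.

def magic_meter(eda_norm, hrv_norm):
    """Combine normalized EDA (arousal raises it) and HRV (arousal lowers
    HRV, so it is inverted) into a single 0..1 arousal estimate."""
    score = 0.6 * eda_norm + 0.4 * (1.0 - hrv_norm)
    return min(1.0, max(0.0, score))

def adapt_environment(meter):
    """Map the meter reading to a coarse environment preset."""
    if meter < 0.33:
        return "calm"      # e.g. soft lighting, gentle ambience
    if meter < 0.66:
        return "neutral"
    return "intense"       # e.g. fog, faster creature encounters
```

In a real system the per-user calibration phase would supply the normalization ranges and could replace the fixed weights with learned ones.
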
VRTwitch: Enabling Micro-motions in VR with Radar Sensing
Ryo Hajika, Tamil Selvan Gunasekaran, Alaeddin Nassani, Yun Suen Pai, M. Billinghurst
SIGGRAPH Asia 2021 Posters (2021-12-14). DOI: 10.1145/3476124.3488650
Abstract: Because of technological limitations of VR tracking methods, micro-motions are difficult to incorporate in Virtual Reality (VR), whereas macro-motions are a popular interaction method. In this poster, we introduce VRTwitch, a forearm-mounted wearable device that senses micro hand motions. VRTwitch uses an array of reconfigurable, miniaturized radar sensors placed around the hand to capture subtle finger movements for gesture detection, towards enhanced interaction in VR space. As a demonstration, we created a simple interactive VR shooting game that requires precise finger motion to manipulate a virtual gun.

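A minimal sketch of the sensing idea, assuming the radar array yields per-sensor amplitude frames; the energy measure, threshold, and data layout are illustrative and not VRTwitch's actual pipeline.

```python
def motion_energy(frames):
    """Sum of squared frame-to-frame amplitude changes across the radar
    sensor array; frames is a list of per-sensor amplitude tuples."""
    energy = 0.0
    for prev, cur in zip(frames, frames[1:]):
        energy += sum((c - p) ** 2 for p, c in zip(prev, cur))
    return energy

def detect_twitch(frames, threshold=0.05):
    """Flag a micro-motion when the motion energy exceeds a threshold."""
    return motion_energy(frames) > threshold

still = [(0.10, 0.20, 0.30)] * 4                   # no finger movement
twitch = [(0.10, 0.20, 0.30), (0.10, 0.45, 0.30),
          (0.10, 0.20, 0.30), (0.10, 0.20, 0.30)]  # brief blip on sensor 2
```

A real gesture classifier would operate on richer radar features than raw energy, but the same change-over-time signal is what makes subtle finger motion detectable.
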
Interactive Manga Colorization with Fast Flat Coloring
Delong Ouyang, Ryosuke Furuta, Yugo Shimizu, Y. Taniguchi, Ryota Hinami, Shonosuke Ishiwatari
SIGGRAPH Asia 2021 Posters (2021-12-14). DOI: 10.1145/3476124.3488628
Abstract: This paper proposes an interactive, semi-automatic system for manga colorization. In our system, users colorize monochrome manga images interactively by scribbling the desired colors. The proposed method creates a high-quality colorized image by feeding the original monochrome image, after grayscale adjustment, together with the flat-colored image generated from the scribbles, into a colorization network. Experiments show that the colorized results of the proposed method are much better than those of existing methods.

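The flat-coloring stage can be approximated by propagating scribble colors across line-art regions. The BFS below is a simple stand-in for the paper's fast flat coloring, assuming a binary line mask and per-pixel scribble labels.

```python
from collections import deque

def flat_color(lines, scribbles):
    """Propagate scribble colors across a binary line-art grid with BFS.
    lines[y][x] == 1 marks a drawn line (a region boundary);
    scribbles maps (y, x) -> color label."""
    h, w = len(lines), len(lines[0])
    color = [[None] * w for _ in range(h)]
    q = deque()
    for (y, x), c in scribbles.items():
        color[y][x] = c
        q.append((y, x))
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w \
                    and lines[ny][nx] == 0 and color[ny][nx] is None:
                color[ny][nx] = color[y][x]     # inherit the nearest scribble
                q.append((ny, nx))
    return color

art = [[0, 0, 1, 0, 0],     # a vertical line splits two regions
       [0, 0, 1, 0, 0],
       [0, 0, 1, 0, 0]]
out = flat_color(art, {(0, 0): "red", (0, 4): "blue"})
```

The flat-colored result then accompanies the grayscale-adjusted manga page as input to the colorization network.
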
Emotion Guided Speech-Driven Facial Animation
Sewhan Chun, Daegeun Choe, Shindong Kang, Shounan An, Youngbak Jo, Insoo Oh
SIGGRAPH Asia 2021 Posters (2021-12-14). DOI: 10.1145/3476124.3488649
Abstract: Modern deep neural networks have enabled practical speech-driven facial animation, producing natural and precise 3D animation from speech data. Nevertheless, many existing works show weakness in drastic emotional expression and in the flexibility of the animation. In this work, we introduce emotion-guided speech-driven facial animation, which performs classification and regression on the speech data simultaneously to generate a controllable level of evident emotional expression in the facial animation. Extensive experiments indicate that our method generates more expressive facial animation, with controllable flexibility, than previous approaches.

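One way to picture the "controllable level" of emotion is to blend speech-driven viseme weights with emotion blendshape offsets scaled by a regressed intensity. The names and the linear blend below are illustrative assumptions, not the paper's network.

```python
def emotion_guided_blend(base_visemes, emotion_shapes, emotion_probs, intensity):
    """Add emotion blendshape offsets, weighted by the classifier's emotion
    probabilities and scaled by the regressed intensity, onto the
    speech-driven viseme weights."""
    offset = {}
    for emo, prob in emotion_probs.items():
        for shape, weight in emotion_shapes[emo].items():
            offset[shape] = offset.get(shape, 0.0) + prob * weight
    return {s: base_visemes.get(s, 0.0) + intensity * offset.get(s, 0.0)
            for s in set(base_visemes) | set(offset)}

base = {"jaw_open": 0.5}                          # from the speech model
shapes = {"happy": {"mouth_smile": 1.0}, "sad": {"brow_down": 0.8}}
weights = emotion_guided_blend(base, shapes, {"happy": 0.75, "sad": 0.25}, 0.5)
```

Setting the intensity to zero recovers the plain speech-driven animation, which is the kind of flexibility the abstract refers to.
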
Rendering of Synthetic Underwater Images Towards Restoration
Chaitra Desai, R. Tabib, Saikumar Reddy, Ujwala Patil, U. Mudenagudi
SIGGRAPH Asia 2021 Posters (2021-12-14). DOI: 10.1145/3476124.3488637
Abstract: In this paper, we propose to render synthetic underwater images using a revised image formation model, as a step towards modeling restoration. Underwater images suffer from low contrast, color cast, and haze due to the dynamically varying properties of water, floating particles, and submerged sediments. As a result, light travelling from the surface to the object is subject to direct scattering, backscattering, and forward scattering. We model this varying nature of light to render synthetic underwater images based on depth information and the inherent optical properties of Jerlov water types.

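The direct and backscatter terms follow the standard underwater image formation equation. The sketch below omits forward scattering for brevity, and the per-channel attenuation coefficients are placeholders for values that would come from Jerlov water-type tables.

```python
import math

def render_underwater(j_rgb, depth, beta_rgb, back_rgb):
    """Per-pixel formation: direct transmission plus backscatter,
    I_c = J_c * exp(-beta_c * d) + B_c * (1 - exp(-beta_c * d))."""
    out = []
    for j, beta, b in zip(j_rgb, beta_rgb, back_rgb):
        t = math.exp(-beta * depth)          # channel transmission
        out.append(j * t + b * (1.0 - t))
    return out

# A white patch 10 m away: red attenuates fastest in water, so the
# rendered color shifts towards blue-green. Coefficients are made up.
pixel = render_underwater([1.0, 1.0, 1.0], depth=10.0,
                          beta_rgb=[0.6, 0.1, 0.05],
                          back_rgb=[0.0, 0.3, 0.5])
```

Rendering with the same model used for restoration is what makes the synthetic pairs useful as training data.
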
Patching Non-Uniform Extraordinary Points with Sharp Features
Yifei Feng, Xin Li, Chunzhi Yuan, L. Shen
SIGGRAPH Asia 2021 Posters (2021-12-14). DOI: 10.1145/3476124.3488626
Abstract: Extending the non-uniform rational B-spline (NURBS) representation to arbitrary topology is one of the most important steps in defining geometry suitable for isogeometric analysis (IGA). The approach needs to be NURBS-compatible and must handle non-uniform knot intervals. To achieve this goal, we present a novel patching solution that defines one Bézier patch for each control-grid face with non-zero knot intervals. The construction reproduces the bi-cubic NURBS on regular faces and defines bi-quintic Bézier patches on irregular faces. The method also supports non-uniform sharp features passing through extraordinary points. Experimental results demonstrate that the new surfaces improve the surface quality under non-uniform parameterization.

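For reference, a tensor-product Bézier patch (such as the bi-quintic patches assigned to irregular faces) can be evaluated with de Casteljau's algorithm. The scalar-height grid below is a simplification of the 3-D control points used in practice.

```python
def de_casteljau(points, t):
    """Evaluate a 1-D Bezier curve at t by repeated linear interpolation."""
    points = list(points)
    while len(points) > 1:
        points = [(1 - t) * a + t * b for a, b in zip(points, points[1:])]
    return points[0]

def bezier_patch(grid, u, v):
    """Tensor-product evaluation: collapse each row at u, then collapse
    the resulting column at v."""
    return de_casteljau([de_casteljau(row, u) for row in grid], v)

# A 6x6 control grid defines a bi-quintic patch; patch corners
# interpolate the corner control values.
grid = [[float(i + j) for j in range(6)] for i in range(6)]
```

Stitching such patches with matched boundaries around an extraordinary point is the hard part the paper addresses; the evaluation itself is standard.
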
Monte Carlo Denoising with a Sparse Auxiliary Feature Encoder
Siyuan Fu, Yifan Lu, Xiao Hua Zhang, Ning Xie
SIGGRAPH Asia 2021 Posters (2021-12-14). DOI: 10.1145/3476124.3488631
Abstract: Fast denoising of Monte Carlo path-traced renderings is highly desirable. Existing learning-based real-time methods concatenate auxiliary buffers (i.e., albedo, normal, and depth) with the noisy colors as input. However, such structures cannot effectively extract the rich information in the auxiliary buffers. In this work, we augment a U-shaped kernel-prediction network with a sparse auxiliary feature encoder. Sparse convolutions can focus solely on regions whose inputs have changed and reuse the historical features in other regions. With sparse convolutions, the computational complexity of the auxiliary feature encoder is reduced by 50-70% without an apparent performance drop.

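The sparsity idea (re-encode only regions whose auxiliary buffers changed, reusing history features elsewhere) can be sketched with a per-tile change mask. The tile size and threshold are assumptions, and the paper's sparse convolutions operate inside the network rather than as a preprocessing pass.

```python
def changed_tiles(prev_aux, cur_aux, tile=2, eps=1e-3):
    """Return the set of (tile_y, tile_x) indices whose auxiliary values
    changed since the last frame; only those tiles would be re-encoded,
    features elsewhere would be reused from the feature history."""
    dirty = set()
    for y, (prev_row, cur_row) in enumerate(zip(prev_aux, cur_aux)):
        for x, (p, c) in enumerate(zip(prev_row, cur_row)):
            if abs(c - p) > eps:
                dirty.add((y // tile, x // tile))
    return dirty

prev = [[0.0] * 4 for _ in range(4)]   # e.g. last frame's albedo buffer
cur = [row[:] for row in prev]
cur[3][3] = 1.0                        # one pixel changed this frame
dirty = changed_tiles(prev, cur)
```

Because albedo, normal, and depth are mostly static between frames, the dirty set is usually small, which is where the 50-70% cost reduction comes from.
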
Class Balanced Sampling for the Training in GANs
Sanghun Kim, Seungkyu Lee
SIGGRAPH Asia 2021 Posters (2021-12-14). DOI: 10.1145/3476124.3488634
Abstract: Recently, top-k fake-sample selection was introduced to provide better gradients for training Generative Adversarial Networks (GANs). Since the method does not guarantee class balance among the selected samples in class-conditional GANs, certain classes can be completely ignored during training. In this work, we propose sample selection based on class-standardized critic scores, which enables class-balanced sample selection. Our method achieves improved FID and Intra-FID scores compared to the prior top-k selection.

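The class-standardized selection can be sketched as z-scoring critic scores within each class before taking the global top-k, so classes with low raw scores still surface their relatively best samples. This is an illustration of the idea; the paper's exact selection rule may differ.

```python
def standardized_topk(scores, labels, k):
    """Rank critic scores after z-scoring within each class, then keep
    the global top-k; weak classes keep their relatively best samples
    instead of being dropped entirely."""
    by_class = {}
    for s, c in zip(scores, labels):
        by_class.setdefault(c, []).append(s)
    stats = {}
    for c, vals in by_class.items():
        mean = sum(vals) / len(vals)
        std = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5 or 1.0
        stats[c] = (mean, std)
    z = [(s - stats[c][0]) / stats[c][1] for s, c in zip(scores, labels)]
    return sorted(range(len(scores)), key=lambda i: z[i], reverse=True)[:k]

# Class 0 dominates in raw score, yet one class-1 sample is selected.
picked = standardized_topk([0.9, 0.8, 0.3, 0.1], [0, 0, 1, 1], k=2)
```

Plain top-k on the raw scores would pick indices 0 and 1, both from class 0, which is exactly the imbalance the standardization avoids.
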