{"title":"Learning English to Chinese Character: Calligraphic Art Production based on Transformer","authors":"Yifan Jin, Yi Zhang, Xi Yang","doi":"10.1145/3476124.3488642","DOIUrl":"https://doi.org/10.1145/3476124.3488642","url":null,"abstract":"We propose a transformer-based model to learn Square Word Calligraphy to write English words in the format of a square that resembles Chinese characters. To achieve this task, we compose a dataset by collecting the calligraphic characters created by artist Xu Bing, and labeling the position of each alphabet in the characters. Taking the input of English alphabets, we introduce a modified transformer-based model to learn the position relationship between each alphabet and predict the transformation parameters for each part to reassemble them as a Chinese character. We show the comparison results between our predicted characters and corresponding characters created by the artist to indicate our proposed model has a good performance on this task, and we also created new characters to show the “creativity” of our model.","PeriodicalId":199099,"journal":{"name":"SIGGRAPH Asia 2021 Posters","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114473963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-Time Prediction-Driven Dynamics Simulation to Mitigate Frame Time Variation","authors":"Mackinnon Buck, C. Eckhardt","doi":"10.1145/3476124.3488633","DOIUrl":"https://doi.org/10.1145/3476124.3488633","url":null,"abstract":"This work introduces a prediction-driven real-time dynamics method that uses a graph-based state buffer to minimize the cost of mispredictions. Our technique reduces the average time needed for dynamics computation on the main thread by running the solver pipeline on a separate thread, enabling interactive multimedia applications to increase the computational budget for graphics at no cost perceptible to the end user.","PeriodicalId":199099,"journal":{"name":"SIGGRAPH Asia 2021 Posters","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121776460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Associating Real Objects with Virtual Models for VR Interaction","authors":"Wan-Ting Hsu, I-Chen Lin","doi":"10.1145/3476124.3488654","DOIUrl":"https://doi.org/10.1145/3476124.3488654","url":null,"abstract":"In this paper, we present a prototype system that is capable of associating real objects with virtual models and turning the table top into imaginary virtual scenes. A user can interact with these objects when she or he is immersed in the virtual environment. To accomplish this goal, a vision-based system is developed to online recognize and track the real objects in the scene. The corresponding virtual models are retrieved based on their tactile shapes. They are displayed and moved on a head-mounted display (HMD) according to tracked object poses. The experiment demonstrates that our prototype system can find reasonable association between real and virtual objects, and users are interested in the novel interaction.","PeriodicalId":199099,"journal":{"name":"SIGGRAPH Asia 2021 Posters","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120846188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic and Occlusion-Robust Light Field Illumination","authors":"M. Yasui, Yoshihiro Watanabe, M. Ishikawa","doi":"10.1145/3476124.3488624","DOIUrl":"https://doi.org/10.1145/3476124.3488624","url":null,"abstract":"There is high demand for dynamic and occlusion-robust illumination to improve lighting quality for portrait photography and assembly. Multiple projectors are required for the light field to achieve such illumination. This paper proposes a dynamic and occlusion-robust illumination technique by employing a light field formed by a lens array instead of using multiple projectors. Dynamic illumination is obtained by introducing a feedback system that follows the motion of the object. The designed lens array incorporates a wide viewing angle, making the system robust against occlusion. The proposed system was evaluated through projections onto a dynamic object.","PeriodicalId":199099,"journal":{"name":"SIGGRAPH Asia 2021 Posters","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123925946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Occlusion Robust Part-aware Object Classification through Part Attention and Redundant Features Suppression","authors":"Sohee Kim, Seungkyu Lee","doi":"10.1145/3476124.3488647","DOIUrl":"https://doi.org/10.1145/3476124.3488647","url":null,"abstract":"In recent studies, object classification with deep convolutional neural networks has shown poor generalization with occluded objects due to the large variation of occlusion situations. We propose a part-aware deep learning approach for occlusion robust object classification. To demonstrate the robustness of the method to unseen occlusion, we train our network without occluded object samples in training and test it with diverse occlusion samples. Proposed method shows improved classification performance on CIFAR10, STL10, and vehicles from PASCAL3D+ datasets.","PeriodicalId":199099,"journal":{"name":"SIGGRAPH Asia 2021 Posters","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121970191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TIEboard: Developing Kids Geometric Thinking through Tangible User Interface","authors":"Arooj Zaidi, Junichi Yamaoka","doi":"10.1145/3476124.3488623","DOIUrl":"https://doi.org/10.1145/3476124.3488623","url":null,"abstract":"This research is based on the concept of computing being embedded within the tangible product that acts as both input and output device eliminating the need of traditional computers for any feedback or guidance.The idea is inspired from traditional geoboard that focuses on the age group from 5 to 8 years old. The main goal is to integrate technology seamlessly into physical manipulative and while using this product kids will be able to make complex shapes that offer kids with memorable learning experience.","PeriodicalId":199099,"journal":{"name":"SIGGRAPH Asia 2021 Posters","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122967115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"eyemR-Talk: Using Speech to Visualise Shared MR Gaze Cues","authors":"Allison Jing, Brandon J. Matthews, K. May, Thomas J. Clarke, Gun A. Lee, M. Billinghurst","doi":"10.1145/3476124.3488618","DOIUrl":"https://doi.org/10.1145/3476124.3488618","url":null,"abstract":"In this poster we present eyemR-Talk, a Mixed Reality (MR) collaboration system that uses speech input to trigger shared gaze visualisations between remote users. The system uses 360° panoramic video to support collaboration between a local user in the real world in an Augmented Reality (AR) view and a remote collaborator in Virtual Reality (VR). Using specific speech phrases to turn on virtual gaze visualisations, the system enables contextual speech-gaze interaction between collaborators. The overall benefit is to achieve more natural gaze awareness, leading to better communication and more effective collaboration.","PeriodicalId":199099,"journal":{"name":"SIGGRAPH Asia 2021 Posters","volume":"129 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127417562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visibility Enhancement for Transmissive Image using Synchronized Side-by-side Projector–Camera System","authors":"Kazuto Ogiwara, Hiroyuki Kubo","doi":"10.1145/3476124.3488622","DOIUrl":"https://doi.org/10.1145/3476124.3488622","url":null,"abstract":"Extracting the direct light component from light transport helps to enhance the visibility of a scene. In this paper, we describe a method to improve the visibility of the target object by capturing transmissive rays without scattering rays using a synchronized projector–camera system. A rolling shutter camera and a laser raster scanning projector are placed side-by-side, and both epipolar planes are optically aligned on a screen plane which is place between the projector and camera. This paper demonstrates that our method can visualize an internal object inside diluted milk.","PeriodicalId":199099,"journal":{"name":"SIGGRAPH Asia 2021 Posters","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121193970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Self-Stylized Neural Painter","authors":"Qian Wang, Cai Guo, Hongning Dai, Ping Li","doi":"10.1145/3476124.3488617","DOIUrl":"https://doi.org/10.1145/3476124.3488617","url":null,"abstract":"This work introduces Self-Stylized Neural Painter (SSNP) creating stylized artworks in a stroke-by-stroke manner. SSNP consists of digit artist, canvas, style-stroke generator (SSG). By using SSG to generate style strokes, SSNP creates different styles paintings based on the given images. We design SSG as a three-player game based on a generative adversarial network to produce pure-color strokes that are crucial for mimicking the physical strokes. Furthermore, the digital artist adjusts parameters of strokes (shape, size, transparency, and color) to reconstruct as much detailed content of the reference image as possible to improve the fidelity.","PeriodicalId":199099,"journal":{"name":"SIGGRAPH Asia 2021 Posters","volume":"132 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124551001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HWAuth: Handwriting-Based Socially-Inclusive Authentication","authors":"Joon Kuy Han, Byungkon Kang, Dennis Wong","doi":"10.1145/3476124.3488638","DOIUrl":"https://doi.org/10.1145/3476124.3488638","url":null,"abstract":"Small, local group of users who share private resources (e.g., families, university labs, business departments) usually have limited usable authentication needs. For these entities, existing authentication solutions either require excessive personal information (e.g., biometrics), do not distinguish each user (e.g., shared passwords), or lack security measures when the access key is compromised (e.g., physical keys). We propose an alternative solution by designing HWAuth: an inclusive group authentication system with a shared text that is uniquely identifiable for each user. Each user shares the same textual password, but individual handwriting styles of the text are used to distinguish each user. We evaluated the usability and security of our design through a user study with 30 participants. Our results suggest that (1) users who enter the same shared passwords are discernible from one another, and (2) that users were able to consistently login using HWAuth.","PeriodicalId":199099,"journal":{"name":"SIGGRAPH Asia 2021 Posters","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134045590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}