SIGGRAPH Asia 2021 Posters: Latest Publications

Learning English to Chinese Character: Calligraphic Art Production based on Transformer
SIGGRAPH Asia 2021 Posters Pub Date : 2021-12-14 DOI: 10.1145/3476124.3488642
Yifan Jin, Yi Zhang, Xi Yang
{"title":"Learning English to Chinese Character: Calligraphic Art Production based on Transformer","authors":"Yifan Jin, Yi Zhang, Xi Yang","doi":"10.1145/3476124.3488642","DOIUrl":"https://doi.org/10.1145/3476124.3488642","url":null,"abstract":"We propose a transformer-based model to learn Square Word Calligraphy to write English words in the format of a square that resembles Chinese characters. To achieve this task, we compose a dataset by collecting the calligraphic characters created by artist Xu Bing, and labeling the position of each alphabet in the characters. Taking the input of English alphabets, we introduce a modified transformer-based model to learn the position relationship between each alphabet and predict the transformation parameters for each part to reassemble them as a Chinese character. We show the comparison results between our predicted characters and corresponding characters created by the artist to indicate our proposed model has a good performance on this task, and we also created new characters to show the “creativity” of our model.","PeriodicalId":199099,"journal":{"name":"SIGGRAPH Asia 2021 Posters","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114473963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
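The abstract above describes a transformer that takes the letters of an English word and predicts, per letter, the transformation parameters used to assemble the letters into a square, character-like layout. The sketch below (PyTorch) shows one minimal way such a model could be structured; the class name, the five-parameter placement output (translation, scale, rotation), and all layer sizes are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: transformer encoder over letter tokens, predicting
# per-letter placement parameters (tx, ty, sx, sy, theta). Not the paper's code.
import torch
import torch.nn as nn

class LetterPlacementTransformer(nn.Module):
    def __init__(self, vocab_size=26, d_model=128, nhead=4, num_layers=3, max_len=12):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)          # one token per letter a-z
        self.pos = nn.Parameter(torch.zeros(max_len, d_model))  # learned positional encoding
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 5)                       # (tx, ty, sx, sy, theta) per letter

    def forward(self, letters, pad_mask=None):
        # letters: (batch, seq_len) integer indices of the word's letters
        x = self.embed(letters) + self.pos[: letters.size(1)]
        x = self.encoder(x, src_key_padding_mask=pad_mask)
        return self.head(x)                                     # (batch, seq_len, 5)

# Usage: predict placements for the word "art" (letter indices 0-25 for a-z).
model = LetterPlacementTransformer()
word = torch.tensor([[0, 17, 19]])       # 'a', 'r', 't'
params = model(word)                     # placement parameters for each letter
print(params.shape)                      # torch.Size([1, 3, 5])
```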
Real-Time Prediction-Driven Dynamics Simulation to Mitigate Frame Time Variation
SIGGRAPH Asia 2021 Posters Pub Date : 2021-12-14 DOI: 10.1145/3476124.3488633
Mackinnon Buck, C. Eckhardt
{"title":"Real-Time Prediction-Driven Dynamics Simulation to Mitigate Frame Time Variation","authors":"Mackinnon Buck, C. Eckhardt","doi":"10.1145/3476124.3488633","DOIUrl":"https://doi.org/10.1145/3476124.3488633","url":null,"abstract":"This work introduces a prediction-driven real-time dynamics method that uses a graph-based state buffer to minimize the cost of mispredictions. Our technique reduces the average time needed for dynamics computation on the main thread by running the solver pipeline on a separate thread, enabling interactive multimedia applications to increase the computational budget for graphics at no cost perceptible to the end user.","PeriodicalId":199099,"journal":{"name":"SIGGRAPH Asia 2021 Posters","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121776460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
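As a rough illustration of moving the solver off the main thread and speculating on the next input, the sketch below runs a toy one-dimensional "solver" on a worker thread and falls back to recomputing on the main thread only when the predicted input turns out to be wrong. The single-slot queues and placeholder solver stand in for the paper's graph-based state buffer and solver pipeline; they are assumptions, not the authors' design.

```python
# Hypothetical sketch: speculative dynamics step computed on a worker thread,
# with a main-thread fallback on misprediction. Not the paper's implementation.
import threading
import queue

def solve_step(state, user_input, dt=1.0 / 60.0):
    # Placeholder "solver": integrate a 1D velocity under the given input force.
    pos, vel = state
    vel += user_input * dt
    pos += vel * dt
    return (pos, vel)

def solver_worker(jobs, results):
    while True:
        job = jobs.get()
        if job is None:                   # shutdown signal
            break
        state, predicted_input = job
        results.put((predicted_input, solve_step(state, predicted_input)))

jobs, results = queue.Queue(maxsize=1), queue.Queue(maxsize=1)
worker = threading.Thread(target=solver_worker, args=(jobs, results), daemon=True)
worker.start()

state, predicted = (0.0, 0.0), 1.0
for frame in range(3):
    jobs.put((state, predicted))          # speculate the next step with a predicted input
    actual = 1.0                          # input actually observed this frame
    used_input, next_state = results.get()
    if used_input != actual:              # misprediction: recompute on the main thread
        next_state = solve_step(state, actual)
    state = next_state
    print(frame, state)
jobs.put(None)
```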
Associating Real Objects with Virtual Models for VR Interaction
SIGGRAPH Asia 2021 Posters Pub Date : 2021-12-14 DOI: 10.1145/3476124.3488654
Wan-Ting Hsu, I-Chen Lin
{"title":"Associating Real Objects with Virtual Models for VR Interaction","authors":"Wan-Ting Hsu, I-Chen Lin","doi":"10.1145/3476124.3488654","DOIUrl":"https://doi.org/10.1145/3476124.3488654","url":null,"abstract":"In this paper, we present a prototype system that is capable of associating real objects with virtual models and turning the table top into imaginary virtual scenes. A user can interact with these objects when she or he is immersed in the virtual environment. To accomplish this goal, a vision-based system is developed to online recognize and track the real objects in the scene. The corresponding virtual models are retrieved based on their tactile shapes. They are displayed and moved on a head-mounted display (HMD) according to tracked object poses. The experiment demonstrates that our prototype system can find reasonable association between real and virtual objects, and users are interested in the novel interaction.","PeriodicalId":199099,"journal":{"name":"SIGGRAPH Asia 2021 Posters","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120846188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
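One plausible reading of "retrieved based on their tactile shapes" is a nearest-neighbour lookup over simple shape descriptors, as sketched below. The three-value descriptor (width, depth, height), the model catalogue, and the function name are hypothetical; the paper does not specify this particular matching scheme.

```python
# Hypothetical sketch: associate a tracked real object with the virtual model
# whose shape descriptor is closest. Descriptors and catalogue are illustrative.
import numpy as np

virtual_models = {                      # descriptor: (width, depth, height) in cm
    "teapot": np.array([18.0, 12.0, 14.0]),
    "castle_tower": np.array([8.0, 8.0, 12.0]),
    "treasure_chest": np.array([20.0, 12.0, 10.0]),
}

def associate(real_object_descriptor):
    """Return the virtual model whose shape is closest to the tracked real object."""
    return min(virtual_models,
               key=lambda name: np.linalg.norm(virtual_models[name] - real_object_descriptor))

# A tracked mug roughly 9 x 9 x 11 cm maps to the most similar virtual model.
print(associate(np.array([9.0, 9.0, 11.0])))   # -> 'castle_tower'
```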
Dynamic and Occlusion-Robust Light Field Illumination
SIGGRAPH Asia 2021 Posters Pub Date : 2021-12-14 DOI: 10.1145/3476124.3488624
M. Yasui, Yoshihiro Watanabe, M. Ishikawa
{"title":"Dynamic and Occlusion-Robust Light Field Illumination","authors":"M. Yasui, Yoshihiro Watanabe, M. Ishikawa","doi":"10.1145/3476124.3488624","DOIUrl":"https://doi.org/10.1145/3476124.3488624","url":null,"abstract":"There is high demand for dynamic and occlusion-robust illumination to improve lighting quality for portrait photography and assembly. Multiple projectors are required for the light field to achieve such illumination. This paper proposes a dynamic and occlusion-robust illumination technique by employing a light field formed by a lens array instead of using multiple projectors. Dynamic illumination is obtained by introducing a feedback system that follows the motion of the object. The designed lens array incorporates a wide viewing angle, making the system robust against occlusion. The proposed system was evaluated through projections onto a dynamic object.","PeriodicalId":199099,"journal":{"name":"SIGGRAPH Asia 2021 Posters","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123925946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Occlusion Robust Part-aware Object Classification through Part Attention and Redundant Features Suppression
SIGGRAPH Asia 2021 Posters Pub Date : 2021-12-14 DOI: 10.1145/3476124.3488647
Sohee Kim, Seungkyu Lee
{"title":"Occlusion Robust Part-aware Object Classification through Part Attention and Redundant Features Suppression","authors":"Sohee Kim, Seungkyu Lee","doi":"10.1145/3476124.3488647","DOIUrl":"https://doi.org/10.1145/3476124.3488647","url":null,"abstract":"In recent studies, object classification with deep convolutional neural networks has shown poor generalization with occluded objects due to the large variation of occlusion situations. We propose a part-aware deep learning approach for occlusion robust object classification. To demonstrate the robustness of the method to unseen occlusion, we train our network without occluded object samples in training and test it with diverse occlusion samples. Proposed method shows improved classification performance on CIFAR10, STL10, and vehicles from PASCAL3D+ datasets.","PeriodicalId":199099,"journal":{"name":"SIGGRAPH Asia 2021 Posters","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121970191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
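A common way to realise the kind of part attention the abstract mentions is to score each spatial cell of a CNN feature map and pool the cells by those scores, so that occluded parts can be down-weighted before classification. The sketch below (PyTorch) shows only that generic pattern; the tiny backbone, the 1x1 attention head, and the 10-class output are illustrative assumptions, and the redundant-feature-suppression component is not modelled.

```python
# Hypothetical sketch: attention-weighted pooling over spatial "part" features.
import torch
import torch.nn as nn

class PartAttentionClassifier(nn.Module):
    def __init__(self, in_ch=3, feat_ch=64, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.attn = nn.Conv2d(feat_ch, 1, 1)        # one attention score per spatial cell
        self.fc = nn.Linear(feat_ch, num_classes)

    def forward(self, x):
        f = self.backbone(x)                                   # (B, C, H, W) part features
        w = torch.softmax(self.attn(f).flatten(2), dim=-1)     # (B, 1, H*W) part weights
        pooled = (f.flatten(2) * w).sum(dim=-1)                # (B, C) attention-weighted pooling
        return self.fc(pooled)

logits = PartAttentionClassifier()(torch.randn(2, 3, 32, 32))
print(logits.shape)   # torch.Size([2, 10])
```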
TIEboard: Developing Kids Geometric Thinking through Tangible User Interface
SIGGRAPH Asia 2021 Posters Pub Date : 2021-12-14 DOI: 10.1145/3476124.3488623
Arooj Zaidi, Junichi Yamaoka
{"title":"TIEboard: Developing Kids Geometric Thinking through Tangible User Interface","authors":"Arooj Zaidi, Junichi Yamaoka","doi":"10.1145/3476124.3488623","DOIUrl":"https://doi.org/10.1145/3476124.3488623","url":null,"abstract":"This research is based on the concept of computing being embedded within the tangible product that acts as both input and output device eliminating the need of traditional computers for any feedback or guidance.The idea is inspired from traditional geoboard that focuses on the age group from 5 to 8 years old. The main goal is to integrate technology seamlessly into physical manipulative and while using this product kids will be able to make complex shapes that offer kids with memorable learning experience.","PeriodicalId":199099,"journal":{"name":"SIGGRAPH Asia 2021 Posters","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122967115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
eyemR-Talk: Using Speech to Visualise Shared MR Gaze Cues
SIGGRAPH Asia 2021 Posters Pub Date : 2021-12-14 DOI: 10.1145/3476124.3488618
Allison Jing, Brandon J. Matthews, K. May, Thomas J. Clarke, Gun A. Lee, M. Billinghurst
{"title":"eyemR-Talk: Using Speech to Visualise Shared MR Gaze Cues","authors":"Allison Jing, Brandon J. Matthews, K. May, Thomas J. Clarke, Gun A. Lee, M. Billinghurst","doi":"10.1145/3476124.3488618","DOIUrl":"https://doi.org/10.1145/3476124.3488618","url":null,"abstract":"In this poster we present eyemR-Talk, a Mixed Reality (MR) collaboration system that uses speech input to trigger shared gaze visualisations between remote users. The system uses 360° panoramic video to support collaboration between a local user in the real world in an Augmented Reality (AR) view and a remote collaborator in Virtual Reality (VR). Using specific speech phrases to turn on virtual gaze visualisations, the system enables contextual speech-gaze interaction between collaborators. The overall benefit is to achieve more natural gaze awareness, leading to better communication and more effective collaboration.","PeriodicalId":199099,"journal":{"name":"SIGGRAPH Asia 2021 Posters","volume":"129 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127417562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
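The trigger mechanism described, specific speech phrases switching the shared gaze visualisation on or off, can be illustrated with a few lines of state-update logic. The phrase lists and function name below are assumptions for illustration, not the phrases or API the system actually uses.

```python
# Hypothetical sketch: recognised speech is scanned for trigger phrases that
# toggle the shared gaze visualisation. Phrases are illustrative only.
GAZE_ON_PHRASES = {"look here", "show my gaze"}
GAZE_OFF_PHRASES = {"hide my gaze"}

def update_gaze_visualisation(transcript: str, gaze_visible: bool) -> bool:
    """Return the new visibility state after processing one speech transcript."""
    text = transcript.lower()
    if any(p in text for p in GAZE_ON_PHRASES):
        return True
    if any(p in text for p in GAZE_OFF_PHRASES):
        return False
    return gaze_visible   # no trigger phrase: keep the current state

state = False
for utterance in ["Can you look here please", "I will place it", "Hide my gaze now"]:
    state = update_gaze_visualisation(utterance, state)
    print(utterance, "->", state)
```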
Visibility Enhancement for Transmissive Image using Synchronized Side-by-side Projector–Camera System
SIGGRAPH Asia 2021 Posters Pub Date : 2021-12-14 DOI: 10.1145/3476124.3488622
Kazuto Ogiwara, Hiroyuki Kubo
{"title":"Visibility Enhancement for Transmissive Image using Synchronized Side-by-side Projector–Camera System","authors":"Kazuto Ogiwara, Hiroyuki Kubo","doi":"10.1145/3476124.3488622","DOIUrl":"https://doi.org/10.1145/3476124.3488622","url":null,"abstract":"Extracting the direct light component from light transport helps to enhance the visibility of a scene. In this paper, we describe a method to improve the visibility of the target object by capturing transmissive rays without scattering rays using a synchronized projector–camera system. A rolling shutter camera and a laser raster scanning projector are placed side-by-side, and both epipolar planes are optically aligned on a screen plane which is place between the projector and camera. This paper demonstrates that our method can visualize an internal object inside diluted milk.","PeriodicalId":199099,"journal":{"name":"SIGGRAPH Asia 2021 Posters","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121193970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
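The principle behind the epipolar-aligned capture can be illustrated with a toy light-transport matrix: light that stays on the projected row (the transmissive, direct component) survives when the camera exposes only the currently lit row, while row-crossing scattered light is rejected. The matrix below is synthetic and purely illustrative; it is not data from the paper.

```python
# Hypothetical sketch: same-row (direct) vs. row-crossing (scattered) transport.
import numpy as np

rows = 8
rng = np.random.default_rng(1)
direct = np.diag(rng.uniform(0.5, 1.0, rows))       # same-row transmissive component
scatter = 0.05 * rng.uniform(size=(rows, rows))     # row-crossing scattering
transport = direct + scatter                        # full light transport (projector row -> camera row)

conventional = transport.sum(axis=1)                # all projector rows lit at once
synchronized = np.diag(transport)                   # expose only the currently lit row

print("conventional:", np.round(conventional, 2))
print("synchronized:", np.round(synchronized, 2))   # ~direct component only
```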
Self-Stylized Neural Painter
SIGGRAPH Asia 2021 Posters Pub Date : 2021-12-14 DOI: 10.1145/3476124.3488617
Qian Wang, Cai Guo, Hongning Dai, Ping Li
{"title":"Self-Stylized Neural Painter","authors":"Qian Wang, Cai Guo, Hongning Dai, Ping Li","doi":"10.1145/3476124.3488617","DOIUrl":"https://doi.org/10.1145/3476124.3488617","url":null,"abstract":"This work introduces Self-Stylized Neural Painter (SSNP) creating stylized artworks in a stroke-by-stroke manner. SSNP consists of digit artist, canvas, style-stroke generator (SSG). By using SSG to generate style strokes, SSNP creates different styles paintings based on the given images. We design SSG as a three-player game based on a generative adversarial network to produce pure-color strokes that are crucial for mimicking the physical strokes. Furthermore, the digital artist adjusts parameters of strokes (shape, size, transparency, and color) to reconstruct as much detailed content of the reference image as possible to improve the fidelity.","PeriodicalId":199099,"journal":{"name":"SIGGRAPH Asia 2021 Posters","volume":"132 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124551001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
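To make the stroke-by-stroke idea concrete, the sketch below renders simple rectangular strokes from a (position, size, transparency, color) parameter vector and greedily keeps strokes that reduce the L2 error against a reference image. The rectangular stroke model and random search are crude stand-ins for the paper's GAN-based style-stroke generator and digital artist, used only for illustration.

```python
# Hypothetical sketch: greedy stroke-by-stroke reconstruction of a reference image.
import numpy as np

def render_stroke(canvas, stroke):
    # stroke = (x, y, size, alpha, r, g, b), all in [0, 1]; paints a soft rectangle.
    x, y, size, alpha, r, g, b = stroke
    h, w, _ = canvas.shape
    x0, y0 = int(x * w), int(y * h)
    x1, y1 = min(w, x0 + max(1, int(size * w))), min(h, y0 + max(1, int(size * h)))
    out = canvas.copy()
    out[y0:y1, x0:x1] = (1 - alpha) * out[y0:y1, x0:x1] + alpha * np.array([r, g, b])
    return out

def paint(reference, n_strokes=50, candidates=20, rng=np.random.default_rng(0)):
    canvas = np.ones_like(reference)                     # start from a white canvas
    for _ in range(n_strokes):
        best, best_err = canvas, ((canvas - reference) ** 2).mean()
        for _ in range(candidates):                      # random-search one stroke
            trial = render_stroke(canvas, rng.uniform(0, 1, size=7))
            err = ((trial - reference) ** 2).mean()
            if err < best_err:
                best, best_err = trial, err
        canvas = best
    return canvas

reference = np.zeros((64, 64, 3)); reference[16:48, 16:48] = [0.8, 0.2, 0.2]
print(((paint(reference) - reference) ** 2).mean())      # reconstruction error after 50 strokes
```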
HWAuth: Handwriting-Based Socially-Inclusive Authentication
SIGGRAPH Asia 2021 Posters Pub Date : 2021-12-14 DOI: 10.1145/3476124.3488638
Joon Kuy Han, Byungkon Kang, Dennis Wong
{"title":"HWAuth: Handwriting-Based Socially-Inclusive Authentication","authors":"Joon Kuy Han, Byungkon Kang, Dennis Wong","doi":"10.1145/3476124.3488638","DOIUrl":"https://doi.org/10.1145/3476124.3488638","url":null,"abstract":"Small, local group of users who share private resources (e.g., families, university labs, business departments) usually have limited usable authentication needs. For these entities, existing authentication solutions either require excessive personal information (e.g., biometrics), do not distinguish each user (e.g., shared passwords), or lack security measures when the access key is compromised (e.g., physical keys). We propose an alternative solution by designing HWAuth: an inclusive group authentication system with a shared text that is uniquely identifiable for each user. Each user shares the same textual password, but individual handwriting styles of the text are used to distinguish each user. We evaluated the usability and security of our design through a user study with 30 participants. Our results suggest that (1) users who enter the same shared passwords are discernible from one another, and (2) that users were able to consistently login using HWAuth.","PeriodicalId":199099,"journal":{"name":"SIGGRAPH Asia 2021 Posters","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134045590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
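One matching step such a system could use, though not necessarily the one the authors implemented, is to compare a login handwriting trace of the shared password against each user's enrolled traces with dynamic time warping (DTW) and identify the writer as the closest match under a threshold. The DTW matcher, the threshold, and the synthetic traces below are assumptions for illustration.

```python
# Hypothetical sketch: identify the writer of a handwriting trace by DTW distance
# to enrolled traces, rejecting if no user is close enough.
import numpy as np

def dtw_distance(a, b):
    # a, b: (N, 2) and (M, 2) pen trajectories (x, y per sample)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m] / (n + m)

def identify(trace, enrolled, threshold=0.15):
    # enrolled: {user: [trace, ...]}; returns the best-matching user, or None to reject.
    scores = {u: min(dtw_distance(trace, t) for t in ts) for u, ts in enrolled.items()}
    user = min(scores, key=scores.get)
    return user if scores[user] < threshold else None

t = np.linspace(0, 1, 50)
alice = np.stack([t, np.sin(2 * np.pi * t)], axis=1)        # Alice's writing style
bob = np.stack([t, 0.5 * np.sin(2 * np.pi * t)], axis=1)    # Bob writes "flatter"
print(identify(alice + 0.01, {"alice": [alice], "bob": [bob]}))  # -> 'alice'
```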