ACM SIGGRAPH 2019 Posters — Latest Publications

Massively parallel layout generation in real time
ACM SIGGRAPH 2019 Posters Pub Date : 2019-07-28 DOI: 10.1145/3306214.3338596
Vineet Batra, Ankit Phogat, T. Beri
Abstract: Conceiving an artwork requires designers to create assets and organize (or lay out) them into a harmonious, self-narrating story. While creativity is fundamental to both aspects, the latter can be bolstered with automated techniques. We present the first true SIMD formulation of layout generation and leverage a CUDA-enabled GPU to scan millions of possible permutations, ranking them on aesthetic appeal using weighted parameters such as symmetry, alignment, density, and size balance. The entire process happens in real time using a GPU-accelerated implementation of the replica exchange Markov chain Monte Carlo method. Exploration of the design space is rapidly narrowed by performing distant jumps from poorly ranked layouts and fine-tuning the highly ranked ones. Iterations continue until the desired rank or convergence is achieved. In contrast to existing approaches, our technique generates aesthetically better layouts and runs more than two orders of magnitude faster.
Citations: 1
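To illustrate the replica exchange Markov chain Monte Carlo strategy the abstract describes, here is a minimal single-threaded sketch. The paper's GPU/SIMD implementation, its actual aesthetic metrics, and its weights are not reproduced; the `score` terms, temperatures, and layout representation below are hypothetical stand-ins. Hot replicas take distant jumps while cold replicas fine-tune, mirroring the narrowing strategy described above.

```python
import math
import random

# Hypothetical aesthetic score: a weighted sum of simple terms. The paper's
# real parameters (symmetry, alignment, density, size balance) and weights
# are not specified here, so these terms are illustrative stand-ins.
def score(layout, weights=(1.0, 1.0)):
    xs = sorted(x for x, _ in layout)
    alignment = -sum(abs(a - b) for a, b in zip(xs, xs[1:]))  # prefer aligned x
    cx = sum(x for x, _ in layout) / len(layout)
    symmetry = -abs(cx - 0.5)                                 # prefer centered
    return weights[0] * alignment + weights[1] * symmetry

def perturb(layout, step):
    # Move one random item; the step size plays the role of jump distance.
    i = random.randrange(len(layout))
    x, y = layout[i]
    new = list(layout)
    new[i] = (min(1.0, max(0.0, x + random.uniform(-step, step))),
              min(1.0, max(0.0, y + random.uniform(-step, step))))
    return new

def replica_exchange(n_items=5, temps=(0.01, 0.1, 1.0), iters=2000):
    random.seed(0)
    replicas = [[(random.random(), random.random()) for _ in range(n_items)]
                for _ in temps]
    for it in range(iters):
        # Metropolis step per replica: hotter replicas make distant jumps,
        # colder replicas fine-tune highly ranked layouts.
        for k, T in enumerate(temps):
            cand = perturb(replicas[k], step=T)
            delta = score(cand) - score(replicas[k])
            if delta > 0 or random.random() < math.exp(delta / T):
                replicas[k] = cand
        # Occasionally attempt a swap between adjacent temperatures.
        if it % 50 == 0:
            k = random.randrange(len(temps) - 1)
            s1, s2 = score(replicas[k]), score(replicas[k + 1])
            d = (1.0 / temps[k] - 1.0 / temps[k + 1]) * (s2 - s1)
            if d > 0 or random.random() < math.exp(d):
                replicas[k], replicas[k + 1] = replicas[k + 1], replicas[k]
    return max(replicas, key=score)

best = replica_exchange()
```

In the actual system each replica's proposal and scoring would map onto GPU threads, which is what makes scanning millions of permutations feasible in real time.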
Convergent turbulence refinement toward irrotational vortex
ACM SIGGRAPH 2019 Posters Pub Date : 2019-07-28 DOI: 10.1145/3306214.3338605
Xiaokun Wang, Sinuo Liu, X. Ban, Yanrui Xu, Jing Zhou, Cong-cong Wang
Abstract: We propose a detail-refinement method to enhance the visual effect of turbulence in irrotational vortices. We restore the angular velocity missing from the particles and convert it into linear velocity to recover turbulent detail lost to numerical dissipation.
Citations: 0
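The conversion step the abstract mentions amounts to the kinematic identity v = ω × r. The sketch below shows only that identity; the paper's actual scheme for restoring the angular velocity per particle is not given here, so `omega` is assumed already known.

```python
def cross(a, b):
    # Standard 3D cross product.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def angular_to_linear(omega, position, center=(0.0, 0.0, 0.0)):
    """Linear-velocity contribution v = omega x r of a restored angular
    velocity, with r the lever arm from the vortex center to the particle."""
    r = tuple(p - c for p, c in zip(position, center))
    return cross(omega, r)

# A particle at (1, 0, 0) spinning about the z-axis gains a purely
# tangential velocity.
v = angular_to_linear((0.0, 0.0, 2.0), (1.0, 0.0, 0.0))
```

Adding this tangential component back to each particle's linear velocity reinjects the rotational detail that numerical dissipation smooths away.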
Reconsideration of ouija board motion in terms of haptic illusions (IV): effect of haptic cue and another player
ACM SIGGRAPH 2019 Posters Pub Date : 2019-07-28 DOI: 10.1145/3306214.3338567
Takahiro Shitara, Vibol Yem, H. Kajimoto
Abstract: The Ouija board game is associated with a type of involuntary motion known as an ideomotor action. We sought to clarify the conditions under which this motion occurs by evaluating the effect that visual and haptic movement cues have on its occurrence. Using our lateral skin deformation device, we found that the simultaneous presentation of visual and tactile illusory motion and force produced larger ideomotor actions than when either modality was presented alone, an effect that was further potentiated by the presence of another player (an avatar).
Citations: 0
Effectiveness of facial animated avatar and voice transformer in elearning programming course
ACM SIGGRAPH 2019 Posters Pub Date : 2019-07-28 DOI: 10.1145/3306214.3338540
Rex Hsieh, Akihiko Shirai, Hisashi Sato
Abstract: The advancement of technology has brought eLearning to educational institutes. By supplementing traditional courses with eLearning materials, instructors can introduce new learning methods without completely deviating from standard education programs [Basogain et al. 2017]. Popular forms of eLearning include online courses [Aparicio and Bacao 2013], [Goyal 2012], video clips of lectures, and gamification of courses and materials [Plessis 2017]. This paper introduces and evaluates the performance of eLearning videos featuring anime-styled avatars (a.k.a. VTubers) speaking with vocoder-transformed audio, and compares them with traditional lecturer videos.
Citations: 2
Display methods of projection augmented reality based on deep learning pose estimation
ACM SIGGRAPH 2019 Posters Pub Date : 2019-07-28 DOI: 10.1145/3306214.3338608
Hyocheol Ro, Yoonjung Park, Junghyun Byun, T. Han
Abstract: In this paper, we propose three display methods for projection-based augmented reality. In spatial augmented reality (SAR), determining where information, objects, or contents are to be displayed is a difficult and important issue. We use deep learning models to estimate user pose and suggest ways to solve this issue based on the estimated data. Each method can be applied as appropriate to the various applications and scenarios.
Citations: 4
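A hypothetical sketch of the placement decision such a system must make: given an estimated user pose, choose the projection surface that best aligns with the user's view. The region names, the 2D geometry, and the cosine-alignment rule below are illustrative assumptions; the paper's deep-learning model and actual placement rules are not described in the abstract.

```python
import math

def choose_display_region(user_xy, gaze_dir, regions):
    """Pick the region whose center best aligns with the user's gaze.

    user_xy: (x, y) user position from the pose estimator (assumed given).
    gaze_dir: unit (dx, dy) viewing direction derived from head pose.
    regions: {name: (x, y) center} of candidate projection surfaces.
    """
    def alignment(center):
        vx, vy = center[0] - user_xy[0], center[1] - user_xy[1]
        norm = math.hypot(vx, vy) or 1.0
        # Cosine between the gaze direction and the direction to the surface.
        return (vx * gaze_dir[0] + vy * gaze_dir[1]) / norm
    return max(regions, key=lambda name: alignment(regions[name]))

# A user at the origin looking along +x is served by the surface ahead.
surfaces = {"wall": (0.0, 2.0), "desk": (3.0, 0.5), "floor": (0.0, -2.0)}
chosen = choose_display_region((0.0, 0.0), (1.0, 0.0), surfaces)
```

In a real SAR pipeline the pose estimate would come from a deep model (e.g. 2D/3D keypoints), and the projector would then warp content onto the chosen surface.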
MagniFinger
ACM SIGGRAPH 2019 Posters Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311859
Noriyasu Obushi, S. Wakisaka, Shunichi Kasahara, Atsushi Hiyama, Masahiko Inami
Abstract: By adulthood, our fingers have developed a high level of dexterity: sensory and motor skills that developers have only just started to exploit in modern interfaces. Previous research has unveiled the possibility of enhancing touch modalities by providing visual feedback of a magnified touch image. Yet most microscopes on the market require a complicated operating procedure, which makes it difficult to move the felt/observed area. To address this, we introduce MagniFinger, a finger-based microscope that lets users magnify the surface contacting their fingertips using two means of control: sliding and tilting. Tilting-based control enables more precise movement in micro-environments; in our experiments, it shortened the time needed to reach targets compared with simple sliding-based control.
Citations: 4
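One common way to make a tilt gesture precise at microscope scale is a rate-control mapping with a deadzone: small tilts do nothing, larger tilts move the viewed area proportionally faster. The abstract does not describe MagniFinger's actual control law, so the mapping, gain, and deadzone below are purely hypothetical.

```python
def tilt_to_velocity(tilt_deg, deadzone_deg=2.0, gain_um_per_deg=5.0):
    """Map a signed fingertip tilt (degrees) to a stage velocity (um/s).

    Hypothetical rate control: tilts inside the deadzone are treated as
    sensor noise; beyond it, speed grows linearly with tilt, so a gentle
    tilt yields slow, precise motion and a strong tilt yields fast travel.
    """
    if abs(tilt_deg) < deadzone_deg:
        return 0.0
    sign = 1.0 if tilt_deg > 0 else -1.0
    return sign * (abs(tilt_deg) - deadzone_deg) * gain_um_per_deg

# A 1-degree wobble is ignored; a 12-degree tilt moves at 50 um/s.
slow = tilt_to_velocity(1.0)
fast = tilt_to_velocity(12.0)
```

Rate control of this kind is a plausible reason tilting beats sliding for fine targeting: position control (sliding) transfers hand tremor directly, while rate control filters it through the deadzone.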
MagicPAPER
ACM SIGGRAPH 2019 Posters Pub Date : 2018-05-30 DOI: 10.1145/3279778.3279914
Qin Wu, Jiayuan Wang, Sirui Wang, Tong Su, Chenmei Yu
Abstract: As the most common writing material in daily life, paper is an important carrier of traditional painting, and it offers a more comfortable physical touch than electronic screens. In this study, we designed a shadow-art device for human-computer interaction called MagicPAPER, based on physical touch detection, gesture recognition, and reality projection. MagicPAPER consists of a pen, kraft paper, and several detection devices, including AirBar, Kinect, LeapMotion, and WebCam. To make MagicPAPER more engaging, we developed more than a dozen applications that let users experience and explore creative interactions on a desktop with a pen and a piece of paper. Our user study showed that MagicPAPER received positive feedback from many different types of users, particularly children.
Citations: 0